TRAFFIC SIGN RECOGNITION DEVICE
ABSTRACT:
A traffic signal recognition apparatus (100) including a control device (10) is provided. The control device (10) performs: a captured image acquisition function, which obtains an image captured around the own vehicle by a camera (20); an own-vehicle position acquisition function, which obtains the current position of the own vehicle; a target traffic signal specification function, which refers to map information to specify the traffic signal to be observed on the travel link on which the own vehicle travels, the map information including travel link data and traffic signal data, the travel link data including a connected link and travel direction information that are related to each other, each travel link being defined by a directed link including a start point, an end point and an up direction or a down direction, the connected link being connected to each of the start and end points, the travel direction information including straight ahead, right turn or left turn, and the traffic signal data including position information about the traffic signal related to the travel link; a traffic signal image area extraction function, which extracts an image area of the traffic signal based on the position information about the traffic signal and image capture property information about the camera (20); and a traffic signal content recognition function, which recognizes the content of the traffic signal based on an image of the traffic signal in the image area.
Publication number: BR112015024721B1
Application number: R112015024721-0
Filing date: 2014-02-25
Publication date: 2022-01-18
Inventor: Naoki Kojo
Applicant: Nissan Motor Co., Ltd.
DESCRIPTION:
TECHNICAL FIELD [001] The present invention relates to a traffic signal recognition device that specifies the traffic signal to be observed on the roadway on which the own vehicle travels and recognizes the content of that traffic signal. BACKGROUND [002] With respect to this type of apparatus, an apparatus that recognizes the state of a traffic signal is known. This apparatus presumes an image area that includes an image of the traffic signal within an image captured by a camera, based on the distance from the own vehicle to an intersection and on information about the height of the traffic signal, and recognizes the content of the traffic signal based on the image in that image area (see Patent Document 1 below). PRIOR ART DOCUMENT [Patent Document 1] Japanese Patent No. 3857698 SUMMARY OF THE INVENTION PROBLEMS TO BE SOLVED BY THE INVENTION [003] However, where several traffic signals are provided at an intersection, difficulties may occur. For example, an image of a traffic signal that should not be observed by the own vehicle may be captured in the image taken by the on-board camera, or the traffic signal that should be observed may not be captured at all. Which of these occurs can depend on the installation location, the traffic lane, the direction of travel of the own vehicle, or the like. The process according to the prior art therefore has the problem that the traffic signal to be observed may be misspecified, so that the content presented by the traffic signal is recognized incorrectly. [004] An object of the present invention is to provide a traffic signal recognition apparatus that specifies the traffic signal to be observed by the own vehicle and recognizes its content with high accuracy, even in difficult situations such as when an image of a traffic signal that should not be observed is captured by the camera of the own vehicle, or when the traffic signal that should be observed is not captured by it. MEANS TO SOLVE PROBLEMS [005] The traffic signal recognition apparatus of the present invention refers to map information to specify the traffic signal to be observed when traveling on the next travel link, based on the travel link to which the current position of the own vehicle belongs. The map information includes travel link data and traffic signal data. The travel link data are defined for each traffic lane by a directed link and include a connected link and travel direction information that are related to each other. The traffic signal data are related to the travel link data. EFFECT OF THE INVENTION [006] In accordance with the present invention, the map information includes travel link data and traffic signal data that are related to each other. Each travel link is specified for each traffic lane by a directed link including a start point, an end point and an up direction or a down direction, and is related to a connected link and to travel direction information. The traffic signal data include data on the traffic signal to be observed when traveling on that travel link. The map information is thus used to specify the traffic signal to be observed when traveling on each travel link.
Therefore, even in a complex traffic situation in which several traffic signals are provided, the traffic signal to be observed on the travel link of the own vehicle can be correctly specified. BRIEF DESCRIPTION OF THE DRAWINGS [007] Figure 1 is a block diagram of a drive support system according to an embodiment of the present invention. [008] Figure 2 is a view showing an example of map information according to the present embodiment. [009] Figure 3 is a view describing a traffic situation in which several traffic signals are provided. [010] Figure 4 is a view describing a process of specifying the traffic signal to be observed. [011] Figure 5 is a view describing the timing of the presumption process for a travel link. [012] Figure 6 is a flowchart showing a control procedure of the drive support system according to the present embodiment. MODE(S) FOR CARRYING OUT THE INVENTION [013] In the following, an embodiment of the present invention will be described with reference to the drawings. The present embodiment exemplifies a case in which the traffic signal recognition apparatus according to the present invention is applied to a drive support system. [014] Figure 1 is a block diagram of a drive support system 1 comprising a traffic signal recognition apparatus 100 according to the invention. The drive support system 1, and the traffic signal recognition apparatus 100 included in it, are equipped in a vehicle. The traffic signal recognition apparatus 100 specifies the traffic signal to be observed by the own vehicle and recognizes the information content presented by that traffic signal. [015] The drive support system 1 comprises the traffic signal recognition apparatus 100, a navigation device 50, a vehicle controller 60 and a drive system 70, which controls the driving of the vehicle. The drive support system 1 controls the driving/braking of the own vehicle via the vehicle controller 60 and the drive system 70, based on the signal content recognized by the traffic signal recognition apparatus 100, and provides the driver of the own vehicle with supporting information for driving via the navigation device 50. [016] As shown in Figure 1, the traffic signal recognition apparatus 100 of the present embodiment comprises a control device 10, a camera 20, a position detection device 30 and a database 40. [017] The camera 20, which is mounted on the own vehicle, captures an image around the own vehicle. The camera 20 is, for example, a camera that includes an image capturing element such as a CCD. The lens used may be a telescopic lens having a narrow viewing angle, capable of capturing an image of a distant preceding vehicle; a fisheye lens having a wide viewing angle, so as to respond to curves and variations of inclination; or a lens for omnidirectional cameras, which can capture full-surround images. The position at which the camera 20 is fixed is not limited; in the present embodiment the camera 20 is provided facing forward, in the vicinity of the interior rear-view mirror of the own vehicle. The example described below uses a standard lens having a viewing angle of about 25 to 50°. Image capture property information 42, which includes internal parameters such as the lens distortion of the camera 20 and external parameters representing the mounting position of the camera on the vehicle, can be stored in a ROM in advance.
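Paragraph [017] names the two groups of parameters in the image capture property information 42 but gives no concrete representation. The following is a minimal sketch, assuming a pinhole camera model; the class name, field names and numeric values are illustrative and are not taken from the patent.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ImageCaptureProperties:
    """Illustrative container for the 'image capture property information 42'."""
    # Internal (intrinsic) parameters: focal lengths, principal point, lens distortion.
    fx: float = 1200.0          # focal length in pixels (horizontal)
    fy: float = 1200.0          # focal length in pixels (vertical)
    cx: float = 640.0           # principal point, x
    cy: float = 360.0           # principal point, y
    dist_coeffs: np.ndarray = field(default_factory=lambda: np.zeros(5))  # lens distortion
    # External (extrinsic) parameters: camera pose in the vehicle coordinate frame.
    R_cam_in_vehicle: np.ndarray = field(default_factory=lambda: np.eye(3))          # mounting rotation
    t_cam_in_vehicle: np.ndarray = field(default_factory=lambda: np.array([1.2, 0.0, 1.3]))  # mounting position [m]

    def K(self) -> np.ndarray:
        """3x3 pinhole intrinsic matrix built from the stored internal parameters."""
        return np.array([[self.fx, 0.0, self.cx],
                         [0.0, self.fy, self.cy],
                         [0.0, 0.0, 1.0]])
```

The intrinsic matrix built by `K()` is the form assumed later when the position of the target signal is projected into the captured image.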
[018] The camera 20 has, in the present embodiment, a zoom function, which increases and decreases the image capture magnification when capturing an image. In the present embodiment this includes an image capture magnification control function, in which the magnification is controlled based on the ratio of the surface of the image area of the specified traffic signal to the captured image, as described later. [019] The camera 20 includes, in the present embodiment, a movement mechanism 21, which can vary the imaging direction of the camera 20. In the present embodiment the movement mechanism 21 has a swivel function by which the camera 20 can be rotated about the vertical direction of the own vehicle. The movement mechanism 21 may also be configured as a supporting device separate from the camera 20. The camera 20 used in the present embodiment is a camera with an integrated movement mechanism, such as a PTZ camera (a camera capable of pan/tilt/zoom functions), which has a mechanism for rotating the camera body. The camera 20 may be, but is not limited to, a PTZ camera arranged on the upper surface of the dashboard of the own vehicle so that it can capture the view ahead of the own vehicle. The specific form of the movement mechanism 21 is not particularly limited; it may, for example, provide a mirror in front of the image capturing element and drive the mirror in order to vary the imaging direction. The external parameters, including the mounting position of the camera 20 on the vehicle, and further parameters describing how much the mounting attitude of the camera 20 can be varied by operation of the movement mechanism 21, can be calculated beforehand and stored in the database 40. In the present embodiment the movement mechanism 21 varies the imaging direction of the camera 20 based on the position of the image area of the specified traffic signal within the captured image. [018] The position detection device 30, which comprises a GPS (Global Positioning System) unit, detects the current position (latitude/longitude) of the moving vehicle. The current position of the own vehicle may also be obtained from the position detection device 30 included in the navigation device 50 mounted on the own vehicle. In the present embodiment the position detection device 30 also has a function of detecting the attitude (orientation) of the own vehicle. To detect the attitude, the position detection device 30 can comprise two GPS receivers and calculate the direction of the own vehicle, or it can further comprise an azimuth meter for calculating the direction of the own vehicle. [019] The position detection device 30 can also calculate the attitude of the own vehicle based on the image captured by the camera 20. An example of such a process is as follows. When the own vehicle is traveling on a straight roadway, the viewpoint of the image captured by the camera 20 is converted to one looking down on the vehicle from above, producing a bird's-eye-view image in which the own vehicle is seen from directly overhead. If lane lines (lane markings) or curbs are recognized in the bird's-eye-view image, the slope of those lines or curbs can be used to calculate the direction of the vehicle.
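The text states only that the slope of the recognized lane lines or curbs can be used to calculate the vehicle direction. One plausible reading, assuming the bird's-eye-view image is metric with the x axis lateral and the y axis pointing forward, is to average the angles of the detected lane-line segments; the function below is an illustrative sketch under that assumption, not the patent's method.

```python
import math

def heading_from_lane_lines(segments):
    """Estimate the vehicle yaw relative to the lane from lane-marking segments
    detected in a bird's-eye-view image.

    segments: list of ((x1, y1), (x2, y2)) tuples in metres, y pointing forward
    and x to the right.  Returns the yaw offset in radians (0 means the vehicle
    is aligned with the lane).
    """
    angles = []
    for (x1, y1), (x2, y2) in segments:
        # Angle of the segment measured from the forward (y) axis.
        angles.append(math.atan2(x2 - x1, y2 - y1))
    if not angles:
        raise ValueError("no lane-line segments detected")
    return sum(angles) / len(angles)

# Example: two lane markings tilted roughly 2 degrees to the right.
print(heading_from_lane_lines([((0.0, 0.0), (0.35, 10.0)),
                               ((3.5, 0.0), (3.85, 10.0))]))
```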
The position and attitude of the own vehicle can also be calculated using SLAM (Simultaneous Localization and Mapping), a technique for estimating self-location from images that was known in the art at the time of filing of this application. Specifically, three-dimensional positions of image feature points can be stored in the database 40 beforehand, and the image feature points found in the image captured by the camera 20 can be matched against the stored feature points in order to calculate the position and attitude of the own vehicle. [020] The database 40 stores map information 41. The map information 41 may alternatively be stored in the navigation device 50, provided it is accessible by the control device 10. In the present embodiment the map information 41 includes travel link data 411 and traffic signal data 412, which are associated with the travel link data 411. [021] Each travel link in the travel link data 411 is defined for each traffic lane by a directed link, which includes a start point, an end point, and an up direction or a down direction. The directed link is thus defined, at least, by the start and end points (i.e. its extent) and by information about its direction. In the present embodiment each travel link (directed link) is related to a connected link, which is connected to each of its start and end points, and also to travel direction information, which specifies a direction of travel such as straight ahead, right turn or left turn. In addition, traffic signal data are related to the travel link: these are data including position information about the traffic signal to be observed when traveling on each travel link. [022] Figure 2 shows an example of the map information 41. As shown in Figure 2, the map information 41 of the present embodiment includes: points specified by map coordinate values such as latitude and longitude, as in usual map information; nodes, each assigned an identifier (ID); travel links, each assigned an identifier (ID); traffic signals, each assigned an identifier (ID); and point-of-interest (POI) information for each point. [023] The information about a node is specified by map coordinate values and holds the identifiers of the travel links connected to that node and the identifiers of the nodes at branching portions. A node can be established at the centre of a traffic lane in the width direction, but at a complex intersection having multiple traffic lanes, the position of a node can instead be defined using a track along which a vehicle has actually traveled. [024] The travel link data include: the node identifiers, specified by map coordinate values, that define the start and end points; the travel direction of each travel link, such as straight ahead, right turn or left turn; a classification of each travel link, i.e. whether it is a merging link or a branching link (a link to be joined or a link that branches); the identifier of another travel link connected to the travel link; and the identifiers of the travel links at branching portions.
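The map information of paragraphs [020] to [024] is described only in terms of its data items; the patent does not fix an encoding. A minimal sketch of one possible representation is given below — the class and field names are chosen here for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    node_id: str
    lat: float
    lon: float

@dataclass
class TrafficSignal:
    signal_id: str
    lat: float
    lon: float
    height_m: float                 # installation height
    controlled_link_ids: List[str]  # travel links whose traffic this signal controls
    kind: str                       # e.g. "red/yellow/green", "pedestrian", "arrow"

@dataclass
class TravelLink:
    """Directed, lane-level travel link (paragraphs [021]-[024])."""
    link_id: str
    start_node: str
    end_node: str
    direction: str                  # "up" or "down" along the roadway
    travel_direction: str           # "straight", "right_turn" or "left_turn"
    link_class: str                 # "merge" or "branch"
    connected_link_ids: List[str]   # links joined at the end node
    target_signal_id: str           # the single signal to observe on this link
```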
[025] The identifier of the traffic signal to be observed is related to each travel link. In the present embodiment, travel links are assigned different IDs for each traffic lane, and are treated as different travel links, even when the lanes belong to the same roadway. This allows a travel link for a right-turn lane and a travel link for a straight-ahead lane to be processed as different travel links although they lie on the same roadway. Consequently, even when the information about the start and end points is the same, the traffic signal to be observed on the right-turn travel link and the traffic signal to be observed on the straight-ahead travel link are different, and are therefore stored as different items of information. According to the present embodiment, travel links are also defined as different travel links when a traffic lane bends or when traffic lanes intersect each other and the traffic signals to be observed differ. That is, in the travel link information according to the present invention, a travel link is defined for each traffic lane, and one traffic signal to be observed is related to one travel link. Therefore, even when multiple traffic signals are provided along the same traffic lane, the multiple traffic signals are not all related to a single travel link in the travel link data of the present embodiment. Once the travel link on which the own vehicle travels is determined, the traffic signal to be observed can be uniquely specified, because one traffic signal to be observed is related to one travel link, and the travel link is defined for each traffic lane. The traffic signal data include: height information; the installation position specified by map coordinate values; the identifier of the travel link whose traffic is controlled by the traffic signal, i.e. the travel link for which the traffic signal is to be observed by a vehicle traveling on it; and type information about the traffic signal. [026] The type information about the traffic signal, as referred to in this description, indicates whether the traffic signal is one that displays red, yellow or green for vehicles, one that displays red or green for pedestrians, or one that displays right and left turns by arrows. [027] Figure 3 is a view showing examples of travel link data and traffic signal data at a real intersection. As shown in Figure 3, a travel link LKD1 is established for the roadway on which the travel link LKU1 for the own vehicle V lies. The travel link LKD1 is the travel link for the opposite (down) direction, and the two receive different identifiers as different travel links. On the travel-direction side of the travel link LKU1, that is, ahead of the own vehicle, there are three travel links LKU2A, LKU2B and LKU2C, which are connected to the travel link LKU1 at a node ND2. The travel link LKU2A is the travel link for the straight-ahead lane, the travel link LKU2B is the travel link for the right-turn lane, and the travel link LKU2C is the travel link for the left-turn lane. The travel links LKU2A and LKU2C are defined as different travel links because the links connected to them are different.
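Using the identifiers of Figure 3, the following hypothetical records show how such lane-level directed links make the lookup of the target signal a single dereference. Only the association of SG1 with LKU1 is stated in the text; the node ND3, the signal assignments of LKU2A to LKU2C and the connected-link topology below are placeholders added for illustration.

```python
# Travel-link records keyed by ID; each lane-level directed link names the one
# signal to observe.  Identifiers follow Figure 3, other values are placeholders.
links = {
    "LKU1":  {"end_node": "ND2", "travel_direction": "straight",
              "connected": ["LKU2A", "LKU2B", "LKU2C"], "signal": "SG1"},
    "LKU2A": {"end_node": "ND3", "travel_direction": "straight",
              "connected": ["LKU3-ST"], "signal": "SG2"},
    "LKU2B": {"end_node": "ND3", "travel_direction": "right_turn",
              "connected": ["LKU3-TR"], "signal": "SG2"},
    "LKU2C": {"end_node": "ND3", "travel_direction": "left_turn",
              "connected": ["LKU3-TL"], "signal": "SG2"},
}

current_link_id = "LKU1"                 # link the own vehicle is travelling on
print(links[current_link_id]["signal"])  # -> "SG1", even though SG2 is also in view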
[028] As shown in Figure 4, when the scene of Figure 3 is viewed from the front of the own vehicle V, both traffic signals SG1 and SG2 fall within the field of view (image capture area) at the same time. It can then be difficult to determine which traffic signal the own vehicle V should observe, and depending on the image-recognition technique used, an erroneous process may be conducted (a traffic signal that should not be observed may be specified) when specifying one of the traffic signals. Note that the z axis shown in Figure 4 is the imaging direction. [029] Although the details are described later, the traffic signal SG1 to be observed can be specified, and an image area R including this traffic signal SG1 can be extracted, even when the two traffic signals appear simultaneously in the image capture area, as shown in Figure 4. This is because the traffic signal recognition apparatus 100 according to the present embodiment is configured so that each travel link is related to the traffic signal to be observed, and in the example shown in Figure 3 only the traffic signal SG1 is related to the travel link LKU1 of the own vehicle V. [030] In other words, according to the present embodiment, the traffic signal to be observed can be specified for each travel link by referring to the travel link data and the traffic signal data, even in a situation in which multiple traffic signals are provided at an intersection and the traffic situation is complicated. Next, the description turns to the traffic signal recognition process according to the present embodiment, which specifies the traffic signal to be observed and recognizes the content presented by it. [031] The control device 10 of the traffic signal recognition apparatus 100 is a computer comprising: a ROM (Read Only Memory) 12, which stores a program for recognizing the content of the information presented by the traffic signal; a CPU (Central Processing Unit) 11 as an operating circuit, which executes the program stored in the ROM so as to function as the traffic signal recognition apparatus 100; and a RAM (Random Access Memory) 13, which serves as accessible storage. [032] The control device 10 of the traffic signal recognition apparatus 100 according to the present embodiment has a captured image acquisition function, an own-vehicle position acquisition function, a target traffic signal specification function, a traffic signal image area extraction function, and a traffic signal content recognition function. In the present embodiment the control device 10 implements these functions by software, in cooperation with the hardware described above. [033] Each of the functions performed by the traffic signal recognition apparatus 100 according to the present embodiment is described below. [034] First, the captured image acquisition function performed by the control device 10: the control device 10 obtains the image captured around the own vehicle by the camera 20. [035] Next, the own-vehicle position acquisition function performed by the control device 10: the control device 10 obtains the current position of the own vehicle detected by the position detection device 30. The control device 10 also obtains, when available, attitude information about the own vehicle detected by the position detection device 30.
[036] The target traffic signal specification function performed by the control device 10 is a function of referring to the map information 41 and specifying the traffic signal to be observed by using the travel link to which the current position of the own vehicle belongs. The control device 10 presumes the travel link on which the own vehicle is traveling, using the travel link data obtained from the map information 41 and the current position of the own vehicle obtained from the position detection device 30. The control device 10 presumes, as the travel link of the own vehicle, the travel link associated with the point or area to which the current position of the own vehicle belongs. The control device 10 may presume, as the travel link of the own vehicle, the travel link separated from the current position by the minimum distance (the closest travel link). When an attitude including the travel direction of the own vehicle has been obtained, the control device 10 may presume, as the travel link of the own vehicle, the closest travel link among those for which the difference between the travel direction of the own vehicle and the direction of the vector directed from the start point to the end point of the travel link is not greater than a predetermined threshold, for example 90°. [037] When the travel link data include information about the connected links that are joined to a travel link (as the route along which the vehicle will travel next), the candidates for the next travel link are restricted to the links connected to the current travel link. Among the travel links thus limited, the travel link separated from the current position by the minimum distance can therefore be selected as the next travel link. This gives a correct presumption of the travel link of the own vehicle even in a complex traffic situation in which the traffic signal to be observed differs for each travel link, that is, for each route the own vehicle may subsequently follow. In this way, the travel link for the own vehicle turning right at an intersection and the travel link for the own vehicle going straight through the intersection can be presumed without fail. [038] In the present embodiment, the control device 10 refers to the travel link data 411 for the travel link of the own vehicle and calculates the next travel link when it determines that the own vehicle has reached the end point of the current travel link. Specifically, when the distance between the own vehicle and the end point of the directed link of the current travel link becomes less than a predetermined value, the control device 10 refers to the travel link data 411 and calculates the connected link that is joined to the end point of the travel link to which the current position of the own vehicle belongs.
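Paragraphs [036] to [038] describe the presumption of the travel link in words only. The sketch below illustrates one way to implement the two rules stated there — the 90° heading check and the minimum-distance choice — in a local metric frame; the function names and the data layout are assumptions, not an API defined by the patent.

```python
import math

def pick_current_link(vehicle_xy, vehicle_heading, links, max_heading_diff=math.pi / 2):
    """Choose the travel link the own vehicle is presumed to be on: the closest
    directed link whose start->end direction differs from the vehicle heading
    by no more than the threshold (90 degrees in paragraph [036]).

    links: iterable of (link_id, (sx, sy), (ex, ey)) in a local metric frame.
    """
    best_id, best_dist = None, float("inf")
    for link_id, (sx, sy), (ex, ey) in links:
        link_heading = math.atan2(ey - sy, ex - sx)
        diff = abs(math.remainder(vehicle_heading - link_heading, 2 * math.pi))
        if diff > max_heading_diff:
            continue                       # link points the wrong way
        dist = _point_to_segment(vehicle_xy, (sx, sy), (ex, ey))
        if dist < best_dist:
            best_id, best_dist = link_id, dist
    return best_id

def _point_to_segment(p, a, b):
    """Distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    t = ((px - ax) * vx + (py - ay) * vy) / max(vx * vx + vy * vy, 1e-9)
    t = max(0.0, min(1.0, t))
    qx, qy = ax + t * vx, ay + t * vy
    return math.hypot(px - qx, py - qy)
```

For the next-link presumption of [037], the same function can simply be called with the candidate set restricted to the connected links of the current travel link.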
[039] More specifically, as shown in Figure 5, the distance to the end-point node is evaluated with respect to the vector directed from the start point to the end point of the travel link. When the distance D from the current position VL of the own vehicle V to the end-point node ND2 of the travel link LKU1 becomes less than a predetermined value, the control device 10 presumes the next travel link for the own vehicle V, so that the presumption is completed in time with the own vehicle V reaching the end point of the travel link LKU1. [040] In the present embodiment, the control device 10 refers to the travel link data 411 and/or the traffic signal data 412, which are related to each other in the map information 41, and specifies the traffic signal to be observed that is related to the presumed travel link of the own vehicle. The control device 10 identifies the ID of the travel link on which the own vehicle will travel and refers to the traffic signal data 412 related to that travel link ID in the travel link data 411, thereby specifying the traffic signal to be observed by the own vehicle when traveling on that travel link. Furthermore, the control device 10 obtains from the traffic signal data 412 the three-dimensional position information, such as the installation position and height of the specified traffic signal, as well as its type. [041] Next, the traffic signal image area extraction function performed by the control device 10 will be described. The control device 10 extracts an image area of the traffic signal from the image captured by the camera 20, based on the position information about the specified traffic signal and the image capture property information 42 about the camera 20. The control device 10 calculates the relative position of the traffic signal with respect to the camera 20 from a geometric relationship, using the own-vehicle position obtained by the own-vehicle position acquisition function, the three-dimensional position information about the traffic signal to be observed, specified by the target traffic signal specification function, and the external parameters (image capture property information 42), which include the mounting position of the camera 20 on the own vehicle. [042] In the present embodiment, the control device 10 extracts the image area corresponding to the traffic signal image from the captured image while also taking into account the attitude of the own vehicle obtained by the own-vehicle position acquisition function. Considering the attitude of the own vehicle in this way allows an accurate extraction of the image area in which the image of the traffic signal appears. The specific processing is not limited, but it can be designed so that a coordinate conversion is performed according to the momentary attitude of the own vehicle, to obtain, from the captured image, the image area in which the traffic signal is captured at that attitude.
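The geometric relationship of paragraph [041] is not written out in the patent. Assuming a pinhole camera and the frame conventions noted in the comments, the relative position and the projection into the captured image could be computed roughly as follows; this is a sketch under those assumptions, not the patent's formula.

```python
import numpy as np

def project_signal(signal_world, vehicle_pose, cam_extrinsic, K):
    """Project the 3-D installation position of the target signal into the image.

    signal_world  : (3,) position of the signal in a local world frame (metres).
    vehicle_pose  : (R_wv, t_wv) rotation/translation of the vehicle in that frame,
                    i.e. x_world = R_wv @ x_vehicle + t_wv.
    cam_extrinsic : (R_vc, t_vc) camera mounting pose in the vehicle frame.
    K             : 3x3 intrinsic matrix; the camera z axis is assumed to point forward.
    Returns pixel coordinates (u, v) of the signal centre.
    """
    R_wv, t_wv = vehicle_pose
    R_vc, t_vc = cam_extrinsic
    p_vehicle = R_wv.T @ (np.asarray(signal_world, dtype=float) - t_wv)  # world -> vehicle
    p_camera = R_vc.T @ (p_vehicle - t_vc)                               # vehicle -> camera
    if p_camera[2] <= 0:
        raise ValueError("signal is behind the camera")
    uvw = K @ p_camera
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

The traffic signal image area would then be a rectangle placed around the returned pixel, sized as discussed in paragraph [046].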
[043] When only the position of the own vehicle is obtained, that is, when attitude information about the own vehicle cannot be obtained, the control device 10 can use the travel direction of the travel link of the own vehicle as the attitude of the own vehicle. In that case, the control device 10 calculates the attitude of the own vehicle using the travel direction of the travel link. For example, for a travel link having curvature, such as a right turn or a left turn, the attitude of the vehicle at the start-point node and the attitude of the vehicle at the end-point node are included in advance in the travel link data of the map information 41, and the control device 10 calculates the distances from the position of the own vehicle to the start-point node and to the end-point node of the travel link. These calculated distances, which reflect how far the own vehicle has progressed along the travel link, can then be used by the control device 10 to calculate how the attitude of the vehicle varies at each position from the start-point node to the end-point node of the travel link. [044] As described above, the movement mechanism 21 of the camera 20 varies the imaging direction of the camera 20 based on the position of the traffic signal image area in the captured image. With respect to the position of the traffic signal image area used when varying the imaging direction of the camera 20, it is preferable not only to calculate the relative position of the traffic signal with respect to the current position of the camera 20, but also to take into account motion information about the own vehicle, such as its speed and yaw rate, and the time estimated to be required for the process of calculating the traffic signal image area, so as to calculate the traffic signal image area that will be specified in the next processing period. The motion information about the own vehicle is obtained from the vehicle controller 60. After predicting the traffic signal image area to be used in the next traffic signal specification process, the movement mechanism 21 adjusts the imaging direction of the camera 20 so that the camera 20 can capture the predicted traffic signal image area. This allows the subsequent specification of the traffic signal to be carried out correctly. [045] Also as described above, the camera 20 varies its image capture magnification based on the ratio of the traffic signal image area to the captured image. With respect to the ratio occupied by the traffic signal image area in the whole captured image, used when varying the image capture magnification of the camera 20, it is likewise preferable not only to calculate the position of the traffic signal relative to the current position of the camera 20, but also to take into account the motion information about the own vehicle, such as its speed and yaw rate, and the time estimated to be required for the process of calculating the traffic signal image area, so as to calculate the traffic signal image area that will be specified in the next processing period. After predicting the traffic signal image area to be used in the next specification process, the camera 20 adjusts its image capture magnification so that it can clearly capture the predicted traffic signal image area. This allows the subsequent recognition of the content presented by the traffic signal to be carried out correctly.
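Paragraphs [044] and [045] only say that the vehicle speed, the yaw rate and the estimated processing time are "considered" when predicting the image area for the next period. A crude constant-turn-rate prediction of the own-vehicle pose, which could then be fed into the projection sketched above, might look like the following; this is an assumption for illustration, not the patent's formula.

```python
import math

def predict_vehicle_pose(x, y, heading, speed, yaw_rate, dt):
    """Dead-reckon the own-vehicle pose dt seconds ahead (constant speed and
    yaw rate), so that the traffic-signal image area for the next processing
    period can be computed before the camera is re-aimed or re-zoomed."""
    if abs(yaw_rate) < 1e-6:
        return (x + speed * dt * math.cos(heading),
                y + speed * dt * math.sin(heading),
                heading)
    # Exact integration of a constant-turn-rate motion model.
    new_heading = heading + yaw_rate * dt
    x += speed / yaw_rate * (math.sin(new_heading) - math.sin(heading))
    y += speed / yaw_rate * (-math.cos(new_heading) + math.cos(heading))
    return x, y, new_heading

# Example: 14 m/s, gentle right turn, 50 ms processing period (paragraph [048]).
print(predict_vehicle_pose(0.0, 0.0, 0.0, 14.0, -0.1, 0.05))
```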
[046] The control device 10 extracts the image area of the captured image in which the image of the specified traffic signal appears (the traffic signal image area), using the relative position of the traffic signal with respect to the camera 20 and the internal parameters (image capture property information 42), including the lens distortion information of the camera 20. The centre coordinate of the traffic signal image area can be obtained by projecting the three-dimensional coordinates of the central part of the traffic signal (for example, the central part of the yellow lamp in the case of a red-yellow-green traffic signal) into the coordinate system of the image captured by the camera 20. The size of the traffic signal image area can be set to a fixed size or adjusted according to the type of traffic signal, or it can be varied according to the distance from the own vehicle to the traffic signal. Although not particularly limited, the traffic signal image area can be set larger as the distance to the traffic signal becomes longer and smaller as the distance to the traffic signal becomes shorter, so that the size of the traffic signal image area in the captured image remains within a predetermined range. The size of the traffic signal image area can also be varied according to the detection accuracy of the position detection device 30 of the own vehicle. [047] Finally, the traffic signal content recognition function performed by the control device 10 in the present embodiment will be described. In the present embodiment, the control device 10 recognizes the content presented by the traffic signal based on the image of the traffic signal included in the extracted image area. The control device 10 performs a process of recognizing the content presented by the traffic signal on the traffic signal image area extracted by the traffic signal image area extraction function, and obtains the information presented by the traffic signal. The process of recognizing the content presented by the traffic signal is not particularly limited, but it can be configured to: store in advance a template image for each item of information presented by the traffic signal; perform template matching, generally known in the field of image processing, to locate the traffic signal within the traffic signal image area; and then check, by colour recognition, the lighting state of each lamp (e.g. red/yellow/green) displayed by the traffic signal, so as to recognize the lit colour as the information presented by the traffic signal. The process of recognizing the traffic signal content is not particularly limited, and any process known in the art at the time of filing of the present application can be used as appropriate.
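Paragraph [047] leaves the colour-recognition step open. One simple stand-in, using OpenCV and illustrative HSV thresholds that would need tuning for a real camera, is sketched below; it is not the method claimed by the patent.

```python
import cv2
import numpy as np

# Rough HSV ranges for the three lamp colours (OpenCV hue runs 0-179).
# The thresholds are illustrative placeholders, not values from the patent.
HSV_RANGES = {
    "red":    [((0, 100, 100), (10, 255, 255)), ((170, 100, 100), (179, 255, 255))],
    "yellow": [((20, 100, 100), (35, 255, 255))],
    "green":  [((45, 100, 100), (90, 255, 255))],
}

def lighting_colour(roi_bgr):
    """Return the lamp colour with the most pixels inside the extracted
    traffic-signal image area (a crude stand-in for the colour-recognition
    step described in paragraph [047])."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    counts = {}
    for name, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        counts[name] = int(cv2.countNonZero(mask))
    return max(counts, key=counts.get)
```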
[048] The control procedure of the traffic signal recognition apparatus 100 according to the present embodiment will now be described with reference to the flowchart of Figure 6. The control device 10, which has the traffic signal recognition function, repeatedly executes the process shown in Figure 6 at a specified time interval, for example about 50 ms. [049] As shown in Figure 6, in step S110 the control device 10 obtains the current position of the own vehicle detected by the position detection device 30. [050] In step S120, the control device 10 determines whether the own vehicle has reached the end-point node of the travel link, based on the position of the own vehicle obtained in step S110 and on the information about the travel link on which, in the previous processing, the own vehicle was determined to be traveling (the travel link to which the obtained vehicle position belongs). The determination may use the process described with reference to Figure 5. If it is determined that the own vehicle has reached the end-point node, the routine proceeds to step S130; if it is determined that the own vehicle has not reached the end-point node, the routine repeats the processing of steps S110 and S120. [051] In step S130, the control device 10 calculates the travel link on which the own vehicle travels, based on the current position of the own vehicle obtained in step S110 and the travel link data 411 included in the map information 41, and the routine proceeds to step S140. [052] In step S140, the control device 10 uses the identifier (ID) of the travel link calculated in step S130 and the map information 41 to specify the traffic signal to be observed, based on the travel link to which the current position of the own vehicle belongs. [053] In step S150, the control device 10 calculates the relative position of the traffic signal with respect to the camera 20, using: the position information, obtained by reference to the traffic signal data 412 of the map information 41, about the traffic signal specified in step S140; the current position of the own vehicle obtained in step S110; and the external parameters (the image capture property information 42) of the camera 20 mounted on the own vehicle. [054] In step S160, the control device 10 extracts the image area of the traffic signal from the image captured by the camera 20, based on the position information about the specified traffic signal and the image capture property information 42 about the camera 20. [055] In step S170, the control device 10 recognizes the content of the traffic signal based on the image of the traffic signal in the image area. [056] In step S180, the control device 10 sends the recognition result to the navigation device 50, the vehicle controller 60 or the drive system 70. These on-board devices perform drive support in accordance with the recognized content of the traffic signal. Although not particularly limited, the drive support system 1 according to the present embodiment can, for example, perform control to decrease the speed of the own vehicle when a stop indication or a caution indication displayed by the traffic signal is recognized, and it can announce such an indication by voice. [057] In step S220, the control device 10 varies the imaging direction of the camera 20 based on the position, obtained in step S160, of the traffic signal image area in the captured image. This is because, when the traffic signal image is shifted away from the centre of the captured image, the imaging direction of the camera 20 may need to be corrected so that the entire traffic signal image can be obtained. [058] In the present embodiment, the image area of the traffic signal obtained in step S160 is used in step S210 to predict the image area of the traffic signal for the process to be performed in the next period, and the imaging direction of the camera 20 is varied in step S220 based on the position of the predicted traffic signal image area. Since the vehicle is moving, the position of the traffic signal image area in the present processing period and its position in the next processing period are assumed to differ. In the present embodiment, motion information about the own vehicle, such as its speed and yaw rate, and the time estimated to be required for the process of calculating the traffic signal image area, are therefore taken into account to calculate the traffic signal image area to be specified in the next period.
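Figure 6 is described step by step in paragraphs [048] to [058]. The following loop mirrors that flow in simplified form; every method on the `controller` object is a hypothetical stand-in for the corresponding function of the control device 10, not an interface defined by the patent.

```python
import time

def recognition_loop(controller, period_s=0.05):
    """Simplified main loop mirroring the flow of Figure 6 (S110-S220)."""
    while True:
        pos = controller.get_vehicle_position()                   # S110
        if controller.reached_end_of_link(pos):                   # S120
            link = controller.estimate_travel_link(pos)           # S130
            signal = controller.lookup_target_signal(link)        # S140
            rel = controller.relative_signal_position(signal, pos)  # S150
            roi = controller.extract_signal_image_area(rel)       # S160
            state = controller.recognise_signal_content(roi)      # S170
            controller.publish_result(state)                      # S180
            predicted = controller.predict_signal_area(roi)       # S210
            controller.aim_camera(predicted)                      # S220
        time.sleep(period_s)                                      # ~50 ms cycle ([048])
```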
[059] The traffic signal recognition apparatus 100 according to the present embodiment has the following advantageous effects. (1) The traffic signal recognition apparatus 100 of the present embodiment refers to the map information 41 to specify the traffic signal to be observed, by using the travel link to which the current position of the own vehicle belongs. The map information 41 includes the travel link data 411 and the traffic signal data 412; the travel link data 411 are defined by directed links and related to the information about the connected link and the travel direction, and the traffic signal data 412 are related to the travel link data. Therefore, even in a complex traffic situation in which several traffic signals are present, the travel link of the own vehicle is correctly presumed, so that the traffic signal to be observed can be correctly specified. [060] In the present embodiment, the travel link is defined by a directed link including the start and end points. A travel link is established for each traffic lane of the roadway, and at an intersection a travel link is established for each item of travel direction information, such as straight ahead, right turn or left turn. Therefore, even at a complex intersection such as the one shown in Figure 3, the traffic signal to be observed by the own vehicle can be specified. [061] In the present embodiment, the travel link is related to information about the connected link to be traveled next. Therefore, even when several travel links are present on the same roadway, the travel link to which the own vehicle belongs can be presumed properly. For example, when the vehicle turns left at the intersection at which the traffic signal SG2 is located, as shown in Figure 3, the left-turn travel link LKU2C and the straight-ahead travel link LKU2A are present on the same roadway. However, the left-turn travel link LKU2C is related to the connected link LKU3-TL as its onward route, while the straight-ahead travel link LKU2A is related to the connected link LKU3-ST as its onward route, and therefore the travel link LKU2C and the travel link LKU2A can be correctly distinguished. (2) According to the traffic signal recognition apparatus 100 of the present embodiment, the process of specifying the travel link is conducted at the time when the own vehicle has approached to within a predetermined distance of the end-point node, so the travel link for subsequent travel can be specified correctly. For example, even when the detection accuracy of the current position of the own vehicle becomes low, the selection of an incorrect connected link can be avoided in the present embodiment. (3) According to the traffic signal recognition apparatus 100 of the present embodiment, the attitude of the own vehicle is obtained, and the traffic signal image area is extracted taking that attitude into account. Therefore, the position of the image area in which the image of the traffic signal appears can be obtained precisely.
(4) The traffic signal recognition apparatus 100 of the present embodiment employs a form of information in which one traffic signal data item is related to one travel link data item. Therefore, the traffic signal to be observed can be specified using the travel link, which is presumed with high accuracy using the information about the traffic lane, the directed link, the connected link and the travel direction. (5) In the traffic signal recognition apparatus 100 of the present embodiment, the camera 20 can be driven to move. Therefore, the camera 20 can capture an image of the traffic signal irrespective of where the signal is located. For example, in a situation in the middle of an intersection as shown in Figure 3, in which the vehicle waits to turn right at the point TRP of the travel link LKU3-TR, the camera 20 can capture an image of the traffic signal SG3, which would be outside the field of view if an ordinary camera were used to capture only the view ahead. A drivable camera having a narrow viewing angle can preferably be used, because in that case the possibility of capturing a traffic signal that is not to be observed is reduced, eliminating misrecognition compared with using a camera having a wide viewing angle. (6) In the traffic signal recognition apparatus 100 of the present embodiment, the camera 20 has a zoom function and controls the image capture magnification according to the size of the traffic signal image area. Therefore, the size of the traffic signal image area in the captured image can be maintained properly regardless of the distance to the traffic signal. This further reduces the risk of capturing another traffic signal, and the accuracy of the traffic signal recognition process can be increased because the size of the image area of the traffic signal, as the recognition object, is approximately constant. [062] The embodiments explained so far are described to facilitate understanding of the present invention and are not described to limit it. Therefore, the elements described in the aforementioned embodiments are intended to include all design variations and equivalents that fall within the technical scope of the present invention. [063] That is, in the present description the drive support system 1, which comprises the traffic signal recognition apparatus 100 according to the present embodiment, the navigation device 50, the vehicle controller 60 and the drive system 70, is described as an example, but the present invention is not limited thereto. [064] In the present description, the traffic signal recognition apparatus 100 comprising the control device 10, the camera 20 as an image capture unit, the position detection device 30 and the database 40 is described as an example of an embodiment of the traffic signal recognition apparatus according to the present embodiment, but the present invention is not limited thereto.
[065] In the present description, the traffic signal recognition apparatus 100, having the control device 10 that performs the captured image acquisition function, the own-vehicle position acquisition function, the target traffic signal specification function, the traffic signal image area extraction function and the traffic signal content recognition function, is described as an example of the traffic signal recognition apparatus according to the present invention, which comprises a captured image acquisition unit, an own-vehicle position acquisition unit, a target traffic signal specification unit, a traffic signal image area extraction unit and a traffic signal content recognition unit, but the present invention is not limited thereto.
DESCRIPTION OF REFERENCE NUMBERS
1 drive support system
100 traffic signal recognition apparatus
10 control device
11 CPU
12 ROM
13 RAM
20 camera
30 position detection device
40 database
41 map information
411 travel link data
412 traffic signal data
50 navigation device
60 vehicle controller
70 drive system
CLAIMS:
Claims (5)
[0001] 1. Traffic signal recognition apparatus (100), comprising: a captured image acquisition unit (10) configured to obtain an image captured around a vehicle (V) by an image capture unit (20) on the vehicle; an own-vehicle position acquisition unit (30) configured to obtain a current position of the own vehicle (V); a target traffic signal specification unit (10) configured to: (i) refer to map information (41); and (ii) specify a traffic signal (SG) to be observed; a traffic signal image area extraction unit (10) configured to extract an image area of the traffic signal from the captured image, by use of position information about the specified traffic signal (SG), the position of the own vehicle (V) and image capture property information about the image capture unit (20); and a traffic signal content recognition unit (10) configured to recognize a content of the traffic signal (SG) by use of an image of the traffic signal included in the extracted image area, wherein: the map information (41) includes travel link data (411) and traffic signal data (412); the travel link data (411) include travel links (LKU, LKD), each travel link (LKU, LKD) including a start point (ND1) and an end point (ND2); and the traffic signal data (412) include the position information about the traffic signal to be observed; CHARACTERIZED by the fact that each travel link (LKU, LKD) of the travel link data (411) is a directed link defined by the start point (ND1), the end point (ND2) and, in addition, by an up direction or a down direction; each travel link (LKD, LKU) further includes: a connected link that is joined to the end point (ND2) of the travel link (LKU, LKD); travel direction information, which specifies a travel direction including straight ahead, right turn or left turn; and a unique reference to the traffic signal data (412) that specifies the traffic signal (SG) to be observed for that travel link (LKU, LKD); wherein an individual travel link (LKU, LKD, LKU2A-LKU2C) is defined for each traffic lane of a roadway comprising two or more lanes with the same direction of travel, and a different travel link is assigned to each of the left-turn (LKU2C), right-turn (LKU2B) and straight-ahead (LKU2A) lanes.
[0002] 2. Traffic signal recognition apparatus (100) according to claim 1, CHARACTERIZED by the fact that the target traffic signal specification unit (10) is configured to: refer to the travel link data (411) for the travel of the own vehicle; calculate the next travel link (LKU, LKD) when a distance between the vehicle and the end point (ND2) of the travel link (LKU, LKD) is less than a predetermined value; and specify the traffic signal (SG) to be observed when traveling on the next travel link (LKU, LKD).
[0003] 3. Traffic signal recognition apparatus (100) according to claim 1 or 2, CHARACTERIZED by the fact that: the own-vehicle position acquisition unit (30) is configured to obtain an attitude of the own vehicle at the current position; and the traffic signal image area extraction unit (10) is configured to extract the image area of the traffic signal (SG) from the captured image according to the obtained attitude of the own vehicle (V).
[0004] 4. Traffic signal recognition apparatus (100) according to any one of claims 1 to 3, CHARACTERIZED by the fact that: the image capture unit (20) comprises a movement mechanism (21) configured to vary an imaging direction of the image capture unit (20); and the movement mechanism (21) is configured to vary the imaging direction of the image capture unit (20) according to a position of the image area of the traffic signal (SG) in the captured image.
[0005] 5. Traffic signal recognition apparatus (100) according to any one of claims 1 to 4, CHARACTERIZED by the fact that the image capture unit (20) has a magnification function, and the magnification function of the image capture unit (20) includes a function of controlling an image capture magnification based on a ratio of the surface of the image area of the traffic signal (SG) to the image area of the captured image.
REFERENCES CITED:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题 KR100224326B1|1995-12-26|1999-10-15|모리 하루오|Car navigation system| JP3619628B2|1996-12-19|2005-02-09|株式会社日立製作所|Driving environment recognition device| JPH11306489A|1998-04-16|1999-11-05|Matsushita Electric Ind Co Ltd|Camera system| EP1540937A4|2002-09-20|2008-11-12|M7 Visual Intelligence Lp|Vehicule based data collection and porcessing system| JP3857698B2|2004-04-05|2006-12-13|株式会社日立製作所|Driving environment recognition device| JP2007071581A|2005-09-05|2007-03-22|Xanavi Informatics Corp|Navigation device| JP4577655B2|2005-12-27|2010-11-10|アイシン・エィ・ダブリュ株式会社|Feature recognition device| JP4631750B2|2006-03-06|2011-02-23|トヨタ自動車株式会社|Image processing system| JP4446201B2|2007-03-30|2010-04-07|アイシン・エィ・ダブリュ株式会社|Image recognition apparatus and image recognition method| JP4915281B2|2007-05-24|2012-04-11|アイシン・エィ・ダブリュ株式会社|Signal detection device, signal detection method and program| US8751154B2|2008-04-24|2014-06-10|GM Global Technology Operations LLC|Enhanced clear path detection in the presence of traffic infrastructure indicator| US8559673B2|2010-01-22|2013-10-15|Google Inc.|Traffic signal mapping and detection| JP5691915B2|2011-07-27|2015-04-01|アイシン・エィ・ダブリュ株式会社|Movement guidance system, movement guidance apparatus, movement guidance method, and computer program| CN104956418B|2013-01-25|2018-01-23|三菱电机株式会社|Auxiliary device for moving and mobile householder method|JPH0326826B2|1983-04-01|1991-04-12|Sumitomo Chemical Co| US9164511B1|2013-04-17|2015-10-20|Google Inc.|Use of detected objects for image processing| US9558408B2|2013-10-15|2017-01-31|Ford Global Technologies, Llc|Traffic signal prediction| JP6546450B2|2015-05-29|2019-07-17|株式会社Subaru|Road map information processing device| EP3309767B1|2015-06-09|2020-01-01|Nissan Motor Co., Ltd.|Traffic signal detection device and traffic signal detection method| WO2016203616A1|2015-06-18|2016-12-22|日産自動車株式会社|Traffic light detection device and traffic light detection method| JP6712447B2|2015-06-30|2020-06-24|株式会社ゼンリン|Driving support device, program| CA2992405A1|2015-07-13|2017-01-19|Nissan Motor Co., Ltd.|Traffic light recognition device and traffic light recognition method| EP3324383B1|2015-07-13|2021-05-19|Nissan Motor Co., Ltd.|Traffic light recognition device and traffic light recognition method| JP6742736B2|2016-01-22|2020-08-19|株式会社デンソーテン|Lighting color determination device for traffic light and lighting color determination method for traffic light| DE102016216070A1|2016-08-26|2018-03-01|Siemens Aktiengesellschaft|Control unit, system with such a control unit and method of operating such a system| WO2018145245A1|2017-02-07|2018-08-16|Bayerische Motoren Werke Aktiengesellschaft|Method, device and system for configuration of a sensor on a moving object| DE102017102593A1|2017-02-09|2018-08-09|SMR Patents S.à.r.l.|Method and device for detecting the signaling state of at least one signaling device| JP6552064B2|2017-03-31|2019-07-31|株式会社Subaru|Vehicle travel control system| JP6838522B2|2017-08-10|2021-03-03|トヨタ自動車株式会社|Image collection systems, image collection methods, image collection devices, and recording media| CN110617821B|2018-06-19|2021-11-02|北京嘀嘀无限科技发展有限公司|Positioning method, positioning device and storage medium| JP6843819B2|2018-11-27|2021-03-17|本田技研工業株式会社|Traffic guide recognition device, traffic guide recognition method, and program| JP2021018737A|2019-07-23|2021-02-15|トヨタ自動車株式会社|Signal indication estimation system| JP6965325B2|2019-11-05|2021-11-10|三菱スペース・ソフトウエア株式会社|Auto-discovery system and auto-detection program| 