CN111717201A - Vehicle system, control method of vehicle system, and storage medium - Google Patents

Vehicle system, control method of vehicle system, and storage medium

Info

Publication number
CN111717201A
Authority
CN (China)
Prior art keywords
vehicle, overhead view, view data, image, image processing
Legal status
Granted
Application number
CN202010193773.1A
Other languages
Chinese (zh)
Other versions
CN111717201B
Inventor
安井裕司
土屋成光
Current Assignee
Honda Motor Co Ltd
Original Assignee
Honda Motor Co Ltd
Application filed by Honda Motor Co Ltd filed Critical Honda Motor Co Ltd
Publication of CN111717201A
Application granted
Publication of CN111717201B
Current legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095: Predicting travel path or likelihood of collision
    • B60W30/0956: Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a vehicle system, a control method of the vehicle system, and a storage medium with which a travel path along which a vehicle travels can be generated with higher accuracy. The vehicle system includes: an image acquisition device that acquires an image of a traveling direction of a vehicle; a recognition unit that recognizes a surrounding environment of the vehicle; and a driving control unit that performs driving control based on speed control and steering control of the vehicle based on a recognition result of the recognition unit, wherein the recognition unit includes an image processing unit that generates overhead view data based on the image, and the driving control unit generates a track for causing the vehicle to travel in the future based on the overhead view data and causes the vehicle to travel on the generated track.

Description

Vehicle system, control method of vehicle system, and storage medium
Technical Field
The invention relates to a vehicle system, a control method of the vehicle system, and a storage medium.
Background
In recent years, research on techniques for automatically controlling a vehicle so that the vehicle travels has advanced, and some of these techniques have been put to practical use. Among the techniques developed for practical use is one that generates a travel route along which a vehicle travels to reach a destination. In this existing technique, the travel route is generated based on an image captured by a camera provided in the vehicle, so the vehicle travels in the depth direction of the image. However, in a two-dimensional image obtained by capturing a three-dimensional space with a camera, one pixel in the far (depth) direction represents a longer real-space distance than one pixel in the near direction, and therefore the distance resolution decreases toward the far side.
A technique is disclosed that relates to an image conversion method and a vehicle periphery monitoring device in which an overhead view is created based on an image captured by a monocular camera and used for monitoring the periphery of a vehicle (see, for example, Japanese Unexamined Patent Application Publication No. 2002-034035).
However, in the technique disclosed in Japanese Unexamined Patent Application Publication No. 2002-034035, the overhead view is created based on an image captured while the vehicle is stopped. Therefore, that technique has not been sufficiently studied with respect to using an overhead view to generate a travel path along which a traveling vehicle will continue to travel.
The present invention has been made in view of the above problem, and an object of the present invention is to provide a vehicle system, a control method of the vehicle system, and a storage medium that are capable of generating a travel path along which a vehicle travels with higher accuracy.
Disclosure of Invention
Problems to be solved by the invention
The vehicle system, the control method of the vehicle system, and the storage medium according to the present invention adopt the following configurations.
(1): a vehicle system according to an aspect of the present invention includes: an image acquisition device that acquires an image of a traveling direction of a vehicle; a recognition unit that recognizes a surrounding environment of the vehicle; and a driving control unit that performs driving control based on speed control and steering control of the vehicle based on a recognition result of the recognition unit, wherein the recognition unit includes an image processing unit that generates overhead view data based on the image, and the driving control unit generates a track for causing the vehicle to travel in the future based on the overhead view data and causes the vehicle to travel on the generated track.
(2): in the aspect of (1) above, the image processing unit extracts edge points based on differences between each data included in the overhead view data and the surrounding data, and the recognition unit recognizes the lane included in the overhead view data based on the edge points extracted by the image processing unit.
(3): in the aspect of (1) above, the image processing unit extracts an edge point based on a difference between data included in the image and peripheral data, and specifies a lane in the overhead view data using the extracted edge point.
(4): in any one of the above (1) to (3), the image acquisition device acquires a plurality of the images in time series, and the image processing unit generates one overhead view data using the plurality of the images acquired in time series by the image acquisition device.
(5): in the aspect (4) described above, the image processing unit generates the data of the remote area in the one overhead view data using the data of the remote area in the plurality of images acquired in time series by the image acquisition device.
(6): in any one of the above (1) to (5), the image processing unit specifies an area in which the road surface included in the overhead view data is covered by an object, supplements the specified area using pixel data around the covered area in the overhead view data, generates a trajectory for future travel of the vehicle using the supplemented overhead view, and causes the vehicle to travel on the generated trajectory.
(7): in the aspect of (6) above, when supplementing the covered area, the image processing unit supplements the data indicating the road surface area in the overhead view data using separately stored map information.
(8): a control method of a vehicle system according to an aspect of the present invention causes a computer of the vehicle system to perform: acquiring an image of a traveling direction of the vehicle from an image acquisition device; identifying a surrounding environment of the vehicle; performing driving control based on speed control and steering control of the vehicle based on the recognition result; generating overhead view data based on the image; and generating a track for the vehicle to travel in the future based on the overhead view data, and causing the vehicle to travel with the generated track.
(9): a storage medium according to an aspect of the present invention stores a program that causes a computer of a vehicle system to perform: acquiring an image of a traveling direction of the vehicle from an image acquisition device; identifying a surrounding environment of the vehicle; performing driving control based on speed control and steering control of the vehicle based on the recognition result; generating overhead view data based on the image; and generating a track for the vehicle to travel in the future based on the overhead view data, and causing the vehicle to travel with the generated track.
Effects of the invention
According to the aspects (1) to (8) described above, the travel path for the vehicle to travel can be generated with higher accuracy.
Drawings
Fig. 1 is a configuration diagram of a vehicle system of the first embodiment.
Fig. 2 is a functional configuration diagram of the first control unit and the second control unit.
Fig. 3 is a flowchart showing an example of a flow of a series of processes executed by the image processing unit and the action plan generating unit according to the first embodiment.
Fig. 4 is a diagram schematically showing an example of a series of processes performed by the image processing unit and the action plan generating unit according to the first embodiment.
Fig. 5 is a diagram schematically showing an example of processing for the image processing unit of the first embodiment to supplement the road surface area corresponding to the mask area included in the overhead view data.
Fig. 6 is a flowchart showing an example of a flow of a series of processes executed by the image processing unit and the action plan generating unit according to the second embodiment.
Fig. 7 is a diagram schematically showing an example of a series of processes performed by the image processing unit and the action plan generating unit according to the second embodiment.
Fig. 8 is a diagram showing an example of the hardware configuration of the automatic driving control device according to the embodiment.
Detailed Description
Embodiments of a vehicle system, a control method of the vehicle system, and a storage medium according to the present invention will be described below with reference to the drawings. In the following, a case in which left-hand traffic rules apply will be described; when right-hand traffic rules apply, the description may be read with left and right reversed.
< first embodiment >
[ integral Structure ]
Fig. 1 is a configuration diagram of a vehicle system 1 of the first embodiment. The vehicle on which the vehicle system 1 is mounted is, for example, a two-wheeled, three-wheeled, or four-wheeled vehicle, and its drive source is an internal combustion engine such as a diesel engine or a gasoline engine, an electric motor, or a combination thereof. The electric motor operates using electric power generated by a generator connected to the internal combustion engine, or using electric power discharged from a secondary battery or a fuel cell.
The vehicle system 1 includes, for example, a camera 10, a radar device 12, a probe 14, an object recognition device 16, a communication device 20, an HMI (Human Machine Interface) 30, a vehicle sensor 40, a navigation device 50, an MPU (Map Positioning Unit) 60, a driving operation element 80, an automatic driving control device 100, a running driving force output device 200, a brake device 210, and a steering device 220. These devices and apparatuses are connected to each other by a multiplex communication line such as a CAN (Controller Area Network) communication line, a serial communication line, a wireless communication network, or the like. The configuration shown in fig. 1 is merely an example; a part of the configuration may be omitted, and other components may be added.
The camera 10 is a digital camera using a solid-state imaging device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor). The camera 10 is mounted on an arbitrary portion of the vehicle (hereinafter referred to as the host vehicle M) on which the vehicle system 1 is mounted. When imaging the area ahead, the camera 10 is attached to the upper part of the front windshield, the rear surface of the vehicle interior mirror, or the like. The camera 10 repeatedly and periodically images the periphery of the host vehicle M, for example. The camera 10 may also be a stereo camera. The camera 10 is an example of the "image acquisition device".
The radar device 12 radiates radio waves such as millimeter waves to the periphery of the host vehicle M, and detects radio waves (reflected waves) reflected by an object to detect at least the position (distance and direction) of the object. The radar device 12 is mounted on an arbitrary portion of the host vehicle M. The radar device 12 may detect the position and velocity of the object by an FM-CW (Frequency Modulated Continuous Wave) method.
The detector 14 is a LIDAR (Light Detection and Ranging) sensor. The detector 14 irradiates light to the periphery of the host vehicle M and measures the scattered light. The detector 14 detects the distance to the object based on the time from light emission to light reception. The irradiated light is, for example, pulsed laser light. The detector 14 is attached to an arbitrary portion of the host vehicle M.
The object recognition device 16 performs a sensor fusion process on the detection results detected by some or all of the camera 10, the radar device 12, and the probe 14, and recognizes the position, the type, the speed, and the like of the object. The object recognition device 16 outputs the recognition result to the automatic driving control device 100. The object recognition device 16 may directly output the detection results of the camera 10, the radar device 12, and the detector 14 to the automatic driving control device 100. The object recognition device 16 may also be omitted from the vehicle system 1.
The communication device 20 communicates with another vehicle present in the vicinity of the host vehicle M or with various server devices via a wireless base station, for example, using a cellular network, a Wi-Fi network, Bluetooth (registered trademark), DSRC (Dedicated Short Range Communications), or the like.
The HMI30 presents various information to the occupant of the host vehicle M, and accepts input operations by the occupant. The HMI30 includes various display devices, speakers, buzzers, touch panels, switches, keys, and the like.
The vehicle sensors 40 include a vehicle speed sensor that detects the speed of the host vehicle M, an acceleration sensor that detects acceleration, a yaw rate sensor that detects the angular velocity about a vertical axis, an orientation sensor that detects the orientation of the host vehicle M, and the like.
The navigation device 50 includes, for example, a GNSS (Global Navigation Satellite System) receiver 51, a navigation HMI 52, and a route determination unit 53. The navigation device 50 holds the first map information 54 in a storage device such as an HDD (Hard Disk Drive) or a flash memory. The GNSS receiver 51 determines the position of the host vehicle M based on signals received from GNSS satellites. The position of the host vehicle M may also be determined or supplemented by an INS (Inertial Navigation System) that uses the output of the vehicle sensors 40. The navigation HMI 52 includes a display device, a speaker, a touch panel, keys, and the like. The navigation HMI 52 may be partially or wholly shared with the aforementioned HMI 30. The route determination unit 53 determines a route (hereinafter referred to as an on-map route) from the position of the host vehicle M specified by the GNSS receiver 51 (or an arbitrary input position) to the destination input by the occupant using the navigation HMI 52, for example, with reference to the first map information 54. The first map information 54 is, for example, information in which road shapes are expressed by links representing roads and nodes connected by the links. The first map information 54 may also include road curvature, POI (Point Of Interest) information, and the like. The on-map route is output to the MPU 60. The navigation device 50 may perform route guidance using the navigation HMI 52 based on the on-map route. The navigation device 50 may be realized by a function of a terminal device such as a smartphone or a tablet terminal carried by the occupant. The navigation device 50 may transmit the current position and the destination to a navigation server via the communication device 20 and acquire a route equivalent to the on-map route from the navigation server.
The MPU 60 includes, for example, a recommended lane determining unit 61, and holds the second map information 62 in a storage device such as an HDD or a flash memory. The recommended lane determining unit 61 divides the on-map route provided from the navigation device 50 into a plurality of sections (for example, every 100 [m] in the vehicle traveling direction), and determines a recommended lane for each section with reference to the second map information 62. The recommended lane determining unit 61 determines, for example, in which lane from the left to travel. When a branch point exists on the on-map route, the recommended lane determining unit 61 determines the recommended lane so that the host vehicle M can travel on a reasonable route to the branch destination.
The second map information 62 is map information with higher accuracy than the first map information 54. The second map information 62 includes, for example, information on the center of a lane, information on the boundary of a lane, and the like. The second map information 62 may include road information, traffic restriction information, residence information (residence, zip code), facility information, telephone number information, and the like. The second map information 62 can be updated at any time by the communication device 20 communicating with other devices.
The driving operation element 80 includes, for example, an accelerator pedal, a brake pedal, a shift lever, a steering wheel, a joystick, and other operation elements. A sensor that detects the amount of operation or the presence or absence of operation is attached to the driving operation element 80, and the detection result is output to some or all of the automatic driving control device 100, the running driving force output device 200, the brake device 210, and the steering device 220.
The automatic driving control device 100 includes, for example, a first control unit 120 and a second control unit 160. The first control unit 120 and the second control unit 160 are each realized by a hardware processor such as a CPU (Central Processing Unit) executing a program (software). Some or all of these components may be realized by hardware (including circuit units) such as an LSI (Large Scale Integration) circuit, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a GPU (Graphics Processing Unit), or may be realized by cooperation between software and hardware. The program may be stored in advance in a storage device (a storage device including a non-transitory storage medium) such as an HDD or a flash memory of the automatic driving control device 100, or may be stored in a removable storage medium such as a DVD or a CD-ROM and installed in the HDD or flash memory of the automatic driving control device 100 by mounting the storage medium (non-transitory storage medium) in a drive device.
Fig. 2 is a functional configuration diagram of the first control unit 120 and the second control unit 160. The first control unit 120 includes, for example, a recognition unit 130 and an action plan generating unit 140. The first control unit 120 implements, for example, an AI (Artificial Intelligence) based function and a model-based function in parallel. For example, the function of "recognizing an intersection" may be realized by executing, in parallel, recognition of the intersection by deep learning or the like and recognition based on predetermined conditions (the presence of a signal, a road sign, or the like that enables pattern matching), scoring both results, and evaluating them comprehensively. This ensures the reliability of automatic driving. The combination of the action plan generating unit 140 and the second control unit 160 is an example of the "driving control unit".
The recognition unit 130 recognizes the state of objects in the vicinity of the host vehicle M, such as their position, velocity, and acceleration, based on information input from the camera 10, the radar device 12, and the probe 14 via the object recognition device 16. The position of an object is recognized, for example, as a position in an absolute coordinate system whose origin is a representative point (the center of gravity, the center of the drive axle, or the like) of the host vehicle M, and is used for control. The position of an object may be represented by a representative point such as the center of gravity or a corner of the object, or may be represented by an expressed region. The "state" of an object may also include its acceleration, jerk, or "action state" (for example, whether it is changing lanes or is about to change lanes).
The recognition unit 130 recognizes, for example, the lane (travel lane) in which the host vehicle M is traveling. For example, the recognition unit 130 recognizes the travel lane by comparing the pattern of road division lines (for example, the arrangement of solid lines and broken lines) obtained from the second map information 62 with the pattern of road division lines around the host vehicle M recognized from the image captured by the camera 10. The recognition unit 130 may recognize the travel lane by recognizing not only road division lines but also travel lane boundaries (road boundaries) including road division lines, road shoulders, curbs, median strips, guardrails, and the like. In this recognition, the position of the host vehicle M acquired from the navigation device 50 and the processing result of the INS may be taken into account. The recognition unit 130 also recognizes temporary stop lines, obstacles, red traffic lights, toll booths, and other road phenomena.
The recognition unit 130 recognizes the position and posture of the host vehicle M with respect to the travel lane when recognizing the travel lane. The recognition unit 130 may recognize, for example, a deviation of the reference point of the host vehicle M from the center of the lane and an angle formed by the traveling direction of the host vehicle M with respect to a line connecting the centers of the lanes as the relative position and posture of the host vehicle M with respect to the traveling lane. Instead, the recognition unit 130 may recognize the position of the reference point of the host vehicle M with respect to an arbitrary side end portion (road partition line or road boundary) of the traveling lane, as the relative position of the host vehicle M with respect to the traveling lane.
The recognition unit 130 includes an image processing unit 132. The image processing unit 132 acquires an image captured by the camera 10 and, based on the acquired image, generates overhead view data that represents the road surface area captured in the image as viewed virtually from directly above. The overhead view data includes first edge points representing the boundaries of the road surface area and second edge points representing the positions of white lines drawn on the road surface, such as road division lines and temporary stop lines. The second edge points differ greatly in value (for example, in luminance) from the first edge points. Hereinafter, the first edge points and the second edge points are collectively referred to as edge points. The image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140.
The image processing unit 132 acquires a plurality of time-series images captured continuously by the camera 10 at predetermined intervals, and generates one set of overhead view data using the acquired images. The image processing unit 132 generates the data of the far area in the overhead view data using the data of the far area ahead of the host vehicle M captured in each time-series image. At this time, the image processing unit 132 uses the data of the near area ahead of the host vehicle M in each image for registration when generating the data of the far area. Because the far area captured in each registered image is slightly shifted, the image processing unit 132 generates one set of overhead view data by supplementing, from the plurality of images, data for positions in the far area that cannot be generated from a single image.
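As a non-authoritative illustration of this idea, the following minimal Python sketch merges several overhead-view grids that are assumed to have already been registered to a common ground frame (for example, using the near-area data as described above); blank (NULL) cells are marked with NaN, and the function name and data layout are assumptions made only for this example, not part of the disclosed system.

```python
import numpy as np

def accumulate_overhead_frames(aligned_frames):
    """Merge registered overhead-view grids so that far-area blanks in one
    frame are filled from other frames (hypothetical helper).

    Each element of `aligned_frames` is an HxW float array registered to a
    common ground coordinate frame, with NaN marking blank (NULL) cells;
    frames are ordered oldest to newest, so newer data overwrites older data.
    """
    merged = np.full_like(aligned_frames[0], np.nan)
    for frame in aligned_frames:
        valid = ~np.isnan(frame)
        merged[valid] = frame[valid]  # newer frames take precedence
    return merged
```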
The image processing unit 132 specifies, in the road surface area included in the generated overhead view data, an area covered by an object such as a building (a so-called occlusion area), and supplements the data of the road surface area hidden by the occlusion area in the overhead view data using map information that includes the occluded location. The image processing unit 132 uses the second map information 62 as this map information. An example of the process of generating the overhead view data in the image processing unit 132 will be described later.
The action plan generating unit 140 generates a target trajectory along which the host vehicle M will automatically travel in the future so as, in principle, to travel on the recommended lane determined by the recommended lane determining unit 61 and to be able to cope with the surrounding situation of the host vehicle M. The action plan generating unit 140 generates the target trajectory based on the overhead view data generated by the image processing unit 132. The target trajectory contains, for example, a speed element. For example, the target trajectory is represented as a sequence of points (track points) that the host vehicle M should reach. A track point is a point that the host vehicle M should reach at every predetermined travel distance (for example, every several meters) measured along the route; separately, a target speed and a target acceleration at every predetermined sampling interval (for example, every several tenths of a second) are generated as part of the target trajectory. A track point may instead be a position that the host vehicle M should reach at each predetermined sampling time; in this case, the information on the target speed and the target acceleration is expressed by the spacing between the track points.
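As a hedged illustration of such a representation (not the actual data structure used by the action plan generating unit 140), a target trajectory could be held as a list of track points that pair a position with a speed element; the class name, spacing, and speed values below are assumptions made only for this sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackPoint:
    x: float             # position on the ground plane [m], vehicle-centered frame
    y: float
    target_speed: float  # speed element [m/s]
    target_accel: float  # [m/s^2]

def sample_track_points(dense_path: List[Tuple[float, float]],
                        spacing_m: float = 2.0) -> List[TrackPoint]:
    """Pick track points roughly every `spacing_m` metres along a dense path
    and attach a constant placeholder speed profile."""
    points = [TrackPoint(*dense_path[0], target_speed=10.0, target_accel=0.0)]
    travelled, last = 0.0, dense_path[0]
    for x, y in dense_path[1:]:
        travelled += ((x - last[0]) ** 2 + (y - last[1]) ** 2) ** 0.5
        last = (x, y)
        if travelled >= spacing_m:
            points.append(TrackPoint(x, y, target_speed=10.0, target_accel=0.0))
            travelled = 0.0
    return points
```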
The action plan generating unit 140 may set an event of autonomous driving when generating the target trajectory. Examples of the event of the automatic driving include a constant speed driving event, a low speed follow-up driving event, a lane change event, a branch event, a merge event, and a take-over event. The action plan generating unit 140 generates a target trajectory corresponding to the started event.
The second control unit 160 controls the running driving force output device 200, the brake device 210, and the steering device 220 so that the host vehicle M passes through the target trajectory generated by the action plan generation unit 140 at a predetermined timing.
Returning to fig. 2, the second control unit 160 includes, for example, an acquisition unit 162, a speed control unit 164, and a steering control unit 166. The acquisition unit 162 acquires information of the target track (track point) generated by the action plan generation unit 140, and stores the information in a memory (not shown). The speed control unit 164 controls the running drive force output device 200 or the brake device 210 based on the speed element associated with the target track stored in the memory. The steering control unit 166 controls the steering device 220 according to the curve condition of the target track stored in the memory. The processing of the speed control unit 164 and the steering control unit 166 is realized by, for example, a combination of feedforward control and feedback control. For example, the steering control unit 166 performs a combination of feedforward control according to the curvature of the road ahead of the host vehicle M and feedback control based on deviation from the target trajectory.
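The following sketch illustrates one common way of combining such feedforward and feedback terms for steering; the kinematic bicycle-model feedforward, the gains, and the wheelbase value are assumptions for the example and are not taken from the disclosed controller.

```python
import math

def steering_command(road_curvature, lateral_error, heading_error,
                     wheelbase=2.7, k_lat=0.4, k_head=1.2):
    """Return a steering angle [rad] from a feedforward term based on the
    curvature of the road ahead plus a feedback correction of the lateral
    and heading deviation from the target track (illustrative gains)."""
    feedforward = math.atan(wheelbase * road_curvature)
    feedback = -(k_lat * lateral_error + k_head * heading_error)
    return feedforward + feedback
```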
The running driving force output device 200 outputs a running driving force (torque) for running the vehicle to the drive wheels. The running driving force output device 200 includes, for example, a combination of an internal combustion engine, an electric motor, a transmission, and the like, and an ECU (Electronic Control Unit) that controls them. The ECU controls the above components in accordance with information input from the second control unit 160 or information input from the driving operation element 80.
The brake device 210 includes, for example, a brake caliper, a hydraulic cylinder that transmits hydraulic pressure to the brake caliper, an electric motor that generates hydraulic pressure in the hydraulic cylinder, and a brake ECU. The brake ECU controls the electric motor in accordance with information input from the second control unit 160 or information input from the driving operation element 80 so that a braking torque corresponding to the braking operation is output to each wheel. The brake device 210 may include, as a backup, a mechanism that transmits the hydraulic pressure generated by operation of the brake pedal included in the driving operation element 80 to the hydraulic cylinder via a master cylinder. The brake device 210 is not limited to the above-described configuration, and may be an electronically controlled hydraulic brake device that controls an actuator in accordance with information input from the second control unit 160 and transmits the hydraulic pressure of the master cylinder to the hydraulic cylinder.
The steering device 220 includes, for example, a steering ECU and an electric motor. The electric motor changes the direction of the steered wheels by, for example, applying a force to a rack-and-pinion mechanism. The steering ECU drives the electric motor in accordance with information input from the second control unit 160 or information input from the driving operation element 80 to change the direction of the steered wheels.
[ creation processing of bird's-eye view data and target orbit ]
The following describes processing for generating a target trajectory on which the host vehicle M will travel in the future, which is realized by the image processing unit 132 and the action plan generating unit 140. In the following description, the image processing unit 132 generates overhead view data based on 1 image captured by the camera 10, and the action plan generating unit 140 generates a target trajectory based on the overhead view data. Fig. 3 is a flowchart showing an example of a flow of a series of processes executed by the image processing unit 132 and the action plan generating unit 140 according to the first embodiment.
The image processing unit 132 acquires an image captured by the camera 10 (step S100). Next, the image processing unit 132 converts the acquired image into a virtual overhead view (step S110).
Then, the image processing unit 132 determines whether or not a blocked area exists in the converted overhead view (step S120). If it is determined in step S120 that there is no hidden area in the overhead view, the image processing unit 132 advances the process to step S150.
On the other hand, if it is determined in step S120 that there is a hidden area in the overhead view, the image processing unit 132 specifies a hidden area that is included in the overhead view and that covers the road surface area (step S130). Then, the image processing unit 132 supplements the data of the road surface area corresponding to the occlusion area, using the second map information 62 including the identified occlusion area (step S140).
Next, the image processing unit 132 extracts edge points included in the overhead view (step S150). Then, the image processing unit 132 identifies the road surface area and the lane in the road surface area included in the overhead view data based on the extracted edge points (step S160).
Next, the image processing unit 132 generates overhead view data including the recognized road surface area and the lane in the road surface area (step S170). Then, the image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140.
Thereafter, the action plan generating unit 140 generates a target trajectory based on the overhead view data output from the image processing unit 132 (step S180). Thus, the second control unit 160 controls the travel of the host vehicle M so as to sequentially pass through the target trajectory generated by the action plan generating unit 140.
[ example of generating overhead view data and target track ]
Fig. 4 is a diagram schematically showing an example of a series of processes performed by the image processing unit 132 and the action plan generating unit 140 according to the first embodiment. Fig. 4 shows an example of each process when the image processing unit 132 generates the overhead view data and an example of the target trajectory generated by the action plan generating unit 140. In the following description, the image processing unit 132 does not complement the occlusion region.
When the image shown in fig. 4 (a) is acquired from the camera 10, the image processing unit 132 converts the acquired image into a virtual overhead view as shown in fig. 4 (b). At this time, the image processing unit 132 converts the image acquired from the camera 10 into one overhead view by arranging the data of each pixel constituting the acquired image at the corresponding position in the virtual overhead view.
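One conventional way to realize this kind of per-pixel rearrangement is an inverse perspective (projective) warp; the sketch below uses OpenCV's warpPerspective for that purpose, with a placeholder homography standing in for the matrix that would come from camera calibration, so it is an assumption-laden illustration rather than the disclosed implementation.

```python
import cv2
import numpy as np

# The homography would come from camera calibration (intrinsics and mounting
# pose); the identity matrix below is only a placeholder, not a real value.
H_IMAGE_TO_GROUND = np.eye(3, dtype=np.float64)

def image_to_overhead(frame_bgr, grid_size=(400, 400)):
    """Warp a forward-camera frame onto a virtual top-down grid.

    Grid cells whose source falls outside the camera image stay at the fill
    value (0); they correspond to the data blank positions N discussed below.
    """
    return cv2.warpPerspective(frame_bgr, H_IMAGE_TO_GROUND, grid_size,
                               flags=cv2.INTER_NEAREST,
                               borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```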
In the far area of the converted overhead view (for example, the upper area in fig. 4 (b)), there are data blank positions N at which no pixel is arranged, that is, at which the data is blank (NULL). The image processing unit 132 therefore arranges data at these positions using the data of pixels in the far area (for example, the upper area in fig. 4 (a)) of the plurality of images acquired in time series from the camera 10. In this way, the image processing unit 132 converts the plurality of time-series images acquired from the camera 10 into a single virtual overhead view as shown in fig. 4 (c).
The following case is also conceivable: even when a plurality of images acquired in time series from the camera 10 are used, data cannot be arranged at all of the pixel positions constituting the overhead view, and data blank positions N remain. In this case, as shown in fig. 4 (b), the image processing unit 132 may supplement the data at a data blank position N based on the surrounding data, that is, the data of the other pixels located around the data blank position N. For example, the image processing unit 132 may take the arithmetic mean of the values of the four pixels located around the data blank position N (for example, above, below, to the left, and to the right) and use it as the value of the pixel at that position. When a data blank position N is considered to lie on a line extending orthogonally to the traveling direction of the host vehicle M, such as a temporary stop line, the image processing unit 132 may compute a weighted average in which the values of the pixels to the left and right are given a larger weight. When a data blank position N is considered to lie on a line extending parallel to the traveling direction of the host vehicle M, such as a boundary of the road surface area or a road division line, the image processing unit 132 may compute a weighted average in which the values of the pixels above and below are given a larger weight. In this way, the image processing unit 132 can supplement the value of the pixel at the data blank position N more appropriately.
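A minimal sketch of such a weighted four-neighbour fill is given below, assuming the overhead view is held as a float array with NaN at blank positions; the weights and the function name are illustrative assumptions.

```python
import numpy as np

def fill_blank_cell(grid, r, c, lateral_weight=1.0, longitudinal_weight=1.0):
    """Fill one blank (NaN) cell from its four neighbours by weighted average.

    Increase `lateral_weight` for cells believed to lie on a line running
    across the travel direction (e.g. a stop line), or `longitudinal_weight`
    for lines running along it (e.g. a road division line); equal weights give
    the plain arithmetic mean of the four neighbours.
    """
    neighbours = [(r - 1, c, longitudinal_weight), (r + 1, c, longitudinal_weight),
                  (r, c - 1, lateral_weight), (r, c + 1, lateral_weight)]
    values, weights = [], []
    for nr, nc, w in neighbours:
        if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and not np.isnan(grid[nr, nc]):
            values.append(grid[nr, nc])
            weights.append(w)
    if values:
        grid[r, c] = np.average(values, weights=weights)
    return grid
```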
In this way, the image processing unit 132 converts the image acquired from the camera 10, shown in fig. 4 (a), into an overhead view in which the data at the data blank positions N has been supplemented, as shown in fig. 4 (c). In fig. 4 (b), the range of a data blank position N is not drawn as the range of a single pixel, but because the image processing unit 132 performs the conversion to the overhead view pixel by pixel, the range of one data blank position N is in fact the range of one pixel.
Then, the image processing unit 132 extracts the edge points included in the overhead view shown in fig. 4 (c) and generates overhead view data as shown in fig. 4 (d). At this time, the image processing unit 132 extracts edge-point pixels based on the difference between the data of each pixel included in the overhead view and the data of the other pixels located around it. More specifically, for a given pixel, the image processing unit 132 compares its luminance value with the luminance values of the surrounding pixels; when the difference in luminance is greater than a preset threshold value, it determines that the pixel is an edge-point pixel, and when the difference is equal to or less than the threshold value, it determines that the pixel is not an edge-point pixel. That is, the image processing unit 132 extracts high-frequency components in the pixel data of the overhead view as edge points. The image processing unit 132 then generates overhead view data including the edge points E as shown in fig. 4 (d).
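The luminance-difference test described above could be sketched as follows; the threshold value is an assumption, and the four-neighbour comparison is a simplified stand-in for whatever edge operator an actual implementation uses.

```python
import numpy as np

def extract_edge_points(gray, threshold=30):
    """Mark a pixel as an edge point when its luminance differs from any of
    its four neighbours by more than `threshold` (an assumed value)."""
    g = gray.astype(np.int16)          # avoid uint8 overflow in subtraction
    edges = np.zeros(g.shape, dtype=bool)
    # compare each pixel with its up/down/left/right neighbours
    edges[1:, :]  |= np.abs(g[1:, :]  - g[:-1, :]) > threshold
    edges[:-1, :] |= np.abs(g[:-1, :] - g[1:, :])  > threshold
    edges[:, 1:]  |= np.abs(g[:, 1:]  - g[:, :-1]) > threshold
    edges[:, :-1] |= np.abs(g[:, :-1] - g[:, 1:])  > threshold
    return edges
```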
The image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140. Thus, the action plan generating unit 140 generates the target trajectory as shown in fig. 4 (e) based on the overhead view data output from the image processing unit 132. Fig. 4 (e) shows an example of a case where a target trajectory in which the trajectory points K to which the host vehicle M should sequentially arrive are sequentially aligned is generated in order to make the host vehicle M turn right at the next intersection.
[ supplementary example of occlusion region ]
Fig. 5 is a diagram schematically showing an example of processing for the image processing unit 132 of the first embodiment to supplement the road surface area corresponding to the mask area (blocked area) included in the overhead view data. Fig. 5 shows an example of a case where a blocking area exists in each corner of an intersection where the host vehicle M enters next. In the following description, it is assumed that the values of the pixels at the data blank position N in the overhead view converted by the image processing unit 132 have already been supplemented.
When the image acquired from the camera 10 as shown in fig. 5 (a) is converted into an overhead view having the occlusion areas O1 to O4 as shown in fig. 5 (b), the image processing unit 132 specifies the occlusion areas that hide the road surface area in the overhead view. The image processing unit 132 supplements the data of each road surface region hidden by the occlusion areas O1 to O4 using the second map information 62, which includes information corresponding to each of the identified areas. More specifically, the image processing unit 132 fits the information on lane centers, lane boundaries, temporary stop lines, and the like included in the second map information 62 to the corresponding road surface region outside the occlusion areas, thereby determining the positions of the lanes, lane boundaries, and temporary stop lines inside the occlusion areas, and adds the information on the determined positions to the data of the road surface region. Fig. 5 (C) shows an example in which the data of the supplementary areas C1 to C4 has been supplemented using the second map information 62.
The image processing unit 132 may select which occlusion areas to supplement. For example, the image processing unit 132 need not supplement an occlusion area that hides a road surface area that is not on the travel path along which the host vehicle M will travel in the future.
In this way, the image processing unit 132 converts the image acquired from the camera 10 as shown in fig. 5 (a) into a single overhead view without occlusion areas as shown in fig. 5 (d). In fig. 5 (b) and fig. 5 (C), the ranges of the occlusion areas O1 to O4 and the supplementary areas C1 to C4 are not drawn as ranges corresponding to single pixels, but since the image processing unit 132 processes the occlusion areas pixel by pixel, the ranges of the occlusion areas and the supplementary areas are also determined pixel by pixel.
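As a rough sketch of this map-based supplementation (with the projection of map geometry into overhead-grid pixel coordinates assumed to be done elsewhere), lane boundaries or stop lines from the map could be rasterized only into the occluded cells of a grayscale overhead grid; the function and argument names are assumptions for the example.

```python
import cv2
import numpy as np

def supplement_occlusion(overhead, occlusion_mask, map_polylines_px):
    """Fill occluded road-surface cells using lane geometry taken from a map.

    `overhead` is a grayscale (uint8) overhead grid, `occlusion_mask` a boolean
    array marking occluded cells, and `map_polylines_px` holds lane boundaries
    or stop lines from high-precision map data already projected into
    overhead-grid pixel coordinates.
    """
    painted = np.zeros_like(overhead)
    for polyline in map_polylines_px:
        cv2.polylines(painted, [np.asarray(polyline, dtype=np.int32)],
                      isClosed=False, color=255, thickness=1)
    out = overhead.copy()
    out[occlusion_mask] = painted[occlusion_mask]  # only touch occluded cells
    return out
```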
As described above, according to the vehicle system 1 of the first embodiment, the image processing unit 132 uses the plurality of time-series images acquired from the camera 10 and the second map information 62 to generate overhead view data that more clearly shows, in the overhead view obtained by converting the images, the far side in the depth direction as well as the boundaries of the road surface area, road division lines, temporary stop lines, and the like inside occluded areas, and outputs the overhead view data to the action plan generating unit 140. The action plan generating unit 140 can therefore generate a target trajectory whose accuracy is improved, particularly in regions far away in the depth direction, compared with a target trajectory generated directly from the image captured by the camera 10. That is, according to the vehicle system 1 of the first embodiment, a travel path along which the host vehicle M travels can be generated with higher accuracy.
< second embodiment >
The second embodiment is explained below. The second embodiment differs from the first embodiment in the order of the processing for generating the overhead view data. More specifically, in the second embodiment, the road surface area and the lanes within it, which in the first embodiment were recognized from the converted overhead view, are recognized directly from the image acquired from the camera 10.
Fig. 6 is a flowchart showing an example of a flow of a series of processes executed by the image processing unit 132 and the action plan generating unit 140 according to the second embodiment. In the following description, the image processing unit 132 generates overhead view data based on 1 image captured by the camera 10, and the action plan generating unit 140 generates a target trajectory based on the overhead view data.
The image processing unit 132 acquires an image captured by the camera 10 (step S200). Next, the image processing unit 132 extracts edge points included in the acquired image (step S210). Then, the image processing unit 132 identifies the road surface area and the lane in the road surface area included in the overhead view data based on the extracted edge points (step S220).
Next, the image processing unit 132 generates overhead view data in which the recognized road surface area and the lane in the road surface area are shown as an overhead view (step S230). Then, the image processing unit 132 determines whether or not a blocked area exists in the road surface area indicated by the generated overhead view data (step S240). If it is determined in step S240 that there is no occlusion area in the road surface area indicated by the overhead view data, the image processing unit 132 advances the process to step S270. In other words, the image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140.
On the other hand, if it is determined in step S240 that there is a blocked area in the road surface area indicated by the overhead view data, the image processing unit 132 identifies a blocked area that blocks the road surface area in the overhead view data (step S250). Then, the image processing unit 132 supplements the data of the shielded road surface area with the second map information 62 including the identified shielded area (step S260). Then, the image processing unit 132 outputs the overhead view data supplemented with the data of the road surface area to the action plan generating unit 140.
Thereafter, the action plan generating unit 140 generates a target trajectory based on the overhead view data output from the image processing unit 132 (step S270). Thus, the second control unit 160 controls the travel of the host vehicle M so as to sequentially pass through the target trajectory generated by the action plan generating unit 140.
[ example of generating overhead view data and target track ]
Fig. 7 is a diagram schematically showing an example of a series of processes performed by the image processing unit 132 and the action plan generating unit 140 according to the second embodiment. Fig. 7 shows an example of each process performed when the image processing unit 132 generates overhead view data, and an example of the target trajectory generated by the action plan generating unit 140. The process by which the image processing unit 132 of the second embodiment supplements occlusion areas can be understood in the same way as the corresponding process of the image processing unit 132 of the first embodiment. Therefore, the description of occlusion-area supplementation by the image processing unit 132 of the second embodiment is omitted below.
When the image as shown in fig. 7 (a) is acquired from the camera 10, the image processing unit 132 extracts a pixel of the edge point EP as shown in fig. 7 (b) based on a difference between data of each pixel included in the acquired image and data of the periphery of the pixel, that is, data of other pixels located in the periphery of the pixel.
In the second embodiment, the image processing unit 132 acquires a plurality of images in time series from the camera 10. Therefore, the image processing unit 132 extracts the pixels of the edge point EP from each of the acquired images. Here, the processing of extracting the pixels of the edge point EP may be performed by replacing the overhead view in the processing of extracting the pixels of the edge point E performed by the image processing unit 132 of the first embodiment with the image acquired from the camera 10.
Then, the image processing unit 132 generates overhead view data in which the positions of the extracted edge points EP are shown in a virtual single overhead view as shown in fig. 7 (c). At this time, the image processing unit 132 generates the overhead view data shown in the overhead view by arranging the positions of the edge points EP extracted from the respective images at corresponding positions of the virtual overhead view. Thus, in the overhead view data, the edge point EP is also arranged in a distant area (for example, an upper area in fig. 7 (c)). In this way, the image processing unit 132 generates overhead view data as shown in fig. 7 (d) in which the edge points EP extracted from the time-series plurality of images acquired from the camera 10 are arranged.
The following case is also conceivable: even when edge points EP are extracted from the plurality of images acquired in time series from the camera 10 and overhead view data is generated, edge points EP representing the entire road surface area and all lanes within it are not necessarily arranged in the overhead view data, and positions where no edge point EP is arranged, that is, positions in the same state as the data blank positions N in the first embodiment, remain. In this case, as shown in fig. 7 (c), the image processing unit 132 may supplement the edge point EP at such a data blank position N based on the surrounding data, that is, the other edge points EP located around the data blank position N. In the second embodiment, however, an edge point EP merely indicates a position representing, for example, a boundary of the road surface area, a road division line, or a temporary stop line. Therefore, in the second embodiment, instead of taking the arithmetic mean of the values of the four surrounding pixels as in the first embodiment, the data blank position N may simply be regarded as an edge point EP, and the edge point EP at that position supplemented, when any one of the positions above, below, to the left, or to the right of the data blank position N is an edge point EP.
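A minimal sketch of this any-neighbour rule, assuming the edge points and blank positions are held as boolean arrays (both array names and the function name are assumptions for the example):

```python
import numpy as np

def fill_blank_edge_points(edge_map, blank_mask):
    """Second-embodiment style fill: a blank cell is treated as an edge point
    when at least one of its four neighbours already is one."""
    e = edge_map.astype(bool)
    neighbour_edge = np.zeros_like(e)
    neighbour_edge[1:, :]  |= e[:-1, :]
    neighbour_edge[:-1, :] |= e[1:, :]
    neighbour_edge[:, 1:]  |= e[:, :-1]
    neighbour_edge[:, :-1] |= e[:, 1:]
    return e | (blank_mask & neighbour_edge)
```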
In this way, the image processing unit 132 extracts the edge points EP from the image acquired from the camera 10 as shown in fig. 7 (a), and generates overhead view data represented as a single overhead view as shown in fig. 7 (d) by supplementing the edge points EP at the data blank positions N. In fig. 7 (c), the range of a data blank position N is not drawn as the range of one edge point EP, that is, one pixel, but because the image processing unit 132 generates the overhead view data edge point by edge point, the range of one data blank position N is in fact the range of one edge point EP, that is, one pixel.
The image processing unit 132 outputs the generated overhead view data to the action plan generating unit 140. Thus, the action plan generating unit 140 generates the target trajectory as shown in fig. 7 (e) based on the overhead view data output from the image processing unit 132. Fig. 7 (e) also shows an example of a case where a target trajectory in which the trajectory points K to which the host vehicle M should sequentially arrive are sequentially arranged is generated in order to make the host vehicle M turn right at the next intersection.
As described above, according to the vehicle system 1 of the second embodiment, the image processing unit 132 uses the plurality of time-series images acquired from the camera 10 and the second map information 62 to generate overhead view data that more clearly shows the far side in the depth direction as well as the boundaries of the road surface area, road division lines, temporary stop lines, and the like inside occluded areas, and outputs the overhead view data to the action plan generating unit 140. The action plan generating unit 140 can therefore generate a target trajectory whose accuracy is improved, particularly in regions far away in the depth direction, compared with a target trajectory generated directly from the image captured by the camera 10. That is, according to the vehicle system 1 of the second embodiment, a travel path along which the host vehicle M travels can be generated with higher accuracy.
[ hardware configuration ]
Fig. 8 is a diagram showing an example of the hardware configuration of the automatic driving control device 100 according to the embodiment. As shown in the figure, the automatic driving control device 100 is configured such that a communication controller 100-1, a CPU 100-2, a RAM (Random Access Memory) 100-3 used as a working memory, a ROM (Read Only Memory) 100-4 storing a boot program and the like, a storage device 100-5 such as a flash memory or an HDD (Hard Disk Drive), a drive device 100-6, and the like are connected to one another by an internal bus or a dedicated communication line. The communication controller 100-1 communicates with components other than the automatic driving control device 100. The storage device 100-5 stores a program 100-5a executed by the CPU 100-2. The program is loaded into the RAM 100-3 by a DMA (Direct Memory Access) controller (not shown) or the like and executed by the CPU 100-2. In this way, some or all of the first control unit 120 and the second control unit 160, more specifically the image processing unit 132 and the action plan generating unit 140, are realized.
The above-described embodiments can be expressed as follows.
A vehicle system configured to include:
a storage device in which a program is stored; and
a hardware processor for executing a program of a program,
the hardware processor performs the following processing by executing a program stored in the storage device:
acquiring an image of a traveling direction of the vehicle from an image acquisition device;
identifying a surrounding environment of the vehicle;
performing driving control based on speed control and steering control of the vehicle based on the recognition result;
generating overhead view data based on the image; and
generating a trajectory for the vehicle to travel in the future based on the overhead view data, and causing the vehicle to travel with the generated trajectory.
While the present invention has been described with reference to the embodiments, the present invention is not limited to the embodiments, and various modifications and substitutions can be made without departing from the scope of the present invention.

Claims (9)

1. A vehicle system, wherein,
the vehicle system includes:
an image acquisition device that acquires an image of a traveling direction of a vehicle;
a recognition unit that recognizes a surrounding environment of the vehicle; and
a driving control unit that performs driving control based on speed control and steering control of the vehicle based on a recognition result of the recognition unit,
the recognition unit includes an image processing unit that generates overhead view data based on the image,
the driving control unit generates a track for the vehicle to travel in the future based on the overhead view data, and causes the vehicle to travel on the generated track.
2. The vehicle system according to claim 1,
the image processing section extracts edge points based on differences between the peripheral data and each data included in the overhead view data,
the recognition unit recognizes a lane included in the overhead view data based on the edge point extracted by the image processing unit.
3. The vehicle system according to claim 1,
the image processing unit extracts edge points based on a difference between data included in the image and peripheral data, and specifies a lane in the overhead view data using the extracted edge points.
4. The vehicle system according to any one of claims 1 to 3,
the image acquisition means acquires a plurality of the images in time series,
the image processing unit generates one overhead view data by using a plurality of images acquired in time series by the image acquisition device.
5. The vehicle system according to claim 4,
the image processing unit generates data of the remote area in one of the overhead view data using data of the remote area in a plurality of images acquired in time series by the image acquisition device.
6. The vehicle system according to claim 1,
the image processing unit specifies a mask area in which the road surface included in the overhead view data is masked by an object,
supplementing the determined masked area with pixel data of a periphery of the masked area in the overhead view data,
generating a trajectory for the vehicle to travel in the future using the supplemented overhead view, and causing the vehicle to travel with the generated trajectory.
7. The vehicle system according to claim 6,
when supplementing the masked area, the image processing unit supplements the data representing the road surface area in the overhead view data by further using separately stored map information.
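Claims 6 and 7 supplement road-surface areas of the overhead view that are hidden by objects, using surrounding pixel data and, in claim 7, separately stored map information. One conventional way to realize the pixel-based supplementation is image inpainting; the sketch below uses OpenCV's cv2.inpaint purely as an illustrative stand-in, which is an assumption rather than the patented method.

```python
# Illustrative sketch: fill occluded (masked) road-surface areas of an
# overhead view image from surrounding pixels via inpainting. The use of
# cv2.inpaint is an assumption; the claims do not prescribe an algorithm.
import cv2
import numpy as np


def supplement_masked_area(overhead_view: np.ndarray,
                           mask: np.ndarray,
                           radius: int = 5) -> np.ndarray:
    """overhead_view: H x W x 3 uint8 image; mask: H x W uint8 array that is
    non-zero where the road surface is hidden by an object."""
    return cv2.inpaint(overhead_view, mask, radius, cv2.INPAINT_TELEA)
```

Separately stored map information could further constrain the filled region, for example by drawing known lane markings from a map into the supplemented cells, as claim 7 suggests.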
8. A control method of a vehicle system, wherein,
the control method of the vehicle system causes a computer of the vehicle system to perform:
acquiring an image of a traveling direction of the vehicle from an image acquisition device;
identifying a surrounding environment of the vehicle;
performing driving control based on speed control and steering control of the vehicle based on the recognition result;
generating overhead view data based on the image; and
generating a trajectory along which the vehicle will travel in the future based on the overhead view data, and causing the vehicle to travel along the generated trajectory.
9. A storage medium storing a program, wherein,
the program causes a computer of a vehicle system to perform the following processing:
acquiring an image of a traveling direction of the vehicle from an image acquisition device;
identifying a surrounding environment of the vehicle;
performing driving control based on speed control and steering control of the vehicle based on the recognition result;
generating overhead view data based on the image; and
generating a trajectory along which the vehicle will travel in the future based on the overhead view data, and causing the vehicle to travel along the generated trajectory.
CN202010193773.1A 2019-03-20 2020-03-18 Vehicle system, control method for vehicle system, and storage medium Active CN111717201B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019052471A JP7160730B2 (en) 2019-03-20 2019-03-20 VEHICLE SYSTEM, VEHICLE SYSTEM CONTROL METHOD, AND PROGRAM
JP2019-052471 2019-03-20

Publications (2)

Publication Number Publication Date
CN111717201A true CN111717201A (en) 2020-09-29
CN111717201B CN111717201B (en) 2024-04-02

Family

ID=72559202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010193773.1A Active CN111717201B (en) 2019-03-20 2020-03-18 Vehicle system, control method for vehicle system, and storage medium

Country Status (2)

Country Link
JP (1) JP7160730B2 (en)
CN (1) CN111717201B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011030140A (en) * 2009-07-29 2011-02-10 Hitachi Automotive Systems Ltd External world recognition device
CN102470832A (en) * 2009-07-15 2012-05-23 日产自动车株式会社 Vehicle-driving support system and vehicle-driving support method
CN103299617A (en) * 2011-01-11 2013-09-11 爱信精机株式会社 Image generating device
CN104660977A (en) * 2013-11-15 2015-05-27 铃木株式会社 Bird's eye view image generating device
CN104859542A (en) * 2015-05-26 2015-08-26 寅家电子科技(上海)有限公司 Vehicle monitoring system and vehicle monitoring processing method
CN105416394A (en) * 2014-09-12 2016-03-23 爱信精机株式会社 Control device and control method for vehicle
CN107274719A (en) * 2016-03-31 2017-10-20 株式会社斯巴鲁 Display device
WO2018179275A1 (en) * 2017-03-30 2018-10-04 本田技研工業株式会社 Vehicle control system, vehicle control method, and vehicle control program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5634046B2 (en) * 2009-09-25 2014-12-03 クラリオン株式会社 Sensor controller, navigation device, and sensor control method
JP2012195793A (en) * 2011-03-17 2012-10-11 Clarion Co Ltd Vehicle periphery monitoring device
WO2018230530A1 (en) * 2017-06-16 2018-12-20 本田技研工業株式会社 Vehicle control system, vehicle control method, and program
JP6990248B2 (en) * 2017-08-25 2022-01-12 本田技研工業株式会社 Display control device, display control method and program

Also Published As

Publication number Publication date
CN111717201B (en) 2024-04-02
JP2020154708A (en) 2020-09-24
JP7160730B2 (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN111201170B (en) Vehicle control device and vehicle control method
CN110281941B (en) Vehicle control device, vehicle control method, and storage medium
CN109693667B (en) Vehicle control device, vehicle control method, and storage medium
CN110341704B (en) Vehicle control device, vehicle control method, and storage medium
CN110126822B (en) Vehicle control system, vehicle control method, and storage medium
CN109624973B (en) Vehicle control device, vehicle control method, and storage medium
CN110194166B (en) Vehicle control system, vehicle control method, and storage medium
CN111824141B (en) Display control device, display control method, and storage medium
CN110217231B (en) Vehicle control device, vehicle control method, and storage medium
CN111688692A (en) Vehicle control device, vehicle control method, and storage medium
CN112677966A (en) Vehicle control device, vehicle control method, and storage medium
CN112124311A (en) Vehicle control device, vehicle control method, and storage medium
CN111273651A (en) Vehicle control device, vehicle control method, and storage medium
CN111731304B (en) Vehicle control device, vehicle control method, and storage medium
CN114537386A (en) Vehicle control device, vehicle control method, and computer-readable storage medium
CN109559540B (en) Periphery monitoring device, periphery monitoring method, and storage medium
CN110816524B (en) Vehicle control device, vehicle control method, and storage medium
CN110194153B (en) Vehicle control device, vehicle control method, and storage medium
CN110341703B (en) Vehicle control device, vehicle control method, and storage medium
CN113525378B (en) Vehicle control device, vehicle control method, and storage medium
CN114954511A (en) Vehicle control device, vehicle control method, and storage medium
CN115158347A (en) Mobile object control device, mobile object control method, and storage medium
CN115158348A (en) Mobile object control device, mobile object control method, and storage medium
CN115123206A (en) Control device for moving body, control method for moving body, and storage medium
JP7028838B2 (en) Peripheral recognition device, peripheral recognition method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant