WO2020121810A1 - Display control device, display control program, and tangible, non-transitory computer-readable recording medium - Google Patents


Info

Publication number
WO2020121810A1
WO2020121810A1 (application PCT/JP2019/046318; JP2019046318W)
Authority
WO
WIPO (PCT)
Prior art keywords
map information
display
information
precision map
display mode
Prior art date
Application number
PCT/JP2019/046318
Other languages
French (fr)
Japanese (ja)
Inventor
智 堀畑
祐介 近藤
猛 羽藤
一輝 小島
Original Assignee
DENSO CORPORATION (株式会社デンソー)
Priority date
Filing date
Publication date
Priority claimed from JP2019196468A external-priority patent/JP7052786B2/en
Application filed by DENSO CORPORATION (株式会社デンソー)
Priority to DE112019006171.2T priority Critical patent/DE112019006171T5/en
Publication of WO2020121810A1 publication Critical patent/WO2020121810A1/en
Priority to US17/222,259 priority patent/US20210223058A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays

Definitions

  • the present disclosure relates to a display control device for displaying a virtual image, a display control program, and a computer-readable persistent tangible recording medium.
  • Patent Document 1 discloses a head-up display device that uses map information for display control of a virtual image. This device displays the shape of the road ahead of the vehicle as a virtual image based on the current position of the vehicle and map information.
  • map information includes high-precision map information and low-precision map information, which is relatively less accurate than high-precision map information.
  • however, Patent Document 1 gives no consideration to using the map information effectively.
  • the present disclosure aims to provide a display control device, a display control program, and a computer-readable persistent tangible recording medium that can effectively use map information.
  • a display control device that is used in a vehicle and that controls the display of a virtual image superimposed on the foreground of an occupant includes: a vehicle position acquisition unit that acquires the position of the vehicle; a map information acquisition unit that acquires high-precision map information corresponding to the position, or low-precision map information that is less accurate than the high-precision map information; and a display generation unit that generates the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired, and generates the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information cannot be acquired.
  • a display control program that is used in a vehicle and controls the display of a virtual image superimposed on the foreground of an occupant causes at least one processing unit to function as: a vehicle position acquisition unit that acquires the position of the vehicle; a map information acquisition unit that acquires high-precision map information corresponding to the position, or low-precision map information that is less accurate than the high-precision map information; and a display generation unit that generates the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired, and generates the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information cannot be acquired.
  • a computer-readable persistent tangible recording medium includes computer-implemented instructions that are used in a vehicle to control the display of a virtual image superimposed on the foreground of an occupant. The instructions comprise: acquiring the position of the vehicle; acquiring high-precision map information corresponding to the position, or low-precision map information that is less accurate than the high-precision map information; generating the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired; and generating the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information cannot be acquired.
  • a display control device that is used in a vehicle and that controls the display of a virtual image superimposed on the foreground of an occupant includes at least one processing unit. The at least one processing unit acquires the position of the vehicle; acquires high-precision map information corresponding to the position, or low-precision map information that is less accurate than the high-precision map information; generates the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired; and generates the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information cannot be acquired.
  • according to these configurations, the high-precision map information is used to generate the virtual image when it can be acquired, and the low-precision map information is used when it cannot. The virtual image can thus be displayed by selectively using the high-precision map information and the low-precision map information. Therefore, it is possible to provide a display control device, a display control program, and a computer-readable persistent tangible recording medium that use map information effectively.
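The map-selection behavior summarized above can be sketched in code. The following Python fragment is a minimal illustration only; all class and function names are hypothetical, and the patent does not prescribe any concrete implementation.

```python
# Minimal sketch of the map selection described above (hypothetical names).

class MapData:
    """Toy stand-in for map information with a coverage set."""

    def __init__(self, covered_positions, precision):
        self.covered_positions = set(covered_positions)
        self.precision = precision  # "high" or "low"

    def covers(self, position):
        return position in self.covered_positions


def generate_virtual_image(position, high_map, low_map):
    """Return (map used, display mode) for the route-guidance virtual image."""
    if high_map.covers(position):
        # High-precision map information is acquirable: first display mode.
        return high_map, "first"
    # Otherwise fall back to low-precision map information: second display mode.
    return low_map, "second"
```

In the embodiments, the "first" mode corresponds to the three-dimensional AR display and the "second" mode to the two-dimensional non-AR display.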
  • in the drawings: FIG. 1 is a schematic diagram of a vehicle system including an HCU according to a first embodiment; FIG. 2 shows an example of mounting the HUD in a vehicle; FIG. 3 is a block diagram showing a schematic configuration of the HCU; FIGS. 4 and 5 show examples of superimposed display; FIG. 6 shows an example of non-superimposed display; FIG. 7 shows a display gap caused by the superimposed display of a modification; FIG. 8 is a conceptual diagram showing an example of display switching timing; FIG. 9 is a flowchart showing an example of processing performed by the HCU; FIG. 10 is a schematic diagram of a vehicle system including an HCU according to a second embodiment; and FIG. 11 is a block diagram showing the schematic configuration of the HCU of the second embodiment.
  • the display control device of the first embodiment will be described with reference to FIGS. 1 to 9.
  • the display control device of the first embodiment is provided as an HCU (Human Machine Interface Control Unit) 20 used in the vehicle system 1.
  • the vehicle system 1 is used in a vehicle A, such as an automobile, that travels on a road.
  • the vehicle system 1 includes, for example, an HMI (Human Machine Interface) system 2, a locator 3, a periphery monitoring sensor 4, a driving support ECU 6, and a navigation device 7.
  • the HMI system 2, the locator 3, the peripheral monitoring sensor 4, the driving support ECU 6, and the navigation device 7 are connected to, for example, an in-vehicle LAN.
  • the locator 3 includes a GNSS (Global Navigation Satellite System) receiver 30, an inertial sensor 31, a high precision map database (hereinafter, high precision map DB) 32, and a locator ECU 33.
  • the GNSS receiver 30 receives positioning signals from a plurality of artificial satellites.
  • the inertial sensor 31 includes, for example, a gyro sensor and an acceleration sensor.
  • the high-precision map DB 32 is a non-volatile memory and stores high-precision map data (high-precision map information).
  • the high precision map DB 32 is provided by a memory device of a locator ECU 33 described later.
  • the high-precision map data has information about roads, information about marking lines such as white lines and road markings, information about structures, and the like.
  • the information about roads includes, for example, position information for each point, curve curvature and slope, and shape information such as connection relationship with other roads.
  • the information about the lane markings and road markings includes, for example, type information of lane markings and road markings, position information, and three-dimensional shape information.
  • the information about the structure includes, for example, type information, position information, and shape information of each structure.
  • the structures are road signs, traffic lights, street lights, tunnels, overpasses, buildings facing roads, and the like.
  • the high-precision map data has the above-mentioned various position information and shape information as point cloud data and vector data of feature points represented by three-dimensional coordinates. That is, it can be said that the high-precision map data is a three-dimensional map that includes altitude in addition to latitude and longitude with respect to position information.
  • the high-precision map data has such positional information with a relatively small error (for example, on the order of centimeters).
  • the high-precision map data is high-precision map data both in that it has position information based on three-dimensional coordinates including height information, and in that the error in the position information is relatively small.
  • High-precision map data is created based on information collected by surveying vehicles that actually travel on the roads. Therefore, high-precision map data exists only for areas where such information has been collected, and areas where it has not been collected are outside its coverage.
  • high-precision map data is currently prepared with a relatively wide coverage for highways and motorways, and with a relatively narrow coverage for general roads.
  • the locator ECU 33 is mainly composed of a microcomputer including a processor, a RAM, a memory device, an I/O, and a bus connecting these.
  • the locator ECU 33 is connected to the GNSS receiver 30, the inertial sensor 31, and the in-vehicle LAN.
  • the locator ECU 33 sequentially measures the position of the vehicle A by combining the positioning signals received by the GNSS receiver 30 with the measurement results of the inertial sensor 31.
  • the locator ECU 33 may also use the traveling distance or the like, obtained from detection results sequentially output from a vehicle speed sensor mounted in the own vehicle, for positioning the own vehicle. In addition, the locator ECU 33 may identify the vehicle position using the high-precision map data described below together with the detection results of a periphery monitoring sensor 4, such as a LIDAR, that detects point clouds of the road shape and feature points of structures. The locator ECU 33 outputs the vehicle position information to the in-vehicle LAN.
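As a rough illustration of the positioning described above, GNSS fixes can be blended with dead-reckoned estimates from the inertial sensor. The one-dimensional sketch below is hypothetical; the document does not specify the locator ECU 33's actual fusion algorithm, and the blend weight is illustrative only.

```python
# Hypothetical 1-D sketch of GNSS/inertial position fusion.

def dead_reckon(position, velocity, dt):
    """Propagate a position estimate using inertial-derived velocity over dt seconds."""
    return position + velocity * dt


def fuse_position(gnss_position, dr_position, gnss_weight=0.2):
    """Blend a GNSS fix with a dead-reckoned estimate (complementary-filter style)."""
    return gnss_weight * gnss_position + (1.0 - gnss_weight) * dr_position
```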
  • the locator ECU 33 has a map notification unit 301 as a functional block, as shown in FIG.
  • the map notification unit 301 determines whether information corresponding to the vehicle position, that is, information about the current vehicle position of the vehicle A, is included in the high-precision map data, based on the measured vehicle position information and the high-precision map data of the high-precision map DB 32.
  • the map notification unit 301 executes, for example, so-called map matching processing in which the traveling locus of the vehicle A is calculated from the vehicle position information and is superimposed on the road shape of the high precision map data.
  • the map notification unit 301 determines whether the current vehicle position is included in the high-precision map data based on the result of the map matching process.
  • the map notification unit 301 may also determine whether information about the current own vehicle position is included in the high-precision map data by using not only the two-dimensional position information (for example, longitude and latitude) of the vehicle A but also the height information from the own vehicle position information.
  • by the map matching processing or the processing using height information described above, the map notification unit 301 can determine which road the vehicle A is traveling on even when roads of different heights (for example, an elevated road and a ground road) are close to each other. The map notification unit 301 can thereby improve the determination accuracy.
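The height-aware determination can be illustrated as follows. The road-record layout and the tolerance value are assumptions made for this sketch; the actual map-matching processing is not detailed in the document.

```python
# Hypothetical sketch: pick the road whose height matches the measured vehicle
# height, so an elevated road and the ground road below it are distinguished.

def find_matching_road(roads, lon, lat, height, height_tol=3.0):
    """Return the first road record near (lon, lat) whose height is within
    height_tol metres of the vehicle's measured height, or None."""
    for road in roads:
        near = abs(road["lon"] - lon) < 1e-4 and abs(road["lat"] - lat) < 1e-4
        if near and abs(road["height"] - height) < height_tol:
            return road
    return None
```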
  • the map notification unit 301 outputs, to the HCU 20, notification information indicating that information about the vehicle position is included or not included in the high-precision map data based on the determination result.
  • the peripheral monitoring sensor 4 is an autonomous sensor that monitors the surrounding environment of the vehicle.
  • the periphery monitoring sensor 4 detects objects around the vehicle, including moving dynamic targets such as pedestrians, animals other than humans, and vehicles other than the own vehicle, and stationary static targets such as fallen objects on the road, guardrails, curbstones, lane markings, road surface markings, and trees.
  • examples of the periphery monitoring sensor 4 include a periphery monitoring camera that images a predetermined range around the own vehicle, and search wave sensors such as a millimeter wave radar that transmits search waves to a predetermined range around the own vehicle, a sonar, and a LIDAR.
  • the perimeter monitoring camera sequentially outputs captured images that are sequentially captured as sensing information to the in-vehicle LAN.
  • the exploration wave sensor sequentially outputs the scanning result based on the received signal obtained when the reflected wave reflected by the object is received, to the in-vehicle LAN as sensing information.
  • the perimeter monitoring sensor 4 of the first embodiment includes at least a front camera 41 whose imaging range is a predetermined range in front of the vehicle.
  • the front camera 41 is provided, for example, on the rearview mirror of the own vehicle, the upper surface of the instrument panel, or the like.
  • the driving support ECU 6 executes an automatic driving function that acts on behalf of a passenger.
  • the driving support ECU 6 recognizes the traveling environment of the own vehicle based on the vehicle position and map data of the own vehicle acquired from the locator 3, and the sensing information from the surroundings monitoring sensor 4.
  • examples of the automatic driving functions executed by the driving support ECU 6 include an ACC (Adaptive Cruise Control) function that controls the traveling speed of the own vehicle by adjusting the driving force and braking force so as to maintain a target inter-vehicle distance to the preceding vehicle. There is also an AEB (Autonomous Emergency Braking) function that forcibly decelerates the vehicle by generating a braking force based on forward sensing information.
  • the driving support ECU 6 may have other functions as a function of automatic driving.
  • the navigation device 7 includes a navigation map database (hereinafter, navigation map DB) 70 that stores navigation map data.
  • the navigation device 7 searches for a route that satisfies conditions such as time priority and distance priority to the set destination, and provides route guidance according to the searched route.
  • the navigation device 7 outputs the searched route to the in-vehicle LAN as scheduled route information.
  • the navigation map DB 70 is a non-volatile memory and stores navigation map data such as link data, node data, and road shapes.
  • Navigation map data is prepared in a relatively wider area than high-precision map data.
  • the link data is composed of a link ID for identifying the link, a link length indicating the length of the link, a link azimuth, a link travel time, node coordinates of the start and end of the link, road attributes, and the like.
  • the node data is composed of data such as a node ID uniquely numbered for each node on the map, node coordinates, a node name, a node type, connection link IDs describing the link IDs of the links connecting to the node, and an intersection type.
  • the navigation map data has node coordinates as two-dimensional position coordinate information. That is, it can be said that the navigation map data is a two-dimensional map including the latitude and longitude with respect to the position information.
  • the navigation map data is map data of relatively lower accuracy than the high-precision map data, both in that its position information has no height information and in that the error in its position information is relatively large.
  • the navigation map data is an example of low-precision map information.
  • the HMI system 2 includes an operation device 21, a display device 23, and an HCU 20, and receives an input operation from an occupant who is a user of the own vehicle and presents information to the occupant of the own vehicle.
  • the operation device 21 is a switch group operated by an occupant of the vehicle.
  • the operation device 21 is used to make various settings. For example, the operation device 21 may be a steering switch provided on a spoke portion of the steering wheel of the vehicle.
  • the display device 23 includes, for example, a head-up display (hereinafter referred to as HUD) 230, a multi-information display (MID) 231 provided on the meter, and a center information display (CID) 232.
  • the HUD 230 is provided on the instrument panel 12 of the own vehicle.
  • the HUD 230 forms a display image based on the image data output from the HCU 20 by a liquid crystal type or scanning type projector 230a, for example.
  • the navigation device 7 displays navigation map data, route information for the destination, and the like.
  • the HUD 230 projects the display image formed by the projector 230a onto a projection area PA defined by the front windshield WS as a projection member through an optical system 230b such as a concave mirror.
  • the projection area PA is assumed to be located in front of the driver's seat.
  • the luminous flux of the display image reflected by the front windshield WS toward the vehicle interior is perceived by an occupant sitting in the driver's seat.
  • the light flux from the foreground as a landscape existing in front of the vehicle, which is transmitted through the front windshield WS formed of translucent glass, is also perceived by the occupant sitting in the driver's seat.
  • the occupant can visually recognize the virtual image Vi of the display image formed in front of the front windshield WS, overlapping a part of the foreground.
  • the HUD 230 superimposes and displays the virtual image Vi on the foreground of the vehicle A.
  • the HUD 230 superimposes the virtual image Vi on a specific superimposition target in the foreground to realize so-called AR (Augmented Reality) display.
  • the HUD 230 realizes a non-AR display in which the virtual image Vi is not superimposed on a specific superimposition target, but simply superimposed and displayed on the foreground.
  • the projection member on which the HUD 230 projects the display image is not limited to the front windshield WS and may be a translucent combiner.
  • the HCU 20 is mainly composed of a microcomputer having a processor 20a, a RAM 20b, a memory device 20c, an I/O 20d, and a bus connecting these, and is connected to the HUD 230 and the in-vehicle LAN.
  • the HCU 20 controls the display by the HUD 230 by executing the display control program stored in the memory device 20c.
  • the HCU 20 is an example of a display control device, and the processor 20a is an example of a processing unit.
  • the memory device 20c is a non-transitional tangible storage medium that non-temporarily stores computer-readable programs and data.
  • the non-transitional physical storage medium is realized by a semiconductor memory or a magnetic disk.
  • the HCU 20 generates an image of the content displayed as the virtual image Vi on the HUD 230 and outputs it to the HUD 230.
  • the HCU 20 generates a route guidance image that guides the scheduled traveling route of the vehicle A to the occupants, as shown in FIGS. 4 to 6.
  • the HCU 20 generates an AR guide image Gi1 superimposed on the road surface as shown in FIGS. 4 and 5.
  • the AR guidance image Gi1 is generated, for example, in a display mode in which it is arranged three-dimensionally and continuously on the road surface along the planned travel route (hereinafter, three-dimensional display mode).
  • FIG. 4 is an example in which the AR guidance image Gi1 is displayed in a superimposed manner on a road with a slope.
  • FIG. 5 is an example in which the AR guidance image Gi1 is displayed in a superimposed manner along the road shape in which the number of lanes is increasing at the destination.
  • the HCU 20 generates a non-AR guidance image Gi2 simply displayed in the foreground as a route guidance image as shown in FIG.
  • the non-AR guidance image Gi2 is displayed in a two-dimensional display mode that is fixed with respect to the front windshield WS (hereinafter, two-dimensional display mode), such as an image highlighting the lane to travel in or an image of an intersection on which the traveling route is shown. That is, the non-AR guidance image Gi2 is a virtual image Vi that is not superimposed on a specific superimposition target in the foreground but is simply superimposed on the foreground.
  • the three-dimensional display mode is an example of the first display mode
  • the two-dimensional display mode is an example of the second display mode.
  • the HCU 20 has, as functional blocks related to the generation of the AR guidance image Gi1 and the non-AR guidance image Gi2, a vehicle position acquisition unit 201, a map determination unit 202, a map information acquisition unit 203, a sensor information acquisition unit 204, a display mode determination unit 205, and a display generation unit 206.
  • the vehicle position acquisition unit 201 acquires vehicle position information from the locator 3.
  • the vehicle position acquisition unit 201 is an example of a vehicle position acquisition unit.
  • the map determination unit 202 determines, based on the notification information or the like acquired from the locator 3, which of high-precision map data and navigation map data is to be acquired as the map information used to generate the virtual image Vi.
  • the map determination unit 202 determines whether or not high precision map data can be acquired.
  • the map determination unit 202 determines that the high-precision map data can be acquired when the current vehicle position of the vehicle A is included in the high-precision map data.
  • the map determination unit 202 performs this determination process based on the notification information output from the locator ECU 33.
  • the vehicle position used in the determination processing here may include an area around the vehicle A on which the virtual image Vi can be superimposed.
  • the map determination unit 202 may itself determine whether or not the high-precision map data can be acquired, based on the own vehicle position information and the high-precision map data acquired from the locator 3, independently of the notification information from the locator 3.
  • the map determination unit 202 may continuously perform the above-described determination processing during traveling, or may intermittently perform the determination processing every predetermined traveling section.
  • the map determination unit 202 also determines whether or not the high-precision map data includes information about the future traveling section GS of the vehicle A (section determination processing).
  • the future traveling section GS is, for example, the latest traveling section of the planned traveling route of the vehicle A for which the route guidance image needs to be displayed.
  • the display section in which the route guidance image needs to be displayed is, for example, a section including a point where a plurality of roads are connected, such as an intersection, or a section in which the lane needs to be changed.
  • the map determination unit 202 determines whether or not the entire range of the future traveling section GS as shown in FIG. 8 is included in the high precision map data.
  • FIG. 8 shows a situation in which vehicle A tries to enter a general road from a highway through a rampway. In FIG. 8, it is assumed that the vehicle A turns left at the intersection CP where the rampway and the general road are connected.
  • the road in FIG. 8 is divided into an area where both high-precision map data and navigation map data are provided and an area where only navigation map data is provided, with the two-dot chain line shown on the rampway as the boundary line.
  • in this example, the future traveling section GS is the section from the start point ps (for example, a point 300 m before the intersection CP) to the end point pf (for example, the exit point of the intersection). The section from the boundary line to the end point pf is included only in the navigation map data.
  • the map determination unit 202 determines that the high-precision map data does not include information about the future traveling section GS of the vehicle A.
  • the map determination unit 202 executes this section determination processing based on, for example, the planned route information provided by the navigation device 7 and the high-precision map data provided by the locator 3.
  • the map determination unit 202 executes this section determination processing at the timing when the vehicle A reaches the starting point ps or when the vehicle A approaches the starting point ps.
  • the map determination unit 202 may be configured to acquire the determination result of the above-described section determination processing executed by the locator ECU 33.
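The section determination processing reduces to an interval-containment check: the high-precision map is used only if it covers the entire future traveling section GS. The sketch below assumes distances measured along the planned route; the representation and names are hypothetical, not taken from the document.

```python
# Hypothetical sketch of the section determination processing.

def section_fully_covered(section, covered_intervals):
    """True if the [start, end] section (e.g. metres along the planned route)
    lies entirely inside one interval covered by the high-precision map."""
    start, end = section
    return any(lo <= start and end <= hi for lo, hi in covered_intervals)
```

In the FIG. 8 situation, the stretch from the boundary line on the rampway to the intersection exit is outside high-precision coverage, so a check like this fails and the navigation map data is used.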
  • the map determination unit 202 also determines whether or not a shape condition is satisfied for the road shape on which the vehicle A is traveling, that is, a condition under which generation of the AR guidance image Gi1 is unnecessary and is therefore stopped (shape determination processing).
  • the shape condition is satisfied, for example, when the route is evaluated to be a road shape that can accurately transmit the planned traveling route to the occupant by the non-AR guidance image Gi2. Then, if it is evaluated that the occupant may misidentify the planned traveling route when the non-AR guidance image Gi2 is displayed instead of the AR guidance image Gi1, the shape condition is not satisfied.
  • the road shape includes the number of lanes the road has, the gradient and the curvature, the connection relationship with other roads, and the like.
  • for example, when the road has only one lane, the lane in which the vehicle travels is uniquely determined, so the planned traveling route can be accurately conveyed by the non-AR guidance image Gi2, and the shape condition is satisfied.
  • likewise, when there is no other intersection between the vehicle A and the intersection for which right/left turn guidance is performed, the intersection at which to turn right or left is uniquely determined, so the planned traveling route is accurately conveyed by the non-AR guidance image Gi2, and the shape condition is satisfied.
  • when the road is a flat road having substantially no slope, the traveling destination of the vehicle A is visible, so the planned traveling route can be accurately conveyed by the non-AR guidance image Gi2, and the shape condition is satisfied.
  • the establishment of the shape condition may be determined by a combination of a plurality of cases described above, for example, when the road is a flat road and has only one lane.
  • the map determination unit 202 determines whether or not the shape condition is satisfied, based on the high-precision map data provided from the locator 3, the detection information from the perimeter monitoring sensor 4, and the like. Alternatively, the map determination unit 202 may be configured to acquire the determination result of the above-described shape determination process executed by the locator ECU 33.
  • the map determination unit 202 determines to acquire the high-precision map data when the high-precision map data can be acquired at the current vehicle position. However, when the high-precision map data does not include information about the future traveling section GS, or when the shape condition is satisfied, the map determination unit 202 determines to acquire the navigation map data even if the high-precision map data can be acquired at the current vehicle position.
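Putting the three criteria together, the map determination unit 202's decision can be sketched as below. The predicate names are hypothetical; the document describes the behavior only functionally.

```python
# Hypothetical sketch of the overall map determination decision.

def choose_map(current_position_covered, section_covered, shape_condition_met):
    """Return which map data to acquire: 'high_precision' or 'navigation'."""
    if not current_position_covered:
        return "navigation"   # high-precision map not acquirable here
    if not section_covered:
        return "navigation"   # future traveling section GS not fully covered
    if shape_condition_met:
        return "navigation"   # non-AR guidance suffices; AR generation is stopped
    return "high_precision"
```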
  • the map information acquisition unit 203 acquires either high-accuracy map data or navigation map data based on the determination result of the map determination unit 202.
  • the map information acquisition unit 203 acquires high-accuracy map data when it is determined that high-accuracy map data can be acquired.
  • the map information acquisition unit 203 acquires the navigation map data instead of the high accuracy map data when it is determined that the high accuracy map data cannot be acquired.
  • when it is determined that the high-precision map data does not include information about the future traveling section GS, the map information acquisition unit 203 acquires the navigation map data even when it is determined that the high-precision map data can be acquired. Likewise, when it is determined that the shape condition is satisfied, the map information acquisition unit 203 acquires the navigation map data even when it is determined that the high-precision map data can be acquired.
  • the map information acquisition unit 203 sequentially outputs the acquired map information to the display mode determination unit 205.
  • the sensor information acquisition unit 204 acquires detection information regarding a detected object in front of the vehicle A.
  • the detection information includes the height information of the road surface on which the AR guide image Gi1 is to be superimposed, or the height information of the detected object from which the height information can be estimated.
  • the detected objects include road markings such as stop lines, intersection center markings, and lane markings, as well as road signs, curbs, and road installations such as traffic lights.
  • the detection information is information for correcting the navigation map data or the superimposed position of the AR guidance image Gi1 when the AR guidance image Gi1 is generated using the navigation map data.
  • the detection information may include shape information of the traveling road, information about the number of lanes on the traveling road, information about the lane in which the vehicle A is currently traveling, and the like.
  • the sensor information acquisition unit 204 attempts to acquire the detection information, and when the detection information is acquired, sequentially outputs the detection information to the display mode determination unit 205.
  • the display mode determination unit 205 determines in which of the three-dimensional display mode and the two-dimensional display mode the display generation unit 206 generates the route guidance image, that is, which of the AR guidance image Gi1 and the non-AR guidance image Gi2 is displayed as the route guidance image.
  • when the AR guidance image Gi1 is displayed based on the navigation map data, the AR guidance image Gi1 may be displayed as if floating above the road surface, as in the modification shown in FIG. 7, or as if buried in the road surface.
  • Such a deviation of the superimposition position occurs because the navigation map data has particularly low accuracy of height information compared with the high-precision map data, or has no height information at all, so the AR guidance image Gi1 cannot be generated so as to reflect the gradient shape of the road.
  • In order to suppress the generation of the AR guidance image Gi1 with a shifted superimposition position, the display mode determination unit 205 selects whether the route guidance image is generated as the AR guidance image Gi1 or as the non-AR guidance image Gi2, based on the availability of the high-precision map data.
  • the display mode determination unit 205 determines the display mode of the route guidance image to be the three-dimensional display mode when the high-precision map data is acquired by the map information acquisition unit 203.
  • On the other hand, when the navigation map data is acquired, the display mode determination unit 205 determines the display mode of the route guidance image to be the two-dimensional display mode.
  • However, when the detection information is acquired together with the navigation map data, the display mode of the route guidance image is determined to be the three-dimensional display mode.
  • the display mode determination unit 205 outputs the determined display mode to the display generation unit 206.
  • the display generation unit 206 generates a route guidance image in the display mode determined by the display mode determination unit 205 based on the acquired various information.
  • When the display mode is determined to be the three-dimensional display mode, the display generation unit 206 identifies the three-dimensional position coordinates of the road surface on which the AR guidance image Gi1 is superimposed, based on the three-dimensional position coordinate information of the high-precision map data.
  • the display generation unit 206 specifies the relative three-dimensional position (relative position) of the road surface with respect to the vehicle A based on the position coordinates of the road surface and the vehicle position coordinates.
  • the display generation unit 206 also calculates or acquires road surface gradient information based on the high-precision map data.
  • the display generation unit 206 calculates the gradient information by, for example, a geometric calculation using position coordinates of two points that define a slope. Alternatively, the display generation unit 206 may calculate the gradient information based on the three-dimensional shape information of the lane markings. Alternatively, the display generation unit 206 may estimate the gradient information based on the information that can estimate the gradient information among the information included in the high-precision map data.
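As one illustration of the geometric calculation mentioned above, the gradient between two points that define a slope can be computed as rise over horizontal run. The function below is a hypothetical sketch, not the patented implementation; coordinates are assumed to be metric (x, y, z):

```python
import math

def gradient_percent(p1, p2):
    """Road gradient between two 3D points (x, y, z) in meters.

    Returns rise over horizontal run as a percentage (road-sign
    convention); an illustrative form of the geometric calculation
    attributed to the display generation unit 206.
    """
    dx, dy, dz = (b - a for a, b in zip(p1, p2))
    run = math.hypot(dx, dy)   # horizontal distance between the points
    return 100.0 * dz / run
```

For example, two points 50 m apart horizontally with a 2.5 m rise give a 5% gradient.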
  • the display generation unit 206 calculates the projection position and projection shape of the AR guidance image Gi1 by geometric calculation based on the specified relative position, the occupant's viewpoint position acquired from the DSM 22, the position of the projection area PA, the road surface gradient at the relative position, and the like.
  • the display generation unit 206 generates the AR guide image Gi1 based on the calculation result, outputs the data to the HUD 230, and displays the AR guide image Gi1 as the virtual image Vi.
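The geometric calculation that maps a road-surface point to a projection position can be illustrated by intersecting the line of sight from the occupant's viewpoint with the projection plane. The sketch below assumes a simplified vehicle coordinate frame (x forward, y left, z up) and a flat vertical projection plane, ignoring windshield optics; all names are illustrative:

```python
def project_to_hud(point, eye, plane_x):
    """Intersect the eye-to-point sight line with the vertical
    projection plane at longitudinal position plane_x.

    point, eye: (x, y, z) in a vehicle frame (x forward, z up).
    Returns the (y, z) coordinates on the plane, or None when the
    road point is not ahead of the plane. Simplified sketch only.
    """
    ex, ey, ez = eye
    px, py, pz = point
    if px <= plane_x:
        return None
    t = (plane_x - ex) / (px - ex)   # sight-line parameter at the plane
    return (ey + t * (py - ey), ez + t * (pz - ez))
```

A road point 10 m ahead seen from an eye 1.2 m high through a plane 2 m ahead lands near the bottom of the plane, which is the intuition behind the perspective shape of the AR guidance image.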
  • When the three-dimensional display mode is determined based on the fact that the sensor information acquisition unit 204 has acquired the detection information, the display generation unit 206 combines the two-dimensional position coordinates of the navigation map with the peripheral information to generate the AR guidance image Gi1. For example, the display generation unit 206 specifies the three-dimensional position coordinates of the road surface on which the AR guidance image Gi1 is superimposed from the height information acquired or estimated from the detection information and the two-dimensional position coordinates of the navigation map. The display generation unit 206 then calculates the projection position and projection shape of the AR guidance image Gi1 using the specified position coordinates, as in the case of using the high-precision map data.
  • the display generation unit 206 may use these pieces of information to correct the superimposed position of the AR guidance image Gi1.
  • When the display mode is determined to be the two-dimensional display mode, the display generation unit 206 acquires the information on the two-dimensional position coordinates of the navigation map and generates the route guidance image.
  • In this case, the display generation unit 206 fixes the superimposed position of the route guidance image with respect to the foreground at a preset position, based on the acquisition of the two-dimensional position coordinates.
  • the display generation unit 206 determines the projected shape of the route guidance image based on the two-dimensional position coordinates and generates the route guidance image.
  • the display generation unit 206 outputs the generated data to the HUD 230 and displays the route guidance image as a virtual image Vi for non-AR display.
  • the display generation unit 206 generates a mode presentation image Ii for presenting the display mode of the displayed route guidance image to the occupant.
  • the display generation unit 206 generates, for example, the aspect presentation image Ii as a character image.
  • When the AR guidance image Gi1 is displayed, the display generation unit 206 generates the mode presentation image Ii indicating the three-dimensional display mode as the character image “3D”.
  • When the non-AR guidance image Gi2 is displayed, the display generation unit 206 generates the character image “2D” as the mode presentation image Ii indicating the two-dimensional display mode.
  • the display generation unit 206 may present the mode presentation image Ii as information other than character information such as symbols and designs. Further, the display generation unit 206 may display the aspect presentation image Ii on a display device other than the HUD 230 such as the CID 232 or the MID 231. In this case, the display generation unit 206 can reduce the amount of information in the projection area PA of the HUD 230 while presenting the display mode to the occupant, and can reduce the annoyance of the occupant.
  • the “display generation unit 206” is an example of the “mode presentation unit”.
  • the HCU 20 starts the process of FIG. 9 when the destination is set in the navigation device 7 and the planned travel route is set.
  • step S10 it is determined whether or not the route guidance display is started. For example, in step S10, when the distance between the guidance point and the vehicle A is less than a threshold value (for example, 300 m), it is determined to start the route guidance display. When it is determined that the route guidance display is started, the process proceeds to step S20, and the vehicle position information is acquired from the locator 3.
  • step S30 notification information about the vehicle position and its surroundings is acquired from the locator 3, and the process proceeds to step S40.
  • In step S40, it is determined based on the notification information or the like whether or not the high-precision map data can be acquired. If it is determined that it can be acquired, the process proceeds to step S42.
  • step S42 based on the information from the locator 3, it is determined whether or not there is high-precision map data in the future traveling section GS. If it is determined that there is high-precision map data in the future traveling section GS, the process proceeds to step S44, and it is determined whether or not the shape condition is satisfied. If it is determined that the shape condition is not satisfied, the process proceeds to step S50.
  • step S50 the map information acquisition unit 203 acquires high precision map data.
  • step S60 a route guidance image in a three-dimensional display mode is generated based on the acquired three-dimensional coordinates of the high-precision map data, and the process proceeds to step S120.
  • step S120 the generated route guidance image is output to the HUD 230, and the HUD 230 is caused to generate the route guidance image as the virtual image Vi.
  • If it is determined in step S40 that the high-precision map data cannot be acquired, the process proceeds to step S70.
  • step S70 it is determined whether the detection information can be acquired from the vehicle-mounted sensor. If it is determined that the detection information cannot be acquired, the process proceeds to step S80.
  • step S80 the navigation map data is acquired from the navigation device 7, and the process proceeds to step S90.
  • step S90 a route guidance image is generated in a two-dimensional display mode based on the navigation map data. After that, the process proceeds to step S120, and the generated route guidance image is output to the HUD 230.
  • If it is determined in step S42 that the future traveling section GS is not included in the high-precision map data, the process proceeds to step S80. Likewise, if it is determined in step S44 that the shape condition is satisfied, the process proceeds to step S80.
  • step S70 if it is determined in step S70 that the detection information can be acquired from the peripheral monitoring sensor 4, the process proceeds to step S100.
  • step S100 navigation map data and detection information are acquired.
  • step S110 a route guidance image in a three-dimensional display mode is generated based on the navigation map data and the detection information.
  • step S120 the generated image data is output to the HUD 230.
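The branch structure of the process of FIG. 9 (steps S40 through S110) can be condensed into a single routine. This is an illustrative summary under the assumption that each branch outcome is available as a boolean; it is not the claimed implementation:

```python
def plan_route_guidance(hp_ok, hp_covers_gs, shape_ok, detect_ok):
    """Condensed, illustrative summary of FIG. 9, steps S40-S110.

    Returns (map data used, display mode) for the route guidance
    image; each argument is the yes/no outcome of one flowchart branch.
    """
    if hp_ok:
        if hp_covers_gs and not shape_ok:
            return ("high_precision", "3D")       # S50 -> S60
        return ("navigation", "2D")               # S42 no / S44 yes -> S80 -> S90
    if detect_ok:
        return ("navigation+detection", "3D")     # S70 yes -> S100 -> S110
    return ("navigation", "2D")                   # S70 no -> S80 -> S90
```

Note that, per the flow, a shortfall in high-precision coverage (S42/S44) leads directly to S80 without consulting the detection information, whereas detection information is consulted only when high-precision data is unavailable at S40.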
  • the HCU 20 includes a map information acquisition unit 203 that acquires map information relating to the superimposed position of the virtual image Vi in the foreground as high-precision map data or navigation map data, and a display generation unit 206 that generates the virtual image Vi based on the map information.
  • When the high-precision map data can be acquired, the high-precision map data is used to generate the virtual image Vi; when it cannot be acquired, the low-precision map information is used to generate the virtual image Vi. As a result, the virtual image Vi can be displayed by selectively using the high-precision map data and the low-precision map information. As described above, it is possible to provide the HCU 20 and the display control program that can make effective use of map information.
  • the display generation unit 206 superimposes the virtual image Vi on the road surface in the foreground, which is a specific superimposition target, in the three-dimensional display mode, and does not superimpose the virtual image Vi on the road surface in the two-dimensional display mode.
  • the HCU 20 can avoid superimposing the virtual image Vi on the road surface based on the navigation map data of relatively low accuracy. Therefore, the HCU 20 can suppress the occurrence of the displacement of the display position due to the superimposed display of the virtual image Vi based on the map information with low accuracy.
  • When the future traveling section GS is not included in the high-precision map data, the display generation unit 206 generates the virtual image Vi in the two-dimensional display mode even if the high-precision map data can be acquired. According to this, even if the high-precision map data can be obtained at the current position, the virtual image Vi is not generated in the three-dimensional display mode if the high-precision map data does not exist at the guidance point. It is therefore possible to avoid changing the display mode of the virtual image Vi from the three-dimensional display mode to the two-dimensional display mode near the guidance point, and the HCU 20 can prevent the occupant from being annoyed by such a change in the display mode.
  • When the shape condition for stopping the generation of the virtual image Vi in the three-dimensional display mode is satisfied with respect to the shape of the road on which the vehicle A is traveling, the display generation unit 206 generates the virtual image Vi in the two-dimensional display mode even when the high-precision map data can be acquired.
  • According to this, the HCU 20 can generate the virtual image Vi in the two-dimensional display mode when the traveling road has a shape for which information can be transmitted to the occupant relatively easily even in the two-dimensional display mode.
  • Therefore, the HCU 20 can suppress the complication of processing due to the use of the high-precision map data while still transmitting the information of the virtual image Vi to the occupant.
  • the HCU 20 includes a sensor information acquisition unit 204 that acquires detection information from the peripheral monitoring sensor 4.
  • When the high-precision map data cannot be acquired but the detection information can be acquired, the display generation unit 206 generates the virtual image Vi in the three-dimensional display mode based on the combination of the navigation map data and the detection information. According to this, even when the HCU 20 cannot acquire the high-precision map data, the HCU 20 can combine the detection information with the navigation map data to generate the virtual image Vi in the same display mode as when the high-precision map data is used.
  • the display generation unit 206 presents to the occupant in which of the three-dimensional display mode and the two-dimensional display mode the virtual image Vi is generated. According to this, the HCU 20 can more directly present the display mode of the virtual image Vi to the occupant. Therefore, the HCU 20 can make it easier for the occupant to understand the information indicated by the virtual image Vi.
  • the map information acquisition unit 203 acquires map information including at least one of road gradient information, lane marking three-dimensional shape information, and road gradient estimation information as high-precision map data. According to this, the HCU 20 can obtain or estimate the road gradient information and generate the virtual image Vi in the three-dimensional display mode. Therefore, the HCU 20 can more reliably suppress the shift in the display position of the virtual image Vi in the three-dimensional display mode.
  • the HCU 20 acquires the high precision map data stored in the locator 3. Instead of this, the HCU 20 may acquire the probe map data as high precision map information.
  • the center 9 receives the probe information transmitted from the plurality of probe vehicles M at the communication unit 91 and stores it in the control unit 90.
  • the probe information is information acquired by the perimeter monitoring sensor 4, the locator 3, and the like in each probe vehicle M, and includes the traveling locus of the probe vehicle M, road shape information, and the like represented by three-dimensional position coordinates.
  • the control unit 90 is mainly composed of a microcomputer including a processor, a RAM, a memory device, an I/O, and a bus connecting these.
  • the control unit 90 includes a map generation unit 90a as a functional block.
  • the map generator 90a generates probe map data based on the acquired probe information. Since the probe information is data including three-dimensional position coordinates, the generated probe map data is three-dimensional map data including height information of each point.
  • the vehicle system 1 communicates with the center 9 via the wireless communication network at the communication unit 8 and acquires probe map data.
  • the communication unit 8 stores the acquired probe map data in the driving support ECU 6.
  • the driving support ECU 6 has a map notification unit 601 as a functional block. Similar to the map notification unit 301 of the locator 3 in the first embodiment, the map notification unit 601 determines, based on the measured vehicle position and the information acquired from the navigation device 7, whether or not information regarding the own vehicle position and its surrounding area is included in the probe map data. When the map notification unit 601 determines that the probe map data includes information about the vehicle position and its surrounding area, it outputs that fact to the HCU 20 as notification information.
  • the map information acquisition unit 203 of the HCU 20 acquires the probe map data from the driving support ECU 6 when the map determination unit 202 determines that the probe map data that is the high-accuracy map information can be acquired.
  • the display generation unit 206 generates the AR guide image Gi1 based on the probe map data.
  • the HCU 20 of the third embodiment causes the route guidance image to be superimposed and displayed on the road surface at the superimposition position based on the high-precision map data in the first display mode, and causes the route guidance image to be superimposed and displayed on the road surface at the superimposition position based on the navigation map data in the second display mode.
  • the route guidance image in the first display mode will be referred to as a first AR guidance image CT1
  • the route guidance image in the second display mode will be referred to as a second AR guidance image CT2.
  • When the high-precision map data can be acquired, the display mode determination unit 205 determines to display the first AR guidance image CT1; when the high-precision map data cannot be acquired and the navigation map data can be acquired, it determines to display the second AR guidance image CT2.
  • In addition, when the freshness condition is satisfied, the display mode determination unit 205 determines to display the second AR guidance image CT2 even if the high-precision map data can be acquired.
  • the freshness condition is satisfied, for example, when the high-precision map data is older than the navigation map data.
  • the display mode determination unit 205 evaluates the magnitude of the superimposed position shift when displaying in the second display mode based on the acquired various information.
  • the display mode determination unit 205 evaluates the magnitude of the superimposed position shift, for example, based on the positioning accuracy of the vehicle position and the presence/absence of feature recognition information.
  • the display mode determination unit 205 determines whether or not the positioning accuracy of the vehicle position is at a predetermined level or higher. Specifically, the display mode determination unit 205 evaluates the vehicle position acquired from the locator 3 based on the detection information acquired from the surroundings monitoring sensor 4. For example, the display mode determination unit 205 detects the intersection CP from the image captured by the front camera 41 and analyzes the relative position of the vehicle A with respect to the intersection CP. The display mode determination unit 205 then determines whether or not the magnitude of the deviation between the position of the vehicle A specified from the relative position and the map data, and the own vehicle position acquired from the locator 3, is equal to or greater than a predetermined value.
  • the display mode determination unit 205 may detect an object other than the intersection CP capable of specifying the position of the vehicle A from the captured image and perform the above-described processing.
  • the display mode determination unit 205 may acquire the analysis result of the captured image from another ECU such as the driving assistance ECU 6.
  • Alternatively, the display mode determination unit 205 may determine the positioning accuracy based on whether an evaluation value based on the residual of the pseudo range, the number of positioning satellites captured by the locator 3, the S/N ratio of the positioning signal, or the like is at a predetermined level or higher.
  • the display mode determination unit 205 determines whether the feature recognition information is acquired from the surroundings monitoring sensor 4.
  • the feature recognition information is recognition information of the feature by the peripheral monitoring sensor 4, and is information that can be used to correct the overlapping position of the vehicle A in the front, rear, left, and right directions.
  • the features include, for example, road markings such as stop lines, intersection central markings, and lane markings. By correcting the own vehicle position on the map data based on the relative positions of these features with respect to the vehicle A, it is possible to correct the overlapping position of the second AR guide image CT2 in the front-rear and left-right directions.
  • road boundaries such as curbs and road installations such as signs may be included in the features that can be used to correct the vehicle position.
  • the display mode determination unit 205 determines the superimposed position shift of the displayed second AR guidance image CT2 based on the combination of the above various types of information, that is, the combination of the positioning accuracy of the vehicle position and the presence or absence of the feature recognition information. Evaluate the size. For example, the display mode determination unit 205 classifies the magnitude of the superposition positional deviation into three levels of “small”, “medium”, and “large” according to the combination.
  • When the positioning accuracy is at the predetermined level or higher and the feature recognition information is present, the display mode determination unit 205 determines that the magnitude of the deviation is small.
  • When the positioning accuracy is at the predetermined level or higher but there is no feature recognition information, the display mode determination unit 205 determines that the magnitude of the deviation is medium.
  • the display mode determination unit 205 determines that the magnitude of the deviation is medium even when the positioning accuracy is less than the predetermined level and the feature recognition information is present.
  • the display mode determination unit 205 determines that the magnitude of the deviation is large when the positioning accuracy is less than the predetermined level and there is no feature recognition information.
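The three-level classification described above can be sketched as a lookup over the two boolean factors. The function below is illustrative only; the level names mirror the "small/medium/large" wording of the text:

```python
def deviation_level(accuracy_ok: bool, feature_info: bool) -> str:
    """Classify the expected superimposition-position deviation of the
    second AR guidance image CT2 from the combination of positioning
    accuracy and feature recognition information (illustrative)."""
    if accuracy_ok and feature_info:
        return "small"     # good positioning and a correctable feature
    if accuracy_ok or feature_info:
        return "medium"    # exactly one of the two factors available
    return "large"         # neither factor available
```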
  • the display mode determination unit 205 provides the display generation unit 206 with the determination result of the display mode and the magnitude of the deviation evaluated in the case of the second display mode together with the information necessary for generating the route guidance image.
  • the display generation unit 206 generates either the first AR guidance image CT1 or the second AR guidance image CT2 based on the information provided by the display mode determination unit 205.
  • Each AR guidance image CT1, CT2 shows the planned traveling route of the vehicle A at the guidance point by AR display.
  • Each of the AR guide images CT1 and CT2 is an AR virtual image in which the road surface is to be superimposed, as in the first embodiment.
  • each AR guidance image CT1, CT2 includes an approach route content CTa indicating an approach route to the intersection CP and an exit route content CTe indicating an exit route from the intersection CP.
  • the approach route content CTa is, for example, a plurality of triangular objects arranged along the planned travel route.
  • the exit route content CTe is a plurality of arrow-shaped objects arranged along the planned travel route.
  • the display generation unit 206 determines the superposition position and the superimposition shape of the first AR guidance image CT1 using the high-precision map data. Specifically, the display generation unit 206 provides various position information such as the road surface position based on the high-precision map data, the own vehicle position by the locator 3, the occupant's viewpoint position by the DSM 22, and the positional relationship of the set projection area PA. To use. The display generation unit 206 calculates the superimposed position and the superimposed shape of the first AR guide image CT1 by geometrical calculation based on the various position information.
  • the display generation unit 206 reproduces the current traveling environment of the vehicle A in the virtual space based on the vehicle position information based on the high precision map data, the high precision map data, the detection information, and the like. Specifically, as shown in FIG. 12, the display generation unit 206 sets the own vehicle object AO at the reference position in the virtual three-dimensional space. The display generation unit 206 maps the road model having the shape indicated by the map data in the three-dimensional space in association with the own vehicle object AO based on the own vehicle position information.
  • the display generation unit 206 sets the virtual camera position VP and the superposition range SA in association with the own vehicle object AO.
  • the virtual camera position VP is a virtual position corresponding to the viewpoint position of the occupant.
  • the display generation unit 206 sequentially corrects the virtual camera position VP for the own vehicle object AO based on the latest viewpoint position coordinates acquired from the DSM 22.
  • the superposition range SA is a range in which the virtual image Vi can be superposed and displayed. Based on the virtual camera position VP and the outer-edge position (coordinate) information of the projection area PA stored in advance in the storage unit 13 (see FIG. 1), the display generation unit 206 sets, as the superposition range SA, the forward area that falls inside the projection plane when viewed from the virtual camera position VP.
  • the superposition range SA corresponds to the projection area PA and the angle of view of the HUD 230.
  • the display generation unit 206 arranges a virtual object VO imitating the first AR guide image CT1 in the virtual space.
  • the virtual object VO is arranged along the planned traveling route on the road surface of the road model in the three-dimensional space.
  • the virtual object VO is set in the virtual space when displaying the first AR guide image CT1 as a virtual image.
  • the virtual object VO defines the position and shape of the first AR guide image CT1. That is, the shape of the virtual object VO viewed from the virtual camera position VP becomes the virtual image shape of the first AR guide image CT1 visually recognized from the viewpoint position.
  • the display generation unit 206 arranges the virtual object VO in the central portion Lc of the own lane Lns in the lane width direction.
  • the central portion Lc is, for example, a midway point between the lane boundary lines on both sides defined by the lane markings of the own lane Lns or the road edges.
  • the superposition position of the approach route content CTa is set to the substantial center portion Lc of the own lane Lns (see FIG. 3).
  • Alternatively, the approach route content CTa may be displayed so as to extend from the center of the own lane Lns along the central portion of the approach lane.
  • the exit route content CTe is arranged so as to be lined up following the approach route content CTa along the planned travel route.
  • the exit route content CTe is superimposed on the intersection CP and on a position floating from the road surface at the center of the exit lane. Note that, as shown in FIG. 13, when the road surface to be superimposed is not visible, the superimposition position of the exit route content CTe is determined so that it is visually recognized as floating above the upper end of the road surface within the angle of view.
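As a simple illustration of arranging the approach route content CTa along the planned travel route at the lane center Lc, the object positions could be generated at regular intervals. The spacing, count, and coordinate convention below are assumptions for the sketch, not taken from the disclosure:

```python
def approach_content_positions(start_x, count, spacing, center_y):
    """Positions (x, y) of the plural triangular objects of the
    approach route content CTa: arranged at regular intervals along
    the planned travel route, laterally centered at the lane center Lc.
    All parameters are illustrative."""
    return [(start_x + i * spacing, center_y) for i in range(count)]
```

For a straight approach, each object shares the lane-center lateral coordinate and only the longitudinal coordinate advances.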
  • the display generation unit 206 starts displaying the above first AR guidance image CT1 when the remaining distance to the intersection CP is below a threshold value (for example, 300 m).
  • the display generation unit 206 sequentially updates the superimposed position and superimposed shape of the first AR guidance image CT1 so that the first AR guidance image CT1 is displayed as relatively fixed to the road surface. That is, the display generation unit 206 displays the first AR guidance image CT1 so that the occupant perceives it as following the road surface that moves relative to the vehicle A as the vehicle A travels.
  • the display generation unit 206 determines the superimposed position and the superimposed shape of the second AR guidance image CT2 using the navigation map data instead of the high accuracy map data.
  • the display generation unit 206 sets the road surface position on the assumption that the road surface to be superimposed is a flat road surface without undulations.
  • the display generation unit 206 sets a horizontal road surface as the virtual road surface to be superimposed, and performs geometric calculation based on the virtual road surface position and the other various position information, thereby calculating the superimposed position and superimposed shape of the second AR guidance image CT2.
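A side-view sketch shows why the flat-road assumption of the second display mode shifts the superimposition position: the eye-to-road sight line intersects the projection plane at a different height when the actual road is elevated. The coordinate frame (x forward, z up) and all names below are illustrative assumptions:

```python
def hud_height_for_ground_point(eye_h, dist, plane_dist, road_z=0.0):
    """Side view: height on the projection plane at plane_dist where
    an eye at height eye_h sees a road point at longitudinal distance
    dist with elevation road_z. Flat-road assumption means road_z = 0.
    Simplified sketch; HUD optics are not modeled."""
    t = plane_dist / dist
    return eye_h + t * (road_z - eye_h)

def flat_road_error(eye_h, dist, plane_dist, actual_z):
    """Vertical deviation on the projection plane caused by assuming a
    flat road when the actual road point has elevation actual_z."""
    return (hud_height_for_ground_point(eye_h, dist, plane_dist, actual_z)
            - hud_height_for_ground_point(eye_h, dist, plane_dist, 0.0))
```

An uphill road (actual_z > 0) produces a positive error, i.e., the content drawn under the flat-road assumption appears below the real road surface, matching the sunken or floating appearance described earlier.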
  • the virtual road surface set in this way may be less accurate than the road surface position set based on the high-precision map data.
  • the virtual road surface at the intersection CP portion may be displaced from the actual road surface.
  • In the drawing, the shape of the virtual road surface is depicted as reflecting the upward slope in order to clearly show the deviation of the virtual road surface at the intersection CP; in reality, however, the upward slope is not necessarily reflected on the virtual road surface.
  • the display generation unit 206 determines the horizontal position of the virtual object VO in the virtual space according to the magnitude of the shift. Specifically, when the magnitude of the shift is small, the display generation unit 206 arranges the virtual object VO at the vehicle center position Vc, a position within the superposition range SA corresponding to the center of the vehicle A.
  • the vehicle center position Vc is the position, within the superposition range SA, of a virtual straight line assumed on the virtual road surface that passes through the center of the vehicle A in the vehicle width direction and extends in the front-rear direction of the vehicle A.
  • the entry route contents CTa are arranged obliquely with respect to the vertical direction of the projection area PA, as shown in FIG.
  • the display generation unit 206 arranges the second AR guide image CT2 in the central portion Ac in the left-right direction of the projection area PA.
  • the approach route contents CTa are displayed in a state of being arranged side by side in the vertical direction of the projection area PA, as shown in FIG.
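The selection between the vehicle center position Vc and the central portion Ac described above can be sketched as follows; the shift labels and the returned structure are assumptions introduced for illustration.

```python
def choose_overlay_layout(shift_level):
    """Select where and how the second AR guidance image is arranged,
    following the behavior described above: a small superimposed-position
    shift anchors the content to the vehicle-center position Vc with an
    oblique arrangement; a larger shift falls back to the lateral center
    Ac of the projection area with a vertical arrangement.
    """
    if shift_level == "small":
        return {"anchor": "vehicle_center_Vc", "arrangement": "oblique"}
    return {"anchor": "projection_center_Ac", "arrangement": "vertical"}
```

The design intent is graceful degradation: when the superimposed position is trustworthy the content stays world-locked, and when it is not, the content retreats to a screen-locked position that cannot visibly misregister with the road.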
  • the display generation unit 206 corrects the superimposed position based on the feature recognition information. For example, the display generation unit 206 corrects the vehicle position in the front-rear and left-right directions on the virtual road surface set based on the navigation map data, based on the feature recognition information, and then calculates the superimposed position and shape of the second AR guidance image CT2.
  • the display generation unit 206 corrects the superposition position based on the height correction information.
  • the height correction information is, for example, three-dimensional position information of the roadside device acquired by road-to-vehicle communication.
  • the display generation unit 206 may acquire the information via the V2X communication device mounted on the vehicle A.
  • the height correction information may instead be height information of an object detected by the periphery monitoring sensor 4. That is, when three-dimensional position information of a road installation, road marking, or the like can be specified by analyzing the detection information of the periphery monitoring sensor 4, the height information included in that three-dimensional position information may be included in the height correction information.
  • the display generation unit 206 changes the position and shape of the virtual road surface from the horizontal road surface based on the height correction information, and thereby corrects, for example, the superimposed position in the height direction of the second AR guidance image CT2 virtually arranged on the virtual road surface.
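A minimal sketch of the height correction above. Averaging the available height samples is an assumption introduced here; the disclosure only states that the virtual road surface is changed from the horizontal plane based on the height correction information.

```python
def corrected_road_height(default_height, height_corrections):
    """Return the virtual road-surface height after applying height
    correction information (e.g. roadside-unit positions obtained by
    road-to-vehicle communication, or heights of objects sensed by the
    periphery monitoring sensor). With no correction information the
    flat horizontal-surface assumption is kept.
    """
    if not height_corrections:
        return default_height
    return sum(height_corrections) / len(height_corrections)
```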
  • the display generation unit 206 limits the superimposed display of the second AR guidance image CT2 to the near side of the planned travel route relative to the first AR guidance image CT1. Specifically, the display generation unit 206 hides the portion of the exit route content CTe of the second AR guidance image CT2 that would be superimposed farther from the vehicle A than the first AR guidance image CT1, and displays only the portion superimposed on the near side. In the example shown in FIGS. 15 and 16, three exit route contents CTe are displayed for the first AR guidance image CT1, whereas for the second AR guidance image CT2 the exit route content CTe is limited to only one on the near side. That is, the second AR guidance image CT2 is content that presents the exit direction from the intersection CP without presenting the course of the exit route, and is simpler than the first AR guidance image CT1.
  • the display generation unit 206 starts displaying the above-described second AR guidance image CT2 at a timing different from that of the first AR guidance image CT1. Specifically, the display generation unit 206 displays the non-AR guidance image Gi2 instead of the second AR guidance image CT2 when the remaining distance to the intersection CP falls below the first threshold, and switches from the non-AR guidance image Gi2 to the second AR guidance image CT2 when the remaining distance falls below a second threshold (for example, 100 m) smaller than the first threshold. That is, the display generation unit 206 starts displaying the second AR guidance image CT2 at a stage closer to the intersection CP than when displaying the first AR guidance image CT1.
  • the threshold value for displaying the non-AR guidance image Gi2 may not be the first threshold value as long as it is a value larger than the second threshold value.
  • in step S44, the HCU 20 determines whether the shape condition is satisfied. If the shape condition is not satisfied, the HCU 20 proceeds to step S46.
  • in step S46, the display mode determination unit 205 evaluates the freshness condition of the high-precision map data. If the freshness condition is not satisfied, the process proceeds to step S50; if it is satisfied, the process proceeds to step S80.
  • when the high-precision map data is acquired in step S50, the process proceeds to step S65, and the display generation unit 206 generates the first AR guidance image CT1. On the other hand, when the navigation map data is acquired in step S80, the process proceeds to step S81.
  • in step S81, the display generation unit 206 determines whether the remaining distance to the intersection CP is below the second threshold. If it is not below the second threshold, the process proceeds to step S82, the non-AR guidance image Gi2 is generated, and the process then proceeds to step S120. If, in step S81, the remaining distance is below the second threshold, the process proceeds to step S83.
  • in step S83, the display mode determination unit 205 and the like acquire correction information for the superimposed position via the sensor information acquisition unit 204. If no correction information can be acquired, step S83 is skipped.
  • in step S84, the display mode determination unit 205 evaluates the magnitude of the positional deviation of the second AR guidance image CT2, and the process proceeds to step S95.
  • in step S95, the display generation unit 206 generates the second AR guidance image CT2 based on the acquired navigation map data, the correction information, the information on the magnitude of the positional deviation, and the like.
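The S44 to S95 branching above can be condensed into a small decision function. The return labels and the literal reading of the freshness condition (satisfied means the navigation map data is used) are assumptions introduced for illustration.

```python
def route_guidance_step(shape_ok, freshness_ok, remaining_m,
                        second_threshold_m=100.0):
    """Condensed sketch of the S44-S95 flow: which route-guidance
    image the HCU generates for the upcoming intersection CP.
    """
    if shape_ok:                           # shape condition satisfied
        return "non-AR (Gi2)"
    if not freshness_ok:                   # S46 -> S50 -> S65
        return "first AR (CT1)"
    # Navigation map data acquired (S80): choose by remaining distance.
    if remaining_m >= second_threshold_m:  # S81 -> S82
        return "non-AR (Gi2)"
    return "second AR (CT2)"               # S83 -> S84 -> S95
```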
  • the HCU 20 can superimpose the virtual image Vi on a specific superimposition target while using the high-precision map data in areas where it is available and the navigation map data in areas where it is not.
  • the display generation unit 206 starts displaying the second AR guidance image CT2 when the remaining distance to the intersection CP reaches the second threshold, which is shorter than the first threshold at which the first AR guidance image CT1 is displayed. Since the terrain at an intersection CP is often relatively flat, starting the display of the second AR guidance image CT2 at a stage closer to the intersection CP than the display scene of the first AR guidance image CT1 lets the display generation unit 206 suppress the magnitude of the positional deviation of the second AR guidance image CT2, or at least shorten the traveling section in which that deviation becomes large.
  • the disclosure herein is not limited to the illustrated embodiments.
  • the disclosure encompasses the illustrated embodiments and variations based on them.
  • the disclosure is not limited to the combination of parts and/or elements shown in the embodiments.
  • the disclosure can be implemented in various combinations.
  • the disclosure may include additional parts that can be added to the embodiments.
  • the disclosure includes omissions of parts and/or elements of the embodiments.
  • the disclosure includes replacements or combinations of parts and/or elements between one embodiment and another.
  • the disclosed technical scope is not limited to the description of the embodiments.
  • the display generation unit 206 generates the AR guidance image Gi1 as the route guidance image based on the high-precision map data, and the non-AR guidance image Gi2 as the route guidance image based on the navigation map data.
  • the display generation unit 206 may be configured to generate a virtual image Vi other than the route guidance image in a display mode that differs depending on the map information acquired. For example, the display generation unit 206 may superimpose an image prompting the occupant to gaze at an object to be watched (for example, a preceding vehicle, a pedestrian, or a road sign) on that object when the high-precision map data can be acquired, and may stop the superimposition when it cannot.
  • the display generation unit 206 displays the mode presentation image Ii together with the route guidance image, but the mode presentation image Ii may be displayed before the route guidance image is displayed.
  • the HCU 20 is supposed to display the non-AR guidance image Gi2 based on the navigation map data when the shape condition is satisfied.
  • the HCU 20 may display the non-AR guidance image Gi2 based on the high-precision map data when the shape condition is satisfied and the high-precision map data can be acquired.
  • the display generation unit 206 selects either the vehicle center position Vc or the central portion Ac of the projection area PA as the superimposed position of the second AR guidance image CT2, according to the magnitude of the superimposed position shift of the second AR guidance image CT2. Instead, the display generation unit 206 may be configured to superimpose on only one of them.
  • the display generation unit 206 switches from the non-AR guidance image Gi2 to the second AR guidance image CT2 based on the remaining distance to the intersection CP, but the condition for switching is not limited to this.
  • the display generation unit 206 may be configured to switch at the time when the correction information regarding the superimposed position of the second AR guide image CT2 can be acquired.
  • the correction information is information that can be used to correct the superimposed position of the second AR guidance image CT2, and includes, for example, position information of the stop line of the intersection CP, the center marking of the intersection CP, and other road markings of the own vehicle lane Lns.
  • the correction information is acquired as an analysis result of the detection information of the peripheral monitoring sensor 4.
  • the display generation unit 206 generates the route guidance image in the second display mode when the high-precision map data does not include information about the future traveling section GS. The display generation unit 206 may instead be configured to generate the route guidance image in the first display mode as long as the high-precision map data corresponding to the current position of the vehicle can be acquired. In this case, the display generation unit 206 may switch from the first display mode to the second display mode when the high-precision map data corresponding to the current position of the vehicle cannot be acquired.
  • the display generation unit 206 of the third embodiment may display the route guidance image so that it moves continuously from the superimposed position of the first AR guidance image CT1 to the superimposed position of the second AR guidance image CT2. As a result, the display generation unit 206 can reduce the occupant's discomfort caused by an instantaneous switch of the superimposed position. The moving speed of the route guidance image at this time is preferably slow enough not to draw the occupant's attention to the movement itself.
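The continuous movement between superimposed positions can be sketched with linear interpolation; both the interpolation scheme and the step count are assumptions introduced here.

```python
def glide_positions(start, end, steps):
    """Linearly interpolate the route-guidance image from the CT1
    superimposed position to the CT2 superimposed position so the
    switch is gradual rather than instantaneous. In practice the
    resulting speed should be slow enough not to attract the
    occupant's attention to the movement itself.
    """
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(steps + 1)]
```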
  • the display generation unit 206 displays the entry route content CTa and the exit route content CTe of the first AR guidance image CT1 in different shapes.
  • the display generation unit 206 may set the contents CTa and CTe as contents having substantially the same shape, as shown in FIG. In the example shown in FIG. 18, each of the contents CTa and CTe is in the shape of a plurality of triangles arranged along the planned travel route.
  • the display generation unit 206 may change the exit route content CTe to an arrow-shaped image indicating the exit direction in the display of the second AR guide image CT2 (see FIG. 19).
  • the display generation unit 206 may display the route guidance image as a strip of content that extends continuously along the planned travel route.
  • the second AR guidance image CT2 may be displayed in a mode in which its length toward the near side of the planned travel route is limited relative to that of the first AR guidance image CT1.
  • the processor of the above-described embodiment is a processing unit including one or more CPUs (Central Processing Units).
  • a processor may be a processing unit including a GPU (Graphics Processing Unit) and a DFP (Data Flow Processor) in addition to the CPU.
  • the processor may be a processing unit including an FPGA (Field-Programmable Gate Array) and an IP core specialized in specific processing such as learning and inference of AI.
  • each arithmetic circuit unit of such a processor may be mounted individually on a printed circuit board, or may be mounted in an ASIC (Application Specific Integrated Circuit) or an FPGA.
  • as the storage medium, non-transitory tangible storage media such as flash memory and hard disks can be adopted.
  • the form of such a storage medium may be appropriately changed.
  • the storage medium may be in the form of a memory card or the like, and may be configured to be inserted into a slot portion provided in the vehicle-mounted ECU and electrically connected to the control circuit.
  • the control unit and the method thereof described in the present disclosure may be realized by a dedicated computer constituting a processor programmed to execute one or more functions embodied by computer programs.
  • the apparatus and method described in the present disclosure may be realized by a dedicated hardware logic circuit.
  • the device and method described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor that executes a computer program and one or more hardware logic circuits.
  • the computer program may be stored in a computer-readable non-transitory tangible recording medium as instructions executed by a computer.
  • the control unit and the method thereof described in the present disclosure may be realized by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by computer programs.
  • the control unit and the method described in the present disclosure may be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits.
  • the control unit and the method thereof described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor and a memory programmed to execute one or more functions and a processor configured by one or more hardware logic circuits.
  • each section is denoted, for example, as S10. Each section can be divided into multiple subsections, while multiple sections can be combined into one section. Each section thus configured can be referred to as a device, a module, or a means.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Optics & Photonics (AREA)
  • Transportation (AREA)
  • Combustion & Propulsion (AREA)
  • Automation & Control Theory (AREA)
  • Instrument Panels (AREA)
  • Navigation (AREA)

Abstract

This invention controls the display of a virtual image (Vi) superimposed on the scenery in front of an occupant of a vehicle (A). The invention acquires the position of the vehicle, acquires high-accuracy map information corresponding to the position or low-accuracy map information corresponding to the position and having a lower accuracy than the high-accuracy map information, generates the virtual image in a first display mode based on the high-accuracy map information if the high-accuracy map information can be acquired, and generates the virtual image in a second display mode that is different from the first display mode and is based on the low-accuracy map information if the high-accuracy map information cannot be acquired.

Description

Display control device, display control program, and tangible, non-transitory computer-readable recording medium

Cross-reference of related applications
This application is based on Japanese Patent Application No. 2018-234566 filed on December 14, 2018 and Japanese Patent Application No. 2019-196468 filed on October 29, 2019, the contents of which are incorporated herein by reference.
The present disclosure relates to a display control device for displaying a virtual image, a display control program, and a tangible, non-transitory computer-readable recording medium.
Patent Document 1 discloses a head-up display device that uses map information for display control of a virtual image. This device displays the shape of the road ahead of the vehicle as a virtual image based on the current position of the vehicle and the map information.
Map information includes high-precision map information and low-precision map information, which is comparatively less accurate than high-precision map information. Patent Document 1 does not contemplate making effective use of both kinds of map information.
Japanese Patent No. 4379600
The present disclosure aims to provide a display control device, a display control program, and a tangible, non-transitory computer-readable recording medium that can make effective use of map information.
According to an aspect of the present disclosure, a display control device used in a vehicle to control the display of a virtual image superimposed on the foreground of an occupant includes: a vehicle position acquisition unit that acquires the position of the vehicle; a map information acquisition unit that acquires high-precision map information corresponding to the position, or low-precision map information having lower precision than the high-precision map information; and a display generation unit that generates the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired, and generates the virtual image in a second display mode, different from the first display mode and based on the low-precision map information, when the high-precision map information cannot be acquired.
According to an aspect of the present disclosure, a display control program used in a vehicle to control the display of a virtual image superimposed on the foreground of an occupant causes at least one processing unit to function as: a vehicle position acquisition unit that acquires the position of the vehicle; a map information acquisition unit that acquires high-precision map information corresponding to the position, or low-precision map information having lower precision than the high-precision map information; and a display generation unit that generates the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired, and generates the virtual image in a second display mode, different from the first display mode and based on the low-precision map information, when the high-precision map information cannot be acquired.
According to an aspect of the present disclosure, a tangible, non-transitory computer-readable recording medium includes instructions executed by a computer, the instructions being used in a vehicle to control the display of a virtual image superimposed on the foreground of an occupant, the instructions comprising: acquiring the position of the vehicle; acquiring high-precision map information corresponding to the position, or low-precision map information having lower precision than the high-precision map information; and generating the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired, and generating the virtual image in a second display mode, different from the first display mode and based on the low-precision map information, when the high-precision map information cannot be acquired.
According to an aspect of the present disclosure, a display control device used in a vehicle to control the display of a virtual image superimposed on the foreground of an occupant includes at least one processing unit that acquires the position of the vehicle, acquires high-precision map information corresponding to the position or low-precision map information having lower precision than the high-precision map information, generates the virtual image in a first display mode based on the high-precision map information when the high-precision map information can be acquired, and generates the virtual image in a second display mode, different from the first display mode and based on the low-precision map information, when the high-precision map information cannot be acquired.
According to these disclosures, when the high-precision map information can be acquired, it is used to generate the virtual image; when it cannot be acquired, the low-precision map information is used to generate the virtual image. This makes it possible to display the virtual image while selectively using the high-precision map information and the low-precision map information. Therefore, a display control device, a display control program, and a tangible, non-transitory computer-readable recording medium that can make effective use of map information can be provided.
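The selection between the first and second display modes described in these aspects can be sketched as follows; the lookup callables and the returned dictionary are assumptions introduced for illustration, not the disclosed implementation.

```python
def generate_virtual_image(position, hd_map_lookup, nav_map_lookup):
    """Use the first display mode when high-precision map information
    for the position is available, otherwise fall back to the second
    display mode based on the low-precision (navigation) map.
    """
    hd = hd_map_lookup(position)           # None if out of coverage
    if hd is not None:
        return {"mode": "first", "source": "high-precision", "map": hd}
    nav = nav_map_lookup(position)
    return {"mode": "second", "source": "low-precision", "map": nav}
```

For example, with a high-precision lookup that only covers one point and a navigation lookup that always answers, the function selects the first display mode inside coverage and the second display mode outside it.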
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings, in which:

FIG. 1 is a schematic diagram of a vehicle system including an HCU according to a first embodiment;
FIG. 2 is a diagram showing an example of mounting the HUD in a vehicle;
FIG. 3 is a block diagram showing a schematic configuration of the HCU;
FIG. 4 is a diagram showing an example of superimposed display;
FIG. 5 is a diagram showing an example of superimposed display;
FIG. 6 is a diagram showing an example of non-superimposed display;
FIG. 7 is a diagram showing display deviation in the superimposed display of a modification;
FIG. 8 is a conceptual diagram showing an example of display switching timing;
FIG. 9 is a flowchart showing an example of processing executed by the HCU;
FIG. 10 is a schematic diagram of a vehicle system including an HCU according to a second embodiment;
FIG. 11 is a block diagram showing a schematic configuration of the HCU of the second embodiment;
FIG. 12 is a diagram visualizing an example of a display layout simulation executed by the display generation unit in a third embodiment;
FIG. 13 is a diagram showing an example of virtual image display in the first display mode in the third embodiment;
FIG. 14 is a diagram visualizing an example of a display layout simulation executed by the display generation unit in the third embodiment;
FIG. 15 is a diagram showing an example of virtual image display in the second display mode in the third embodiment;
FIG. 16 is a diagram showing an example of virtual image display in the second display mode in the third embodiment;
FIG. 17 is a flowchart showing an example of processing executed by the HCU;
FIG. 18 is a diagram showing an example of virtual image display in the first display mode in another embodiment; and
FIG. 19 is a diagram showing an example of virtual image display in the second display mode in another embodiment.
(First Embodiment)

The display control device of the first embodiment will be described with reference to FIGS. 1 to 9. The display control device of the first embodiment is provided as an HCU (Human Machine Interface Control Unit) 20 used in a vehicle system 1. The vehicle system 1 is used in a vehicle A, such as an automobile, that travels on a road. As shown in FIG. 1, the vehicle system 1 includes, as an example, an HMI (Human Machine Interface) system 2, a locator 3, a periphery monitoring sensor 4, a driving support ECU 6, and a navigation device 7. The HMI system 2, the locator 3, the periphery monitoring sensor 4, the driving support ECU 6, and the navigation device 7 are connected to, for example, an in-vehicle LAN.
As shown in FIG. 1, the locator 3 includes a GNSS (Global Navigation Satellite System) receiver 30, an inertial sensor 31, a high-precision map database (hereinafter, high-precision map DB) 32, and a locator ECU 33. The GNSS receiver 30 receives positioning signals from a plurality of satellites. The inertial sensor 31 includes, for example, a gyro sensor and an acceleration sensor.
The high-precision map DB 32 is a non-volatile memory that stores high-precision map data (high-precision map information). The high-precision map DB 32 is provided by a memory device of the locator ECU 33 described later. The high-precision map data includes information about roads, information about marking lines such as white lines and about road markings, information about structures, and the like. The information about roads includes, for example, position information for each point and shape information such as curve curvature, slope, and connection relationships with other roads. The information about marking lines and road markings includes, for example, their type information, position information, and three-dimensional shape information. The information about structures includes, for example, type information, position information, and shape information of each structure. Here, the structures are road signs, traffic lights, street lights, tunnels, overpasses, buildings facing the road, and the like.
The high-precision map data holds the various position and shape information described above as point cloud data, vector data, or the like of feature points expressed in three-dimensional coordinates. In other words, the high-precision map data can be said to be a three-dimensional map whose position information includes altitude in addition to latitude and longitude. The high-precision map data holds this position information with a relatively small error (for example, on the order of centimeters). The high-precision map data is thus highly accurate both in that it has position information in three-dimensional coordinates including height information and in that the error in that position information is relatively small.
The high-precision map data is created based on information collected by survey vehicles traveling on actual roads. Therefore, the high-precision map data is created for areas where information has been collected, and areas where information has not been collected are out of its range. In general, high-precision map data currently has relatively wide coverage for expressways and motor roads and relatively narrow coverage for general roads.
The locator ECU 33 is mainly composed of a microcomputer including a processor, RAM, a memory device, I/O, and a bus connecting them. The locator ECU 33 is connected to the GNSS receiver 30, the inertial sensor 31, and the in-vehicle LAN. The locator ECU 33 sequentially measures the position of the vehicle A by combining the positioning signals received by the GNSS receiver 30 with the measurement results of the inertial sensor 31.
The locator ECU 33 may also use, for positioning the vehicle, the traveling distance or the like obtained from detection results sequentially output by a vehicle speed sensor mounted on the vehicle. In addition, the locator ECU 33 may identify the vehicle position using the high-precision map data described below together with the detection results of a periphery monitoring sensor 4 such as a LIDAR, which detects point clouds of feature points of road shapes and structures. The locator ECU 33 outputs the vehicle position information to the in-vehicle LAN.
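The combination of GNSS positioning with inertial and odometry dead reckoning described above can be illustrated by a minimal complementary update. The class, local planar frame, and blending gain below are assumptions made for the sketch and do not represent the actual positioning algorithm of the locator ECU 33.

```python
import math

class SimpleLocator:
    """Illustrative locator: odometry/inertial dead reckoning predicts the
    position, and a GNSS fix corrects it when one is received."""

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y  # planar position in metres (local frame)

    def predict(self, distance_m, heading_rad):
        # Dead reckoning from wheel-speed travel distance and inertial heading.
        self.x += distance_m * math.cos(heading_rad)
        self.y += distance_m * math.sin(heading_rad)

    def correct(self, gnss_x, gnss_y, gain=0.5):
        # Blend the GNSS fix with the dead-reckoned estimate.
        self.x += gain * (gnss_x - self.x)
        self.y += gain * (gnss_y - self.y)
```

A sequence of `predict` calls between GNSS fixes, followed by a `correct` call, yields the sequentially updated vehicle position that is output to the in-vehicle LAN.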
As shown in FIG. 3, the locator ECU 33 also has a map notification unit 301 as a functional block. Based on the measured vehicle position information and the high-precision map data in the high-precision map DB 32, the map notification unit 301 determines whether information corresponding to the current position of the vehicle A is included in the high-precision map data. For example, the map notification unit 301 executes so-called map matching processing, in which the traveling locus of the vehicle A is calculated from the vehicle position information and superimposed on the road shapes in the high-precision map data, and determines from the result whether the current vehicle position is included in the high-precision map data. Alternatively, the map notification unit 301 may make this determination using not only the two-dimensional position information (for example, longitude and latitude) of the vehicle A but also its height information, based on the vehicle position information. By the map matching processing or the processing using height information described above, the map notification unit 301 can determine which road the vehicle A is traveling on even when roads at different heights (for example, an elevated road and a ground-level road) are close to each other, and can thereby improve the determination accuracy. Based on the determination result, the map notification unit 301 outputs to the HCU 20 notification information indicating whether or not information about the vehicle position is included in the high-precision map data.
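The height-based disambiguation between nearby roads at different elevations can be sketched as follows. The candidate-road representation and the tolerance value are illustrative assumptions, not part of the described embodiment.

```python
def match_road_by_height(vehicle_alt_m, candidates, tolerance_m=3.0):
    """Pick the candidate road whose stored altitude is closest to the
    measured vehicle altitude; return None if none is within tolerance.
    `candidates` is an assumed list of dicts with "name" and "alt_m" keys."""
    best = min(candidates, key=lambda road: abs(road["alt_m"] - vehicle_alt_m))
    if abs(best["alt_m"] - vehicle_alt_m) > tolerance_m:
        return None
    return best["name"]
```

With an elevated road at 12 m and a ground-level road at 2 m, a measured altitude near either value selects the corresponding road, which is how the height information resolves cases that two-dimensional matching alone cannot.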
Returning to FIG. 1, the periphery monitoring sensor 4 is an autonomous sensor that monitors the environment around the vehicle. The periphery monitoring sensor 4 detects objects around the vehicle, such as moving dynamic targets including pedestrians, animals other than humans, and vehicles other than the own vehicle; road-surface features such as fallen objects on the road, guardrails, curbs, and lane markings; and stationary static targets such as trees.
For example, the periphery monitoring sensor 4 includes a periphery monitoring camera that images a predetermined range around the vehicle, and search wave sensors such as a millimeter-wave radar, a sonar, and a LIDAR that transmit search waves into a predetermined range around the vehicle. The periphery monitoring camera sequentially outputs the images it captures to the in-vehicle LAN as sensing information. Each search wave sensor sequentially outputs to the in-vehicle LAN, as sensing information, scanning results based on the reception signal obtained when it receives a search wave reflected by an object. The periphery monitoring sensor 4 of the first embodiment includes at least a front camera 41 whose imaging range is a predetermined range in front of the vehicle. The front camera 41 is provided, for example, on the rearview mirror of the vehicle, on the upper surface of the instrument panel, or the like.
The driving support ECU 6 executes automatic driving functions that perform driving operations on behalf of the occupant. The driving support ECU 6 recognizes the traveling environment of the vehicle based on the vehicle position and map data acquired from the locator 3 and the sensing information from the periphery monitoring sensor 4.
One example of the automatic driving functions executed by the driving support ECU 6 is an ACC (Adaptive Cruise Control) function that controls the traveling speed of the vehicle so as to maintain a target inter-vehicle distance to the preceding vehicle by adjusting the driving force and the braking force. Another example is an AEB (Autonomous Emergency Braking) function that forcibly decelerates the vehicle by generating a braking force based on forward sensing information. The driving support ECU 6 may also have other automatic driving functions.
The navigation device 7 includes a navigation map database (hereinafter, navigation map DB) 70 that stores navigation map data. The navigation device 7 searches for a route to a set destination that satisfies conditions such as time priority or distance priority, and provides route guidance along the found route. The navigation device 7 outputs the found route to the in-vehicle LAN as planned route information.
The navigation map DB 70 is a non-volatile memory that stores navigation map data such as link data, node data, and road shapes. The navigation map data covers a relatively wider area than the high-precision map data. The link data consists of items such as a link ID identifying each link, a link length indicating the length of the link, a link azimuth, a link travel time, the node coordinates of the start and end of the link, and road attributes. The node data consists of items such as a node ID assigning a unique number to each node on the map, node coordinates, a node name, a node type, connection link IDs describing the link IDs of the links connected to the node, and an intersection type.
The navigation map data has node coordinates as two-dimensional position coordinate information. In other words, the navigation map data can be regarded as a two-dimensional map whose position information consists of latitude and longitude. The navigation map data is of relatively lower precision than the high-precision map data in that its position information has no height information, and also in that the error of its position information is relatively large. The navigation map data is an example of low-precision map information.
The HMI system 2 includes an operation device 21, a display device 23, and the HCU 20; it receives input operations from the occupant, who is the user of the vehicle, and presents information to the occupant. The operation device 21 is a group of switches operated by the occupant of the vehicle and is used to make various settings. For example, the operation device 21 includes steering switches provided on the spokes of the steering wheel of the vehicle.
The display device 23 includes, for example, a head-up display (hereinafter, HUD) 230, a multi-information display (MID) 231 provided in the meter, and a center information display (CID) 232. As shown in FIG. 2, the HUD 230 is provided in the instrument panel 12 of the vehicle. The HUD 230 forms a display image based on the image data output from the HCU 20 by means of a projector 230a of, for example, a liquid crystal type or a scanning type. On the display screen of the CID 232, the navigation device 7 displays the navigation map data, route information to the destination, and the like.
The HUD 230 projects the display image formed by the projector 230a, through an optical system 230b such as a concave mirror, onto a projection area PA defined on the front windshield WS, which serves as a projection member. The projection area PA is located in front of the driver's seat. The light flux of the display image reflected by the front windshield WS toward the vehicle interior is perceived by the occupant seated in the driver's seat. The light flux from the foreground, that is, the scenery in front of the vehicle, which passes through the front windshield WS formed of translucent glass, is also perceived by the occupant seated in the driver's seat. As a result, the occupant can visually recognize the virtual image Vi of the display image, formed in front of the front windshield WS, superimposed on part of the foreground.
In this way, the HUD 230 superimposes the virtual image Vi on the foreground of the vehicle A. The HUD 230 superimposes the virtual image Vi on a specific superimposition target in the foreground to realize a so-called AR (Augmented Reality) display. In addition, the HUD 230 realizes a non-AR display in which the virtual image Vi is not superimposed on any specific superimposition target but is simply displayed over the foreground. The projection member onto which the HUD 230 projects the display image is not limited to the front windshield WS and may be a translucent combiner.
The HCU 20 is configured mainly as a microcomputer including a processor 20a, a RAM 20b, a memory device 20c, an I/O 20d, and a bus connecting these components, and is connected to the HUD 230 and the in-vehicle LAN. The HCU 20 controls the display by the HUD 230 by executing a display control program stored in the memory device 20c. The HCU 20 is an example of a display control device, and the processor 20a is an example of a processing unit. The memory device 20c is a non-transitory tangible storage medium that non-temporarily stores computer-readable programs and data. The non-transitory tangible storage medium is realized by, for example, a semiconductor memory or a magnetic disk.
The HCU 20 generates an image of the content to be displayed as the virtual image Vi on the HUD 230, and outputs it to the HUD 230. As one example of the virtual image Vi, the HCU 20 generates a route guidance image that guides the occupant along the planned traveling route of the vehicle A, as shown in FIGS. 4 to 6.
The HCU 20 generates an AR guidance image Gi1 superimposed on the road surface, as shown in FIGS. 4 and 5. The AR guidance image Gi1 is generated, for example, in a three-dimensional display mode (hereinafter, the three-dimensional display mode) in which it is arranged continuously on the road surface along the planned traveling route. FIG. 4 is an example in which the AR guidance image Gi1 is superimposed on a sloped road. FIG. 5 is an example in which the AR guidance image Gi1 is superimposed along a road shape in which the number of lanes increases ahead.
Alternatively, as shown in FIG. 6, the HCU 20 generates, as the route guidance image, a non-AR guidance image Gi2 that is simply displayed over the foreground. The non-AR guidance image Gi2 is generated in a two-dimensional display mode (hereinafter, the two-dimensional display mode) fixed with respect to the front windshield WS, such as an image highlighting the lane to be traveled or an image of an intersection with the traveling route indicated. That is, the non-AR guidance image Gi2 is a virtual image Vi that is not superimposed on any specific superimposition target in the foreground but is simply displayed over the foreground. The three-dimensional display mode is an example of a first display mode, and the two-dimensional display mode is an example of a second display mode.
The HCU 20 includes, as functional blocks involved in generating the AR guidance image Gi1 and the non-AR guidance image Gi2, an own-vehicle position acquisition unit 201, a map determination unit 202, a map information acquisition unit 203, a sensor information acquisition unit 204, a display mode determination unit 205, and a display generation unit 206. The own-vehicle position acquisition unit 201 acquires the vehicle position information from the locator 3. The own-vehicle position acquisition unit 201 is an example of a vehicle position acquisition unit.
Based on the notification information and the like acquired from the locator 3, the map determination unit 202 determines which of the high-precision map data and the navigation map data to acquire as the map information used to generate the virtual image Vi.
The map determination unit 202 determines whether the high-precision map data can be acquired. The map determination unit 202 determines that the high-precision map data can be acquired when the current position of the vehicle A is included in the high-precision map data, and performs this determination processing based on the notification information output from the locator ECU 33. The vehicle position used in this determination processing may include the area around the vehicle A onto which the virtual image Vi can be superimposed. Alternatively, the map determination unit 202 may determine by itself whether the high-precision map data can be acquired, based on the vehicle position information and the high-precision map data acquired from the locator 3, without relying on the notification information from the locator 3. The map determination unit 202 may perform this determination processing continuously while traveling, or intermittently for each predetermined traveling section.
The map determination unit 202 also determines whether the high-precision map data includes information about a future traveling section GS of the vehicle A (section determination processing). The future traveling section GS is, for example, the nearest upcoming section of the planned traveling route of the vehicle A in which a route guidance image needs to be displayed. Sections in which a route guidance image needs to be displayed are, for example, sections including a point where a plurality of roads are connected, such as an intersection, and sections in which a lane change is required.
For example, the map determination unit 202 determines whether the entire range of the future traveling section GS shown in FIG. 8 is included in the high-precision map data. FIG. 8 shows a situation in which the vehicle A is about to enter a general road from an expressway via a rampway. In FIG. 8, it is assumed that the vehicle A turns left at an intersection CP where the rampway connects to the general road.
The road in FIG. 8 is divided, at the boundary indicated by the two-dot chain line on the rampway, into an area covered by both the high-precision map data and the navigation map data, and an area covered only by the navigation map data. Of the future traveling section GS, the section from the start point ps at which route guidance begins (for example, a point 300 m before the intersection CP) to the boundary is therefore included in the high-precision map data. On the other hand, the section from the boundary to the end point pf at which route guidance ends (for example, the exit point of the intersection) is not included in the high-precision map data and is included only in the navigation map data. In this case, the map determination unit 202 determines that the high-precision map data does not include the information about the future traveling section GS of the vehicle A.
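The section determination processing described above, checking whether the whole of the future traveling section GS from ps to pf lies inside high-precision coverage, can be sketched as a set comparison. Representing the route as a list of road-segment identifiers is an assumption made for illustration.

```python
def gs_fully_covered(section_segments, hd_segments):
    """Section determination processing (sketch): True only if every road
    segment of the future traveling section GS appears in the set of
    segments covered by the high-precision map data."""
    return set(section_segments) <= set(hd_segments)
```

In the FIG. 8 situation, the segment past the coverage boundary is absent from the high-precision set, so the check fails and the section is judged not to be included.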
The map determination unit 202 performs this section determination processing based on, for example, the planned route information provided by the navigation device 7 and the high-precision map data provided by the locator 3. The map determination unit 202 performs this section determination processing at the timing when the vehicle A reaches or approaches the start point ps. Alternatively, the map determination unit 202 may be configured to acquire the determination result of the section determination processing performed by the locator ECU 33.
In addition, the map determination unit 202 determines whether a shape condition is satisfied under which generation of the AR guidance image Gi1 is unnecessary for the shape of the road on which the vehicle A is traveling, that is, a condition under which generation of the AR guidance image Gi1 is suspended (shape determination processing). The shape condition is satisfied when, for example, the road shape is evaluated as one in which the planned traveling route can be accurately conveyed to the occupant by the non-AR guidance image Gi2. Conversely, when it is evaluated that the occupant could misidentify the planned traveling route if the non-AR guidance image Gi2 were displayed instead of the AR guidance image Gi1, the shape condition is not satisfied. Here, the road shape refers to, for example, the number of lanes of the road, its gradient and curvature, and its connection relationships with other roads. For example, when the section in which route guidance is performed has only one lane, the lane ahead is uniquely determined, so the planned traveling route can be accurately conveyed by the non-AR guidance image Gi2, and the shape condition is satisfied. Likewise, when there is no other intersection between the vehicle A and the intersection at which a right or left turn is guided, the intersection at which the turn is to be made is uniquely determined, so the planned traveling route can be accurately conveyed by the non-AR guidance image Gi2, and the shape condition is satisfied. Furthermore, when the road is a flat road with substantially no gradient, the area ahead of the vehicle A can be seen clearly, so the planned traveling route can be accurately conveyed by the non-AR guidance image Gi2, and the shape condition is satisfied. The satisfaction of the shape condition may also be determined by a combination of the above cases, for example, when the road is flat and has only one lane.
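The shape determination processing can be sketched as a predicate over the road-shape attributes named above. The numeric flatness threshold is an illustrative assumption; the embodiment does not specify one.

```python
def shape_condition_met(lane_count, intervening_intersections, max_gradient):
    """Shape determination processing (sketch): the non-AR image Gi2 suffices
    when the guided section has a single lane, when the turn intersection is
    uniquely determined (no other intersection in between), or when the road
    is essentially flat. The 1 % flatness threshold is assumed."""
    single_lane = lane_count == 1
    unique_turn = intervening_intersections == 0
    flat = abs(max_gradient) < 0.01
    return single_lane or unique_turn or flat
```

Combinations of these cases (for example, requiring both a flat road and a single lane) could equally be expressed by replacing `or` with `and` over the relevant terms, as the text allows.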
The map determination unit 202 determines whether the shape condition is satisfied based on the high-precision map data provided by the locator 3, the detection information from the periphery monitoring sensor 4, and the like. Alternatively, the map determination unit 202 may be configured to acquire the determination result of the shape determination processing performed by the locator ECU 33.
The map determination unit 202 determines to acquire the high-precision map data when the high-precision map data can be acquired at the current vehicle position. However, when the high-precision map data does not include information about the future traveling section GS, or when the shape condition is satisfied, the map determination unit 202 determines to acquire the navigation map data even if the high-precision map data could be acquired at the current vehicle position.
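The selection rule above combines the three determinations already described. The following sketch encodes that rule directly; the boolean inputs stand for the results of the availability, section, and shape determinations.

```python
def select_map(hd_available, gs_in_hd_map, shape_condition_satisfied):
    """Map determination (sketch): prefer the high-precision map, but fall
    back to the navigation map when the future traveling section GS is
    missing from it or when the shape condition makes AR guidance
    unnecessary."""
    if hd_available and gs_in_hd_map and not shape_condition_satisfied:
        return "high_precision"
    return "navigation"
```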
The map information acquisition unit 203 acquires either the high-precision map data or the navigation map data based on the determination result of the map determination unit 202. The map information acquisition unit 203 acquires the high-precision map data when it has been determined that the high-precision map data can be acquired, and acquires the navigation map data instead when it has been determined that the high-precision map data cannot be acquired.
However, when it has been determined that information about the guidance point is not included in the high-precision map data, the map information acquisition unit 203 acquires the navigation map data even if it has been determined that the high-precision map data can be acquired. Likewise, when it has been determined that the shape condition is satisfied, the map information acquisition unit 203 acquires the navigation map data even if it has been determined that the high-precision map data can be acquired. The map information acquisition unit 203 sequentially outputs the acquired map information to the display mode determination unit 205.
The sensor information acquisition unit 204 acquires detection information about detected objects in front of the vehicle A. The detection information includes height information of the road surface on which the AR guidance image Gi1 is to be superimposed, or height information of detected objects from which that height information can be estimated. The detected objects include road surface markings such as stop lines, center markings of intersections, and lane markings, as well as roadside installations such as road signs, curbs, and traffic lights. The detection information is information for correcting the navigation map data or the superimposition position of the AR guidance image Gi1 when the AR guidance image Gi1 is generated using the navigation map data. The detection information may also include shape information of the traveling road, information on the number of lanes of the traveling road, information on the lane in which the vehicle A is currently traveling, and the like. The sensor information acquisition unit 204 attempts to acquire the detection information and, when it succeeds, sequentially outputs the detection information to the display mode determination unit 205.
The display mode determination unit 205 determines in which of the three-dimensional display mode and the two-dimensional display mode the route guidance image is to be generated, that is, which of the AR guidance image Gi1 and the non-AR guidance image Gi2 the display generation unit 206 is to generate as the route guidance image.
In the HUD 230, if an attempt were made to display the AR guidance image Gi1 based on the navigation map data, the AR guidance image Gi1 could appear to float above the road surface, as in the modification shown in FIG. 7, or to be buried in it. Such a deviation of the superimposition position occurs because the navigation map data has particularly low accuracy in its height information compared with the high-precision map data, or has no height information at all, so that the AR guidance image Gi1 cannot be generated to reflect the gradient shape of the road. To suppress the generation of an AR guidance image Gi1 with a deviated superimposition position, the display mode determination unit 205 selects the route guidance image to be generated from the AR guidance image Gi1 and the non-AR guidance image Gi2 based on whether the high-precision map data can be acquired.
The display mode determination unit 205 sets the display mode of the route guidance image to the three-dimensional display mode when the high-precision map data has been acquired by the map information acquisition unit 203, and to the two-dimensional display mode when the high-precision map data cannot be acquired by the map information acquisition unit 203. However, even when the high-precision map data cannot be acquired by the map information acquisition unit 203, the display mode determination unit 205 sets the display mode of the route guidance image to the three-dimensional display mode if the detection information has been acquired by the sensor information acquisition unit 204. The display mode determination unit 205 outputs the determined display mode to the display generation unit 206.
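The display mode rule above reduces to a two-input decision, sketched below with boolean inputs standing for the acquisition results.

```python
def decide_display_mode(hd_map_acquired, detection_info_acquired):
    """Display mode determination (sketch): three-dimensional (AR image Gi1)
    when the high-precision map is available, or when sensed road-height
    detection information can substitute for it; otherwise two-dimensional
    (non-AR image Gi2)."""
    if hd_map_acquired or detection_info_acquired:
        return "3D"
    return "2D"
```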
 表示生成部206は、取得された各種情報に基づき、表示態様決定部205にて決定された表示態様にて経路案内画像を生成する。表示態様が3次元表示態様に決定されている場合、表示生成部206は、高精度地図データの3次元位置座標の情報に基づいて、AR案内画像Gi1を重畳する路面の3次元の位置座標を特定する。表示生成部206は、路面の位置座標と、自車位置座標とに基づき、車両Aに対する路面の相対的な3次元位置(相対位置)を特定する。また、表示生成部206は、高精度地図データに基づき路面の勾配情報を算出または取得する。表示生成部206は、例えば坂道を規定する2地点の位置座標を用いた幾何学的な演算により勾配情報を算出する。または、表示生成部206は、区画線の3次元形状情報に基づき勾配情報を算出してもよい。または、表示生成部206は、高精度地図データの含む情報のうち勾配情報を推定可能な情報に基づき勾配情報を推定してもよい。表示生成部206は、特定された相対位置、DSM22から取得される乗員の視点位置、および投影領域PAの位置の関係、および相対位置における路面の勾配等に基づき、幾何学的な演算によってAR案内画像Gi1の投影位置および投影形状を算出する。表示生成部206は、算出結果に基づいてAR案内画像Gi1を生成し、HUD230へデータを出力してAR案内画像Gi1を虚像Viとして表示させる。 The display generation unit 206 generates a route guidance image in the display mode determined by the display mode determination unit 205, based on the various acquired information. When the three-dimensional display mode has been determined, the display generation unit 206 specifies the three-dimensional position coordinates of the road surface on which the AR guidance image Gi1 is to be superimposed, based on the three-dimensional position coordinate information of the high-precision map data. The display generation unit 206 specifies the relative three-dimensional position (relative position) of the road surface with respect to the vehicle A based on the position coordinates of the road surface and the own-vehicle position coordinates. The display generation unit 206 also calculates or acquires road surface gradient information based on the high-precision map data. For example, the display generation unit 206 calculates the gradient information by a geometric calculation using the position coordinates of two points that define a slope. Alternatively, the display generation unit 206 may calculate the gradient information based on the three-dimensional shape information of the lane markings. Alternatively, the display generation unit 206 may estimate the gradient information from information, included in the high-precision map data, from which the gradient can be estimated. The display generation unit 206 calculates the projection position and projection shape of the AR guidance image Gi1 by geometric calculation, based on the specified relative position, the relationship between the occupant's viewpoint position acquired from the DSM 22 and the position of the projection area PA, the road surface gradient at the relative position, and the like. The display generation unit 206 generates the AR guidance image Gi1 based on the calculation result, outputs the data to the HUD 230, and displays the AR guidance image Gi1 as the virtual image Vi.
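The "geometric calculation using position coordinates of two points that define a slope" can be illustrated as follows. This is a minimal sketch under the assumption that the two points are given as (x, y, z) map coordinates in metres; the function name is hypothetical.

```python
import math

def slope_between(p1, p2):
    """Gradient, in degrees, of the road segment defined by two 3D points.

    p1, p2: (x, y, z) coordinates in metres. The rise is the difference in
    height z; the run is the horizontal distance between the points.
    """
    dx, dy, dz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    run = math.hypot(dx, dy)            # horizontal distance
    if run == 0:
        raise ValueError("points must be horizontally separated")
    return math.degrees(math.atan2(dz, run))
```

For instance, two points 100 m apart horizontally with a 10 m height difference give a gradient of roughly 5.7 degrees.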
 また表示生成部206は、センサ情報取得部204にて検出情報が取得できていることに基づいて3次元表示態様に決定されている場合、ナビゲーション地図の2次元位置座標と、周辺情報とを組み合わせて、AR案内画像Gi1を生成する。例えば表示生成部206は、検出情報から取得される、または推定される高さ情報と、ナビゲーション地図の2次元位置座標とから、AR案内画像Gi1を重畳する路面の3次元の位置座標を特定する。表示生成部206は、特定した位置座標を用いて、高精度地図データを用いる場合と同様にAR案内画像Gi1の投影位置および投影形状を算出する。なお、表示生成部206は、検出情報に走行道路の形状情報、走行道路の車線数情報、および車両Aが現在走行中の車線情報等が含まれている場合、これらの情報をAR案内画像Gi1の重畳位置の補正に利用してもよい。 When the three-dimensional display mode has been determined on the basis that the sensor information acquisition unit 204 has acquired detection information, the display generation unit 206 generates the AR guidance image Gi1 by combining the two-dimensional position coordinates of the navigation map with the surrounding information. For example, the display generation unit 206 specifies the three-dimensional position coordinates of the road surface on which the AR guidance image Gi1 is to be superimposed, from height information acquired or estimated from the detection information and the two-dimensional position coordinates of the navigation map. Using the specified position coordinates, the display generation unit 206 calculates the projection position and projection shape of the AR guidance image Gi1 in the same manner as when high-precision map data is used. When the detection information includes shape information of the traveling road, lane-count information of the traveling road, information on the lane in which the vehicle A is currently traveling, and the like, the display generation unit 206 may use these pieces of information to correct the superimposed position of the AR guidance image Gi1.
 表示態様が2次元表示態様に決定されている場合、表示生成部206は、ナビゲーション地図の2次元位置座標の情報を取得し、経路案内画像を生成する。表示生成部206は、2次元位置座標を取得していることに基づき、経路案内画像の前景に対する重畳位置を、予め設定された位置に決定する。表示生成部206は、2次元位置座標に基づき、経路案内画像の投影形状を決定し、経路案内画像を生成する。表示生成部206は、HUD230へ生成したデータを出力して経路案内画像を非AR表示の虚像Viとして表示させる。 When the display mode is determined to be the two-dimensional display mode, the display generation unit 206 acquires the two-dimensional position coordinate information of the navigation map and generates a route guidance image. Because only two-dimensional position coordinates have been acquired, the display generation unit 206 sets the superimposed position of the route guidance image with respect to the foreground to a preset position. The display generation unit 206 determines the projection shape of the route guidance image based on the two-dimensional position coordinates and generates the route guidance image. The display generation unit 206 outputs the generated data to the HUD 230 and displays the route guidance image as a non-AR virtual image Vi.
 加えて、表示生成部206は、表示している経路案内画像の表示態様を乗員に提示する態様提示画像Iiを生成する。表示生成部206は、例えば態様提示画像Iiを文字画像として生成する。図4~図6に示す例では、AR案内画像Gi1が表示されている場合、表示生成部206は、「3D」の文字画像にて3次元表示態様を示す態様提示画像Iiとして生成している。そして非AR案内画像Gi2が表示されている場合、表示生成部206は、「2D」の文字画像を、2次元表示態様を示す態様提示画像Iiとして生成している。 In addition, the display generation unit 206 generates a mode presentation image Ii that presents the display mode of the displayed route guidance image to the occupant. The display generation unit 206 generates the mode presentation image Ii as, for example, a character image. In the examples illustrated in FIGS. 4 to 6, when the AR guidance image Gi1 is displayed, the display generation unit 206 generates the character image “3D” as the mode presentation image Ii indicating the three-dimensional display mode. When the non-AR guidance image Gi2 is displayed, the display generation unit 206 generates the character image “2D” as the mode presentation image Ii indicating the two-dimensional display mode.
 なお、表示生成部206は、態様提示画像Iiを、記号や図柄等の文字情報以外の情報として提示してもよい。また、表示生成部206は、態様提示画像IiをCID232やMID231等のHUD230以外の表示装置に表示させてもよい。この場合、表示生成部206は、表示態様を乗員に提示しつつHUD230の投影領域PA内の情報量を低減させることができ、乗員の煩わしさを低減できる。「表示生成部206」は、「態様提示部」の一例である。 The display generation unit 206 may present the mode presentation image Ii as information other than text, such as symbols or graphics. Further, the display generation unit 206 may display the mode presentation image Ii on a display device other than the HUD 230, such as the CID 232 or the MID 231. In this case, the display generation unit 206 can reduce the amount of information in the projection area PA of the HUD 230 while still presenting the display mode to the occupant, thereby reducing annoyance to the occupant. The “display generation unit 206” is an example of the “mode presentation unit”.
 次に、HCU20の実行する表示処理について、図9のフローチャートを参照して説明する。HCU20は、図9の処理を、ナビゲーション装置7に目的地が設定されて走行予定経路が設定された場合に開始する。 Next, the display processing executed by the HCU 20 will be described with reference to the flowchart in FIG. The HCU 20 starts the process of FIG. 9 when the destination is set in the navigation device 7 and the planned travel route is set.
 まずステップS10では、経路案内表示を開始するか否かを判定する。例えばステップS10では、案内地点と車両Aとの間の距離が閾値(例えば300m)を下回った場合に、経路案内表示を開始すると判定する。経路案内表示を開始すると判定すると、ステップS20へと進み、ロケータ3から自車位置情報を取得する。 First, in step S10, it is determined whether or not the route guidance display is started. For example, in step S10, when the distance between the guidance point and the vehicle A is less than a threshold value (for example, 300 m), it is determined to start the route guidance display. When it is determined that the route guidance display is started, the process proceeds to step S20, and the vehicle position information is acquired from the locator 3.
 次にステップS30では、ロケータ3から自車位置およびその周辺に関する通知情報を取得して、ステップS40へと進む。ステップS40では、通知情報等に基づき、高精度地図データを取得可能であるか否かを判定する。取得可能であると判定すると、ステップS42へと進む。 Next, in step S30, notification information about the vehicle position and its surroundings is acquired from the locator 3, and the process proceeds to step S40. In step S40, it is determined, based on the notification information and the like, whether high-precision map data can be acquired. If it is determined that it can be acquired, the process proceeds to step S42.
 ステップS42では、ロケータ3からの情報に基づいて、将来走行区間GSに、高精度地図データがあるか否かを判定する。将来走行区間GSに高精度地図データがあると判定されると、ステップS44へと進み、形状条件が成立するか否かを判定する。形状条件が成立しないと判定されると、ステップS50へと進む。 In step S42, based on the information from the locator 3, it is determined whether or not there is high-precision map data in the future traveling section GS. If it is determined that there is high-precision map data in the future traveling section GS, the process proceeds to step S44, and it is determined whether or not the shape condition is satisfied. If it is determined that the shape condition is not satisfied, the process proceeds to step S50.
 ステップS50では、地図情報取得部203が高精度地図データを取得する。ステップS60では、取得された高精度地図データの3次元座標に基づいて、3次元表示態様の経路案内画像を生成し、ステップS120へと進む。ステップS120では、生成した経路案内画像をHUD230へと出力し、HUD230に経路案内画像を虚像Viとして生成させる。 In step S50, the map information acquisition unit 203 acquires high precision map data. In step S60, a route guidance image in a three-dimensional display mode is generated based on the acquired three-dimensional coordinates of the high-precision map data, and the process proceeds to step S120. In step S120, the generated route guidance image is output to the HUD 230, and the HUD 230 is caused to generate the route guidance image as the virtual image Vi.
 一方ステップS40にて、高精度地図データを取得不可能であると判定されると、ステップS70へと進む。ステップS70では、車載センサから検出情報を取得可能であるか否かを判定する。検出情報を取得不可能であると判定された場合、ステップS80へと進む。 On the other hand, if it is determined in step S40 that high-precision map data cannot be acquired, the process proceeds to step S70. In step S70, it is determined whether the detection information can be acquired from the vehicle-mounted sensor. If it is determined that the detection information cannot be acquired, the process proceeds to step S80.
 ステップS80では、ナビゲーション装置7よりナビゲーション地図データを取得して、ステップS90へと進む。ステップS90では、ナビゲーション地図データに基づき、2次元表示態様で経路案内画像を生成する。その後ステップS120へと進み、生成した経路案内画像をHUD230へと出力する。 In step S80, the navigation map data is acquired from the navigation device 7, and the process proceeds to step S90. In step S90, a route guidance image is generated in a two-dimensional display mode based on the navigation map data. After that, the process proceeds to step S120, and the generated route guidance image is output to the HUD 230.
 また、ステップS42にて将来走行区間GSが高精度地図データに含まれないと判定された場合にも、ステップS80へと進む。加えて、ステップS44にて形状条件が成立すると判定された場合にも、ステップS80へと進む。 Also, if it is determined in step S42 that the future traveling section GS is not included in the high-precision map data, the process proceeds to step S80. In addition, if it is determined in step S44 that the shape condition is satisfied, the process proceeds to step S80.
 一方、ステップS70にて周辺監視センサ4から検出情報を取得可能であると判定された場合には、ステップS100へと進む。ステップS100では、ナビゲーション地図データおよび検出情報を取得する。ステップS110では、ナビゲーション地図データおよび検出情報に基づいて、3次元表示態様の経路案内画像を生成する。その後ステップS120にて、生成した画像データを、HUD230へと出力する。 On the other hand, if it is determined in step S70 that the detection information can be acquired from the peripheral monitoring sensor 4, the process proceeds to step S100. In step S100, navigation map data and detection information are acquired. In step S110, a route guidance image in a three-dimensional display mode is generated based on the navigation map data and the detection information. Then, in step S120, the generated image data is output to the HUD 230.
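The branch structure of the flowchart in FIG. 9 (steps S10 to S120) can be condensed into the following sketch. The function and argument names are illustrative, not from the specification; the 300 m threshold is the example value given for step S10.

```python
def route_guidance_plan(distance_to_guidance_m: float,
                        hd_map_ok: bool,
                        future_section_in_hd_map: bool,
                        shape_condition_met: bool,
                        sensor_ok: bool,
                        threshold_m: float = 300.0):
    """Hypothetical condensation of the branches in FIG. 9.

    Returns (data source, display mode), or None when route guidance
    display has not yet started.
    """
    if distance_to_guidance_m >= threshold_m:                     # S10
        return None
    if hd_map_ok:                                                 # S40
        if future_section_in_hd_map and not shape_condition_met:  # S42, S44
            return ("hd_map", "3D")                               # S50, S60
        return ("navigation_map", "2D")                           # S80, S90
    if sensor_ok:                                                 # S70
        return ("navigation_map+sensor", "3D")                    # S100, S110
    return ("navigation_map", "2D")                               # S80, S90
```

Note how the sensor check (S70) is reached only when high-precision map data is unavailable at S40; if the HD map is available but the future traveling section GS is missing or the shape condition holds, the flow goes directly to the two-dimensional mode.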
 次に第1実施形態のHCU20の構成および作用効果について説明する。 Next, the configuration and operational effects of the HCU 20 of the first embodiment will be described.
 HCU20は、前景中における虚像Viの重畳位置に関する地図情報を、高精度地図データまたはナビゲーション地図データとして取得する地図情報取得部203と、地図情報に基づいて虚像Viを生成する表示生成部206とを備える。表示生成部206は、高精度地図データを取得可能である場合には、高精度地図データに基づく3次元表示態様で虚像Viを生成し、高精度地図データを取得不可能である場合には、ナビゲーション地図データに基づく、2次元表示態様で虚像Viを生成する。 The HCU 20 includes a map information acquisition unit 203 that acquires map information relating to the superimposed position of the virtual image Vi in the foreground, as high-precision map data or navigation map data, and a display generation unit 206 that generates the virtual image Vi based on the map information. When high-precision map data can be acquired, the display generation unit 206 generates the virtual image Vi in a three-dimensional display mode based on the high-precision map data; when high-precision map data cannot be acquired, it generates the virtual image Vi in a two-dimensional display mode based on the navigation map data.
 これによれば、高精度地図データを取得可能である場合には、虚像Viの生成に高精度地図データが用いられ、高精度地図データを取得不可能である場合には、虚像Viの生成に低精度地図情報が用いられる。これにより、高精度地図データと低精度地図情報とを使い分けての虚像Viの表示が可能となる。以上により、地図情報を有効利用可能なHCU20および表示制御プログラムを提供することができる。 According to this, when high-precision map data can be acquired, the high-precision map data is used to generate the virtual image Vi, and when high-precision map data cannot be acquired, low-precision map information is used to generate the virtual image Vi. This makes it possible to display the virtual image Vi while selectively using high-precision map data and low-precision map information. As described above, it is possible to provide an HCU 20 and a display control program that can make effective use of map information.
 表示生成部206は、3次元表示態様において、虚像Viを前景中の特定の重畳対象である路面に重畳させ、2次元表示態様において、虚像Viを路面に重畳させない。これにより、HCU20は、比較的低精度なナビゲーション地図データに基づいて虚像Viを路面に重畳させることを回避できる。したがって、HCU20は、精度の低い地図情報に基づく虚像Viの重畳表示によって表示位置のずれが発生することを抑制できる。 In the three-dimensional display mode, the display generation unit 206 superimposes the virtual image Vi on the road surface, which is a specific superimposition target in the foreground; in the two-dimensional display mode, it does not superimpose the virtual image Vi on the road surface. As a result, the HCU 20 can avoid superimposing the virtual image Vi on the road surface based on relatively low-precision navigation map data. Therefore, the HCU 20 can suppress display-position deviation caused by superimposed display of the virtual image Vi based on low-precision map information.
 表示生成部206は、高精度地図データに、車両Aの将来走行区間GSに関する情報が含まれていない場合には、高精度地図データを取得可能である場合であっても、2次元表示態様で虚像Viを生成する。これによれば、現在位置において高精度地図データの取得が可能な場合であっても、案内地点において高精度地図データがない場合、3次元表示態様での虚像Viの生成を行わない。このため、案内地点の付近で虚像Viの表示態様が3次元表示態様から2次元表示態様へと変更することを回避できる。したがって、HCU20は、虚像Viの表示態様の変更により乗員に煩わしさを与えることを抑制できる。 When the high-precision map data does not include the information about the future traveling section GS of the vehicle A, the display generation unit 206 uses the two-dimensional display mode even if the high-precision map data can be acquired. A virtual image Vi is generated. According to this, even if the high-precision map data can be obtained at the current position, the virtual image Vi is not generated in the three-dimensional display mode if the high-precision map data does not exist at the guide point. Therefore, it is possible to avoid changing the display mode of the virtual image Vi from the three-dimensional display mode to the two-dimensional display mode near the guide point. Therefore, the HCU 20 can prevent the occupant from being bothered by changing the display mode of the virtual image Vi.
 表示生成部206は、車両Aの走行する道路形状に関して3次元表示態様での虚像Viの生成を中止する形状条件が成立する場合には、高精度地図データを取得可能である場合であっても、2次元表示態様で虚像Viを生成する。これによれば、HCU20は、走行道路が2次元表示態様の虚像Viで比較的乗員に情報を伝達しやすい道路形状である場合には、2次元表示態様で虚像Viを生成できる。これにより、HCU20は、虚像Viの情報を乗員に伝達しつつ、高精度地図データを用いることによる処理の煩雑化を抑制できる。 When a shape condition for stopping generation of the virtual image Vi in the three-dimensional display mode is satisfied with respect to the shape of the road on which the vehicle A is traveling, the display generation unit 206 generates the virtual image Vi in the two-dimensional display mode even if high-precision map data can be acquired. According to this, the HCU 20 can generate the virtual image Vi in the two-dimensional display mode when the traveling road has a shape for which information is relatively easy to convey to the occupant with a virtual image Vi in the two-dimensional display mode. Thereby, the HCU 20 can convey the information of the virtual image Vi to the occupant while suppressing the processing complexity that would result from using high-precision map data.
 HCU20は、周辺監視センサ4から検出情報を取得するセンサ情報取得部204を備える。表示生成部206は、高精度地図データを取得不可能である場合で、センサ情報取得部204にて検出情報を取得可能である場合には、ナビゲーション地図データと検出情報との組み合わせに基づく3次元表示態様で虚像Viを生成する。これによれば、HCU20は、高精度地図データを取得不可能である場合であっても、ナビゲーション地図データに検出情報を組み合わせることで、高精度地図データに基づく表示態様と同様の表示態様で、虚像Viを生成できる。 The HCU 20 includes a sensor information acquisition unit 204 that acquires detection information from the surroundings monitoring sensor 4. When high-precision map data cannot be acquired but the sensor information acquisition unit 204 can acquire detection information, the display generation unit 206 generates the virtual image Vi in the three-dimensional display mode based on a combination of the navigation map data and the detection information. According to this, even when high-precision map data cannot be acquired, the HCU 20 can generate the virtual image Vi in a display mode similar to that based on high-precision map data, by combining the detection information with the navigation map data.
 表示生成部206は、3次元表示態様および2次元表示態様のいずれで虚像Viが生成されているのかを乗員に提示する。これによれば、HCU20は、虚像Viの表示態様を乗員に対してより直接的に提示することができる。したがって、HCU20は、乗員に対して虚像Viの示す情報の理解を促進させることが可能となる。 The display generation unit 206 presents to the occupant in which of the three-dimensional display mode and the two-dimensional display mode the virtual image Vi is being generated. According to this, the HCU 20 can present the display mode of the virtual image Vi to the occupant more directly. Therefore, the HCU 20 can help the occupant understand the information indicated by the virtual image Vi.
 地図情報取得部203は、道路の勾配情報、区画線の3次元形状情報、および道路勾配が推定可能な情報の少なくとも1つを含む地図情報を高精度地図データとして取得する。これによれば、HCU20は、道路の勾配情報を取得または推定して3次元表示態様の虚像Viを生成できる。このため、HCU20は、3次元表示態様の虚像Viの表示位置のずれをより確実に抑制できる。 The map information acquisition unit 203 acquires, as high-precision map data, map information including at least one of road gradient information, three-dimensional shape information of lane markings, and information from which the road gradient can be estimated. According to this, the HCU 20 can acquire or estimate road gradient information and generate the virtual image Vi in the three-dimensional display mode. Therefore, the HCU 20 can more reliably suppress deviation of the display position of the virtual image Vi in the three-dimensional display mode.
 (第2実施形態)
 第2実施形態では、第1実施形態におけるHCU20の変形例について説明する。図10および図11において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(Second embodiment)
In the second embodiment, a modified example of the HCU 20 in the first embodiment will be described. In FIGS. 10 and 11, constituent elements denoted by the same reference numerals as those in the drawings of the first embodiment are the same constituent elements and have similar functions and effects.
 第1実施形態において、HCU20は、ロケータ3に格納された高精度地図データを取得するとした。これに代えて、HCU20は、プローブ地図データを高精度地図情報として取得してもよい。 In the first embodiment, the HCU 20 acquires the high precision map data stored in the locator 3. Instead of this, the HCU 20 may acquire the probe map data as high precision map information.
 センタ9は、複数のプローブ車両Mから送信されたプローブ情報を通信部91にて受信し、制御部90に保存する。プローブ情報は、各プローブ車両Mにおける周辺監視センサ4、またはロケータ3等により取得された情報であり、プローブ車両Mの走行軌跡、道路の形状情報等を、3次元位置座標にて表される情報として含んでいる。 The center 9 receives, at the communication unit 91, the probe information transmitted from the plurality of probe vehicles M, and stores it in the control unit 90. The probe information is information acquired by the surroundings monitoring sensor 4, the locator 3, or the like of each probe vehicle M, and includes the traveling locus of the probe vehicle M, road shape information, and the like, represented by three-dimensional position coordinates.
 制御部90は、プロセッサ、RAM、メモリ装置、I/O、およびこれらを接続するバスを備えるマイクロコンピュータを主体として構成されている。制御部90は、機能ブロックとして地図生成部90aを備える。地図生成部90aは、取得したプローブ情報に基づいて、プローブ地図データを生成する。プローブ情報が3次元位置座標を含むデータであるため、生成されるプローブ地図データは、各地点の高さ情報を含んだ3次元地図データとなる。 The control unit 90 is mainly composed of a microcomputer including a processor, a RAM, a memory device, an I/O, and a bus connecting these. The control unit 90 includes a map generation unit 90a as a functional block. The map generator 90a generates probe map data based on the acquired probe information. Since the probe information is data including three-dimensional position coordinates, the generated probe map data is three-dimensional map data including height information of each point.
 車両システム1は、通信部8にて無線通信網を介してセンタ9と通信し、プローブ地図データを取得する。通信部8は、取得したプローブ地図データを、運転支援ECU6に保存する。 The vehicle system 1 communicates with the center 9 via the wireless communication network at the communication unit 8 and acquires probe map data. The communication unit 8 stores the acquired probe map data in the driving support ECU 6.
 運転支援ECU6は、機能ブロックとして地図通知部601を有する。地図通知部601は、第1実施形態におけるロケータ3の地図通知部301と同様に、測位した車両位置と、ナビゲーション装置7から取得した情報とに基づき、自車位置およびその周囲のエリアに関する情報がプローブ地図データに含まれているか否かを判定する。地図通知部601は、自車位置およびその周囲のエリアに関する情報がプローブ地図データに含まれていると判定した場合には、その旨を通知情報としてHCU20に出力する。 The driving support ECU 6 has a map notification unit 601 as a functional block. Similar to the map notification unit 301 of the locator 3 in the first embodiment, the map notification unit 601 determines, based on the measured vehicle position and the information acquired from the navigation device 7, whether information about the own-vehicle position and the surrounding area is included in the probe map data. When the map notification unit 601 determines that such information is included in the probe map data, it outputs a notification to that effect to the HCU 20 as notification information.
 HCU20の地図情報取得部203は、地図判定部202にて高精度地図情報であるプローブ地図データを取得可能であると判定された場合には、運転支援ECU6からプローブ地図データを取得する。表示生成部206は、プローブ地図データに基づいて、AR案内画像Gi1を生成する。 The map information acquisition unit 203 of the HCU 20 acquires the probe map data from the driving support ECU 6 when the map determination unit 202 determines that the probe map data that is the high-accuracy map information can be acquired. The display generation unit 206 generates the AR guide image Gi1 based on the probe map data.
 (第3実施形態)
 第3実施形態では、第1実施形態におけるHCU20の変形例について説明する。図12~図17において第1実施形態の図面中と同一符号を付した構成要素は、同様の構成要素であり、同様の作用効果を奏するものである。
(Third Embodiment)
In the third embodiment, a modification of the HCU 20 in the first embodiment will be described. 12 to 17, the components designated by the same reference numerals as those in the drawings of the first embodiment are the same components and have the same effects.
 第3実施形態のHCU20は、第1表示態様において、高精度地図データに基づく重畳位置にて、路面に対して経路案内画像を重畳表示させ、第2表示態様において、ナビゲーション地図データに基づく重畳位置にて、路面に対して経路案内画像を重畳表示させる。以下において、第1表示態様の経路案内画像を、第1AR案内画像CT1、第2表示態様の経路案内画像を、第2AR案内画像CT2と表記する。 In the first display mode, the HCU 20 of the third embodiment superimposes the route guidance image on the road surface at a superimposition position based on the high-precision map data; in the second display mode, it superimposes the route guidance image on the road surface at a superimposition position based on the navigation map data. Hereinafter, the route guidance image in the first display mode is referred to as the first AR guidance image CT1, and the route guidance image in the second display mode as the second AR guidance image CT2.
 表示態様決定部205は、高精度地図データを取得可能な場合には、第1AR案内画像CT1の表示を決定し、高精度地図データを取得不可能であり且つナビゲーション地図データを取得可能な場合には、第2AR案内画像CT2の表示を決定する。 The display mode determination unit 205 decides to display the first AR guidance image CT1 when high-precision map data can be acquired, and decides to display the second AR guidance image CT2 when high-precision map data cannot be acquired but navigation map data can be acquired.
 ただし、表示態様決定部205は、高精度地図データの新しさ(鮮度)に関して定められた鮮度条件が成立する場合には、高精度地図データを取得可能であっても、第2AR案内画像CT2の表示を決定する。鮮度条件は、例えば、高精度地図データがナビゲーション地図データよりも古い場合に成立する。 However, when a freshness condition defined with respect to the recency (freshness) of the high-precision map data is satisfied, the display mode determination unit 205 decides to display the second AR guidance image CT2 even if the high-precision map data can be acquired. The freshness condition is satisfied, for example, when the high-precision map data is older than the navigation map data.
 加えて、表示態様決定部205は、第2表示態様にて表示する場合における重畳位置ずれの大きさを、取得した各種情報に基づいて評価する。表示態様決定部205は、例えば自車位置の測位精度、および地物認識情報の有無に基づいて、重畳位置ずれの大きさを評価する。 In addition, the display mode determination unit 205 evaluates the magnitude of the superimposed position shift when displaying in the second display mode based on the acquired various information. The display mode determination unit 205 evaluates the magnitude of the superimposed position shift, for example, based on the positioning accuracy of the vehicle position and the presence/absence of feature recognition information.
 表示態様決定部205は、自車位置の測位精度について、所定のレベル以上であるか否かを判定する。具体的には、表示態様決定部205は、ロケータ3から取得した自車位置を、周辺監視センサ4から取得した検出情報により評価する。例えば、表示態様決定部205は、前方カメラ41の撮像画像から交差点CPを検出し、交差点CPに対する車両Aの相対位置を解析する。そして表示態様決定部205は、当該相対位置と地図データとから特定される車両Aの位置と、ロケータ3から取得した自車位置とのずれの大きさが、所定のレベル以上であるか否か判定する。なお、表示態様決定部205は、車両Aの位置を特定可能な交差点CP以外の物体を撮像画像から検出し、上述の処理を実施してもよい。表示態様決定部205は、撮像画像の解析結果を運転支援ECU6等の他のECUから取得してもよい。 The display mode determination unit 205 determines whether the positioning accuracy of the own-vehicle position is at or above a predetermined level. Specifically, the display mode determination unit 205 evaluates the own-vehicle position acquired from the locator 3 against the detection information acquired from the surroundings monitoring sensor 4. For example, the display mode determination unit 205 detects the intersection CP from the image captured by the front camera 41 and analyzes the relative position of the vehicle A with respect to the intersection CP. The display mode determination unit 205 then determines whether the magnitude of the deviation between the position of the vehicle A specified from this relative position and the map data, and the own-vehicle position acquired from the locator 3, is equal to or greater than a predetermined level. Note that the display mode determination unit 205 may detect from the captured image an object other than the intersection CP from which the position of the vehicle A can be specified, and perform the above processing. The display mode determination unit 205 may also acquire the analysis result of the captured image from another ECU such as the driving assistance ECU 6.
 なお、表示態様決定部205は、疑似距離の残差、ロケータ3にて捕捉された測位衛星の個数、測位信号のS/N比等に基づく測位精度の評価値が所定のレベル以上の高さであるか否かを判定してもよい。 Alternatively, the display mode determination unit 205 may determine whether an evaluation value of the positioning accuracy based on the pseudo-range residuals, the number of positioning satellites captured by the locator 3, the S/N ratio of the positioning signals, and the like is at or above a predetermined level.
 表示態様決定部205は、地物認識情報が周辺監視センサ4から取得されているか否かを判定する。地物認識情報は、周辺監視センサ4による地物の認識情報であり、車両Aの前後左右方向の重畳位置の補正に使用可能な情報である。当該地物には、例えば、停止線、交差点の中央標示、走行区画線等の路面標示が含まれる。これらの地物の車両Aに対する相対位置に基づき、地図データ上での自車位置を補正することで、第2AR案内画像CT2の前後左右方向の重畳位置を補正可能である。なお、路面標示以外に、縁石等の道路境界、標識等の路上設置物などが自車位置の補正に使用可能な地物に含まれていてもよい。 The display mode determination unit 205 determines whether the feature recognition information is acquired from the surroundings monitoring sensor 4. The feature recognition information is recognition information of the feature by the peripheral monitoring sensor 4, and is information that can be used to correct the overlapping position of the vehicle A in the front, rear, left, and right directions. The features include, for example, road markings such as stop lines, intersection central markings, and lane markings. By correcting the own vehicle position on the map data based on the relative positions of these features with respect to the vehicle A, it is possible to correct the overlapping position of the second AR guide image CT2 in the front-rear and left-right directions. In addition to road markings, road boundaries such as curbs and road installations such as signs may be included in the features that can be used to correct the vehicle position.
 表示態様決定部205は、以上の各種情報の組み合わせ、すなわち自車位置の測位精度の高低、および地物認識情報の有無の組み合わせに基づいて、表示される第2AR案内画像CT2の重畳位置ずれの大きさを評価する。例えば、表示態様決定部205は、組み合わせに応じて、重畳位置ずれの大きさを「小」「中」「大」の3段階のレベルに分類する。 The display mode determination unit 205 evaluates the magnitude of the superimposed position shift of the displayed second AR guidance image CT2 based on the combination of the above various types of information, that is, the combination of the level of positioning accuracy of the own-vehicle position and the presence or absence of feature recognition information. For example, the display mode determination unit 205 classifies the magnitude of the superimposed position shift into three levels, “small”, “medium”, and “large”, according to the combination.
 具体的には、表示態様決定部205は、測位精度が所定のレベル以上であり、地物認識情報が有る場合には、ずれの大きさ小と判定する。表示態様決定部205は、測位精度が所定のレベル以上であり、地物認識情報が無い場合には、ずれの大きさ中と判定する。表示態様決定部205は、測位精度が所定のレベル未満であり、地物認識情報が有る場合にも、ずれの大きさ中と判定する。表示態様決定部205は、測位精度が所定のレベル未満であり、地物認識情報も無い場合には、ずれの大きさ大と判定する。 Specifically, when the positioning accuracy is equal to or higher than a predetermined level and the feature recognition information is present, the display mode determination unit 205 determines that the deviation is small. When the positioning accuracy is equal to or higher than a predetermined level and there is no feature recognition information, the display mode determination unit 205 determines that the deviation is in the middle. The display mode determination unit 205 determines that the deviation is in the middle even when the positioning accuracy is less than the predetermined level and the feature recognition information is present. The display mode determination unit 205 determines that the magnitude of the deviation is large when the positioning accuracy is less than the predetermined level and there is no feature recognition information.
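The three-level classification described in the preceding paragraphs reduces to a two-input lookup. The following sketch uses illustrative names; only the mapping from the two conditions to the levels comes from the specification.

```python
def deviation_level(positioning_accurate: bool, feature_recognized: bool) -> str:
    """Hypothetical sketch of the superimposed-position-shift classification.

    'small'  : positioning accuracy at/above the level AND feature
               recognition information available.
    'large'  : neither condition holds.
    'medium' : exactly one condition holds.
    """
    if positioning_accurate and feature_recognized:
        return "small"
    if not positioning_accurate and not feature_recognized:
        return "large"
    return "medium"
```

Either the loss of positioning accuracy or the absence of feature recognition information alone degrades the evaluation by one level; losing both degrades it by two.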
 表示態様決定部205は、表示態様の決定結果および第2表示態様の場合に評価したずれの大きさを、経路案内画像の生成に必要な情報とともに表示生成部206へと提供する。 The display mode determination unit 205 provides the display generation unit 206 with the determination result of the display mode and the magnitude of the deviation evaluated in the case of the second display mode together with the information necessary for generating the route guidance image.
 表示生成部206は、表示態様決定部205から提供された情報に基づき、第1AR案内画像CT1および第2AR案内画像CT2のいずれかを生成する。各AR案内画像CT1,CT2は、AR表示によって案内地点における車両Aの走行予定経路を示す。各AR案内画像CT1,CT2は、第1実施形態と同様に路面を重畳対象としたAR虚像である。一例として、交差点CPを含む走行エリアにおける交差点CPでの右左折(図では左折)を案内するシーンの場合、各AR案内画像CT1,CT2は、交差点CPへの進入経路を示す進入経路コンテンツCTaと、交差点CPからの退出経路を示す退出経路コンテンツCTeとを含んでいる。進入経路コンテンツCTaは、例えば、走行予定経路に沿って並んだ三角形状の複数のオブジェクトとされる。退出経路コンテンツCTeは、走行予定経路に沿って並んだ矢印形状の複数のオブジェクトとされる。 The display generation unit 206 generates either the first AR guidance image CT1 or the second AR guidance image CT2 based on the information provided by the display mode determination unit 205. Each AR guidance image CT1, CT2 shows the planned traveling route of the vehicle A at the guidance point by AR display. Each of the AR guide images CT1 and CT2 is an AR virtual image in which the road surface is to be superimposed, as in the first embodiment. As an example, in the case of a scene that guides a right/left turn (a left turn in the figure) at an intersection CP in a traveling area including the intersection CP, each AR guidance image CT1, CT2 includes an entry route content CTa indicating an approach route to the intersection CP. , And an exit route content CTe indicating an exit route from the intersection CP. The approach route content CTa is, for example, a plurality of triangular objects arranged along the planned travel route. The exit route content CTe is a plurality of arrow-shaped objects arranged along the planned travel route.
 第1AR案内画像CT1を生成する場合、表示生成部206は、高精度地図データを利用して第1AR案内画像CT1の重畳位置および重畳形状を決定する。具体的には、表示生成部206は、高精度地図データに基づく路面位置、ロケータ3による自車位置、DSM22による乗員の視点位置、および設定された投影領域PAの位置関係等の各種位置情報を用いる。表示生成部206は、この各種位置情報に基づき、幾何学的な演算により第1AR案内画像CT1の重畳位置および重畳形状を算出する。 When generating the first AR guidance image CT1, the display generation unit 206 determines the superimposition position and superimposition shape of the first AR guidance image CT1 using the high-precision map data. Specifically, the display generation unit 206 uses various position information such as the road surface position based on the high-precision map data, the own-vehicle position from the locator 3, the occupant's viewpoint position from the DSM 22, and the positional relationship of the set projection area PA. Based on this position information, the display generation unit 206 calculates the superimposition position and superimposition shape of the first AR guidance image CT1 by geometric calculation.
 詳記すると、表示生成部206は、高精度地図データに基づく自車位置情報、高精度地図データおよび検出情報等に基づいて車両Aの現在の走行環境を仮想空間中に再現する。詳記すると、図12に示すように、表示生成部206は、仮想の3次元空間の基準位置に自車オブジェクトAOを設定する。表示生成部206は、地図データの示す形状の道路モデルを、自車位置情報に基づき、自車オブジェクトAOに関連付けて、3次元空間にマッピングする。 Specifically, the display generation unit 206 reproduces the current traveling environment of the vehicle A in the virtual space based on the vehicle position information based on the high precision map data, the high precision map data, the detection information, and the like. Specifically, as shown in FIG. 12, the display generation unit 206 sets the own vehicle object AO at the reference position in the virtual three-dimensional space. The display generation unit 206 maps the road model having the shape indicated by the map data in the three-dimensional space in association with the own vehicle object AO based on the own vehicle position information.
　表示生成部206は、自車オブジェクトAOに関連付けて、仮想カメラ位置VPおよび重畳範囲SAを設定する。仮想カメラ位置VPは、乗員の視点位置に対応する仮想位置である。表示生成部206は、DSM22から取得される最新の視点位置座標に基づき、自車オブジェクトAOに対する仮想カメラ位置VPを逐次補正する。重畳範囲SAは、虚像Viの重畳表示が可能となる範囲である。表示生成部206は、仮想カメラ位置VPと、記憶部13(図1参照)等に予め記憶された投影領域PAの外縁位置(座標)情報とに基づき、仮想カメラ位置VPから前方を見たときに結像面の内側となる前方範囲を、重畳範囲SAとして設定する。重畳範囲SAは、HUD230の投影領域PAおよび画角に対応している。 The display generation unit 206 sets the virtual camera position VP and the superimposition range SA in association with the own vehicle object AO. The virtual camera position VP is a virtual position corresponding to the occupant's viewpoint position. The display generation unit 206 sequentially corrects the virtual camera position VP relative to the own vehicle object AO based on the latest viewpoint position coordinates acquired from the DSM 22. The superimposition range SA is the range in which the virtual image Vi can be superimposed and displayed. Based on the virtual camera position VP and the outer-edge position (coordinate) information of the projection area PA stored in advance in the storage unit 13 (see FIG. 1) or the like, the display generation unit 206 sets, as the superimposition range SA, the forward range that falls inside the image plane when looking forward from the virtual camera position VP. The superimposition range SA corresponds to the projection area PA and the angle of view of the HUD 230.
 表示生成部206は、仮想空間中に第1AR案内画像CT1を模る仮想オブジェクトVOを配置する。仮想オブジェクトVOは、3次元空間の道路モデルの路面上において、走行予定経路に沿うように配置される。仮想オブジェクトVOは、第1AR案内画像CT1を虚像表示させる場合に、仮想空間中に設定される。仮想オブジェクトVOは、第1AR案内画像CT1の位置と形状を規定する。すなわち、仮想カメラ位置VPから見た仮想オブジェクトVOの形状が、視点位置から視認される第1AR案内画像CT1の虚像形状となる。 The display generation unit 206 arranges a virtual object VO imitating the first AR guide image CT1 in the virtual space. The virtual object VO is arranged along the planned traveling route on the road surface of the road model in the three-dimensional space. The virtual object VO is set in the virtual space when displaying the first AR guide image CT1 as a virtual image. The virtual object VO defines the position and shape of the first AR guide image CT1. That is, the shape of the virtual object VO viewed from the virtual camera position VP becomes the virtual image shape of the first AR guide image CT1 visually recognized from the viewpoint position.
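The chain described above — the shape of the virtual object VO as seen from the virtual camera position VP becoming the on-screen shape of the virtual image — can be sketched as a simple pinhole projection. The function and parameter names below are illustrative assumptions, not taken from the embodiment, and the full implementation would also account for the projection area's outer-edge coordinates and distortion of the windshield optics.

```python
# Minimal sketch: projecting vertices of a road-surface virtual object (VO),
# placed in the virtual 3-D space, onto an image plane as seen from the
# virtual camera position VP (the occupant's viewpoint).
# Coordinates: x = right, y = up, z = forward, in meters.

def project_point(point, camera_pos, image_plane_dist):
    """Pinhole projection of one 3-D point onto an image plane located
    image_plane_dist ahead of the camera; None if behind the viewpoint."""
    x, y, z = (p - c for p, c in zip(point, camera_pos))
    if z <= 0:
        return None  # behind the viewpoint; cannot be superimposed
    scale = image_plane_dist / z
    return (x * scale, y * scale)

def project_virtual_object(vertices, camera_pos, image_plane_dist=1.0):
    """Projected outline of the VO; this outline corresponds to the virtual
    image shape seen from the viewpoint position."""
    return [p for v in vertices
            if (p := project_point(v, camera_pos, image_plane_dist)) is not None]
```

For example, a point on the road 10 m ahead, viewed from an eye point 1.2 m above the road, lands 0.12 units below the image-plane center, which is how more distant parts of the route content appear higher (closer to the horizon) in the projection area.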
 表示生成部206は、自車車線Lns上の仮想オブジェクトVOに関して、車線幅方向における自車車線Lnsの中央部Lcに配置する。中央部Lcは、例えば、自車車線Lnsの走行区画線または道路端により規定される両側の車線境界線同士の中間点である。 The display generation unit 206 arranges the virtual object VO on the own lane Lns in the central portion Lc of the own lane Lns in the lane width direction. The central portion Lc is, for example, a midway point between the lane boundary lines on both sides defined by the lane markings of the own lane Lns or the road edges.
　これにより、進入経路コンテンツCTaの重畳位置は、自車車線Lnsの実質的な中央部Lcに設定される(図3参照)。なお、現在走行中の自車車線Lnsと、交差点CPへの進入車線が異なる場合には、進入経路コンテンツCTaは、自車車線Lnsの中央部から進入車線の中央部へと移り、進入車線の中央部を辿って延びるように表示されればよい。 As a result, the superimposition position of the approach route content CTa is set to the substantial center portion Lc of the own lane Lns (see FIG. 3). When the own lane Lns currently being traveled differs from the approach lane to the intersection CP, the approach route content CTa may be displayed so as to shift from the center of the own lane Lns to the center of the approach lane and extend along the center of the approach lane.
　また、退出経路コンテンツCTeは、走行予定経路に沿って、進入経路コンテンツCTaに続いて並ぶように配置される。退出経路コンテンツCTeは、交差点CPおよび退出車線の中央部における路面から浮いた位置に重畳される。なお、退出経路コンテンツCTeは、図13に示すように、重畳対象の路面が視認不可能な場合には、画角内の路面上端よりも上方に浮いて視認されるように、重畳位置を決定される。 Further, the exit route content CTe is arranged so as to follow the approach route content CTa along the planned travel route. The exit route content CTe is superimposed at positions floating above the road surface at the intersection CP and in the center of the exit lane. As shown in FIG. 13, when the road surface to be superimposed is not visible, the superimposition position of the exit route content CTe is determined so that it appears to float above the upper end of the road surface within the angle of view.
　表示生成部206は、以上の第1AR案内画像CT1を、交差点CPまでの残距離が閾値(例えば300m)を下回った場合に表示開始する。表示生成部206は、第1AR案内画像CT1の重畳位置および重畳形状を逐次更新することで、路面に相対固定されているように表示させる。すなわち、表示生成部206は、車両Aの走行に伴って相対的に移動する路面に追従するように、第1AR案内画像CT1を乗員の見た目上で移動可能に表示させる。 The display generation unit 206 starts displaying the first AR guidance image CT1 described above when the remaining distance to the intersection CP falls below a threshold value (for example, 300 m). By sequentially updating the superimposition position and superimposition shape of the first AR guidance image CT1, the display generation unit 206 displays it as if it were fixed relative to the road surface. That is, the display generation unit 206 displays the first AR guidance image CT1 so that, from the occupant's point of view, it appears to move so as to follow the road surface, which moves relatively as the vehicle A travels.
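The road-fixed behavior described here amounts to keeping the overlay's world position constant and re-deriving its forward distance from the vehicle every frame. A one-dimensional sketch along the travel direction, with illustrative names and the 300 m example threshold from the text:

```python
# Sketch of road-relative fixing: the guidance point's world position does not
# change; only the vehicle position does, so the overlay's remaining forward
# distance is recomputed each frame and used for re-projection.

DISPLAY_START_THRESHOLD_M = 300.0  # example value from the description

def overlay_forward_distance(overlay_world_pos_m, vehicle_world_pos_m):
    """Remaining forward distance used to re-project the overlay; None while
    the guidance point is still beyond the display-start threshold."""
    d = overlay_world_pos_m - vehicle_world_pos_m
    return d if d < DISPLAY_START_THRESHOLD_M else None
```

Feeding the returned distance into the per-frame projection makes the content appear to stay fixed on the road and flow toward the occupant as the vehicle travels.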
　第2AR案内画像CT2を生成する場合、表示生成部206は、高精度地図データの代わりにナビゲーション地図データを利用して第2AR案内画像CT2の重畳位置および重畳形状を決定する。この場合、表示生成部206は、重畳対象の路面を起伏のない平坦な路面と仮定して、その路面位置を設定する。例えば、表示生成部206は、水平面の路面を重畳対象の仮想路面に設定して、その仮想路面位置と他の各種位置情報とに基づく幾何学的な演算により、第2AR案内画像CT2の重畳位置および重畳形状を算出する。 When generating the second AR guidance image CT2, the display generation unit 206 determines the superimposition position and superimposition shape of the second AR guidance image CT2 using the navigation map data instead of the high-precision map data. In this case, the display generation unit 206 sets the road surface position on the assumption that the road surface to be superimposed is flat, without undulations. For example, the display generation unit 206 sets a horizontal road surface as the virtual road surface to be superimposed, and calculates the superimposition position and superimposition shape of the second AR guidance image CT2 by geometric computation based on the virtual road surface position and the other various pieces of position information.
　したがって、この場合に表示生成部206により設定される仮想路面は、高精度地図データに基づいて設定されたものと比較して、より不正確なものとなり得る。例えば、図14に示すように、上り勾配の路面に勾配のない平坦な交差点CPが連なる道路形状の場合、交差点CP部分の仮想路面が、実際の路面に対してずれたものとなり得る。なお、図14の例では、交差点CP部分の仮想路面のずれを明示するため、仮想路面の形状が上り勾配を反映したものとしているが、実際には上り勾配が仮想路面に反映されるとは限らない。 Therefore, the virtual road surface set by the display generation unit 206 in this case can be less accurate than one set based on the high-precision map data. For example, as shown in FIG. 14, in the case of a road shape in which a flat, slope-free intersection CP adjoins an uphill road surface, the virtual road surface at the intersection CP portion may be displaced from the actual road surface. In the example of FIG. 14, the shape of the virtual road surface reflects the uphill slope in order to clearly show the displacement of the virtual road surface at the intersection CP portion; in practice, however, the uphill slope is not necessarily reflected in the virtual road surface.
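The size of the displacement between the assumed flat virtual road surface and an actual graded road can be estimated with elementary trigonometry; the helper below is an illustrative aid only, not part of the embodiment.

```python
# Vertical offset between a road of constant grade and the horizontal virtual
# road surface assumed when only navigation map data is available. A second
# AR guidance image anchored to the flat plane would appear displaced by
# roughly this amount at the given forward distance.

def flat_road_height_error(distance_m, grade_percent):
    return distance_m * grade_percent / 100.0
```

For a 5 % upgrade viewed 100 m ahead, the assumed flat plane sits about 5 m below the actual road surface, which is why the text evaluates the displacement level before placing the virtual object.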
　第2AR案内画像CT2の生成において、表示生成部206は、仮想空間中の仮想オブジェクトVOの左右方向の位置を、ずれの大きさに基づいて決定する。具体的には、ずれの大きさが小レベルである場合、表示生成部206は、車両Aの中央に対応する重畳範囲SA内の位置である車両中央位置Vcに仮想オブジェクトVOを配置する。ここで車両中央位置Vcは、車幅方向における車両Aの中央を通り、且つ車両Aの前後方向に延びる仮想の直線を仮想路面上に想定した場合における、重畳範囲SA内での当該直線の位置である。これにより、進入経路コンテンツCTaは、図5に示すように、投影領域PAの上下方向に対して傾いた並びとなる。 In generating the second AR guidance image CT2, the display generation unit 206 determines the lateral position of the virtual object VO in the virtual space based on the magnitude of the displacement. Specifically, when the magnitude of the displacement is at the small level, the display generation unit 206 arranges the virtual object VO at the vehicle center position Vc, which is the position within the superimposition range SA corresponding to the center of the vehicle A. Here, the vehicle center position Vc is the position, within the superimposition range SA, of a virtual straight line assumed on the virtual road surface that passes through the center of the vehicle A in the vehicle width direction and extends in the front-rear direction of the vehicle A. As a result, the approach route content CTa is arranged at an angle to the vertical direction of the projection area PA, as shown in FIG. 5.
 そして、ずれの大きさが中レベルまたは大レベルである場合、表示生成部206は、投影領域PAの左右方向における中央部Acに第2AR案内画像CT2を配置する。この場合、進入経路コンテンツCTaは、図6に示すように、投影領域PAの上下方向に並んで配置された状態で表示される。 Then, when the magnitude of the shift is at the middle level or the large level, the display generation unit 206 arranges the second AR guide image CT2 in the central portion Ac in the left-right direction of the projection area PA. In this case, the approach route contents CTa are displayed in a state of being arranged side by side in the vertical direction of the projection area PA, as shown in FIG.
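The two-branch placement rule described above can be sketched as follows. The level constants and function name are assumptions for illustration; the embodiment itself only distinguishes small versus medium/large displacement levels.

```python
# Lateral anchoring of the second AR guidance image: a small displacement
# level keeps the image at the vehicle-center position Vc; at a medium or
# large level it falls back to the horizontal center Ac of the projection
# area PA.

SMALL, MEDIUM, LARGE = range(3)  # displacement levels (illustrative)

def lateral_anchor(shift_level, vehicle_center_x, projection_center_x):
    if shift_level == SMALL:
        return vehicle_center_x   # follow the vehicle's forward line (Vc)
    return projection_center_x    # center of the projection area (Ac)
```

The fallback reflects the design choice in the text: once the superimposition error is too large for the content to plausibly sit on a lane, a display-centered, vertically stacked layout avoids misleading the occupant.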
　また、表示生成部206は、地物認識情報が有る場合、当該地物認識情報に基づき重畳位置を補正する。例えば、表示生成部206は、ナビゲーション地図データに基づき設定された仮想路面上での前後左右方向の自車位置を、地物認識情報に基づき補正した上で、第2AR案内画像CT2の重畳位置および重畳形状を算出する。 Further, when feature recognition information is available, the display generation unit 206 corrects the superimposition position based on that feature recognition information. For example, the display generation unit 206 corrects the own vehicle position in the front-rear and left-right directions on the virtual road surface set based on the navigation map data using the feature recognition information, and then calculates the superimposition position and superimposition shape of the second AR guidance image CT2.
　なお、表示生成部206は、地図データ以外の高さ情報である高さ補正情報が有る場合、当該高さ補正情報に基づき重畳位置を補正する。高さ補正情報は、例えば、路車間通信により取得される路側装置の3次元位置情報等である。この場合、表示生成部206は、車両Aに搭載されたV2X通信器を介して、情報を取得すればよい。表示生成部206は、または、高さ補正情報は、周辺監視センサ4による検出物体の高さ情報であってもよい。すなわち、周辺監視センサ4の検出情報の解析により路上設置物、路面標示等の3次元位置情報を特定可能な場合、当該3次元位置情報に含まれる高さ情報が、高さ補正情報に含まれてもよい。表示生成部206は、例えば、高さ補正情報に基づき仮想路面の位置および形状を水平面の路面から変更することで、当該仮想路面上に仮想的に配置される第2AR案内画像CT2の高さ方向の重畳位置を補正する。 When height correction information, which is height information other than map data, is available, the display generation unit 206 corrects the superimposition position based on that height correction information. The height correction information is, for example, three-dimensional position information of a roadside device acquired by road-to-vehicle communication. In this case, the display generation unit 206 may acquire the information via a V2X communication device mounted on the vehicle A. Alternatively, the height correction information may be height information of an object detected by the periphery monitoring sensor 4. That is, when three-dimensional position information of roadside installations, road markings, and the like can be identified by analyzing the detection information of the periphery monitoring sensor 4, the height information included in that three-dimensional position information may be included in the height correction information. For example, the display generation unit 206 corrects the height-direction superimposition position of the second AR guidance image CT2, which is virtually arranged on the virtual road surface, by changing the position and shape of the virtual road surface from the horizontal road surface based on the height correction information.
　加えて、表示生成部206は、第2AR案内画像CT2の重畳表示を、第1AR案内画像CT1よりも走行予定経路の手前側までに制限する。具体的には、表示生成部206は、第2AR案内画像CT2の退出経路コンテンツCTeについて、第1AR案内画像CT1よりも走行予定経路の車両Aから離れた側に重畳される部分を非表示とし、手前側に重畳される部分のみを表示させる。図15および図16に示す例では、第1AR案内画像CT1が表示される場合には退出経路コンテンツCTeが3つ表示されるのに対して、第2AR案内画像CT2が表示される場合には、退出経路コンテンツCTeがより手前側の1つのみに制限されている。すなわち、第2AR案内画像CT2は、交差点CPからの退出方向を提示し、退出経路の道筋に関しては提示しないコンテンツとされ、第1AR案内画像CT1と比較して簡素なものとなる。 In addition, the display generation unit 206 limits the superimposed display of the second AR guidance image CT2 to the near side of the planned travel route compared with the first AR guidance image CT1. Specifically, for the exit route content CTe of the second AR guidance image CT2, the display generation unit 206 hides the portion that would be superimposed on the side of the planned travel route farther from the vehicle A than in the first AR guidance image CT1, and displays only the portion superimposed on the near side. In the examples shown in FIGS. 15 and 16, three pieces of exit route content CTe are displayed when the first AR guidance image CT1 is displayed, whereas when the second AR guidance image CT2 is displayed, the exit route content CTe is limited to the single nearest piece. That is, the second AR guidance image CT2 is content that presents the exit direction from the intersection CP but not the course of the exit route, and is simpler than the first AR guidance image CT1.
　表示生成部206は、以上の第2AR案内画像CT2を、第1AR案内画像CT1とは異なるタイミングで表示開始する。具体的には、表示生成部206は、交差点CPまでの残距離が第1閾値を下回った段階では、第2AR案内画像CT2の代わりに非AR案内画像Gi2を表示させる。そして、表示生成部206は、残距離が第1閾値よりも小さい第2閾値(例えば100m)を下回った場合に、非AR案内画像Gi2から第2AR案内画像CT2へと表示を切り替える。すなわち、表示生成部206は、第1AR案内画像CT1を表示する場合よりも、交差点CPにより接近した段階で、第2AR案内画像CT2の表示を開始する。なお、非AR案内画像Gi2を表示させる閾値は、第2閾値よりも大きい値であれば第1閾値でなくてもよい。 The display generation unit 206 starts displaying the second AR guidance image CT2 described above at a timing different from that of the first AR guidance image CT1. Specifically, at the stage where the remaining distance to the intersection CP falls below the first threshold, the display generation unit 206 displays the non-AR guidance image Gi2 instead of the second AR guidance image CT2. Then, when the remaining distance falls below a second threshold (for example, 100 m) that is smaller than the first threshold, the display generation unit 206 switches the display from the non-AR guidance image Gi2 to the second AR guidance image CT2. That is, the display generation unit 206 starts displaying the second AR guidance image CT2 at a stage closer to the intersection CP than when displaying the first AR guidance image CT1. The threshold for displaying the non-AR guidance image Gi2 need not be the first threshold as long as it is larger than the second threshold.
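The two-threshold timing described here can be summarized as a small selection function. The threshold values mirror the examples given in the text (300 m and 100 m) and, like the return labels, are assumptions for illustration.

```python
# Guidance-image selection when only navigation map data is available:
# nothing beyond the first threshold, the non-AR image Gi2 between the
# thresholds, and the second AR image CT2 inside the second threshold.

FIRST_THRESHOLD_M = 300.0   # example value from the description
SECOND_THRESHOLD_M = 100.0  # example value from the description

def select_guidance_image(remaining_distance_m):
    if remaining_distance_m < SECOND_THRESHOLD_M:
        return "second_ar"  # AR image CT2, superimposed on the assumed road
    if remaining_distance_m < FIRST_THRESHOLD_M:
        return "non_ar"     # non-AR guidance image Gi2
    return "none"
```

Deferring the AR image until the vehicle is near the (typically flat) intersection is what keeps the flat-road assumption's superimposition error small.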
 次に、HCU20の実行する表示処理について、図17のフローチャートを参照して説明する。なお、図17に示す処理のうち、図9のフローチャートと同一の符号のステップに関しては、説明を適宜省略する。 Next, the display processing executed by the HCU 20 will be described with reference to the flowchart in FIG. Of the processing shown in FIG. 17, description of steps having the same reference numerals as those in the flowchart of FIG. 9 will be appropriately omitted.
 HCU20は、ステップS44にて形状条件が不成立であると判定されると、ステップS46へと進む。ステップS46では、表示態様決定部205にて、高精度地図データの鮮度条件を判定する。鮮度条件が不成立であると判定した場合には、ステップS50へと進み、鮮度条件が成立すると判定した場合には、ステップS80へと進む。 If it is determined in step S44 that the shape condition is not satisfied, the HCU 20 proceeds to step S46. In step S46, the display mode determination unit 205 determines the freshness condition of the high precision map data. If it is determined that the freshness condition is not satisfied, the process proceeds to step S50, and if it is determined that the freshness condition is satisfied, the process proceeds to step S80.
 ステップS50にて高精度地図データを取得すると、ステップS65へと進み、表示生成部206にて第1AR案内画像CT1を生成する。一方で、ステップS80にてナビゲーション地図データを取得した場合には、ステップS81へと進む。 When the high-precision map data is acquired in step S50, the process proceeds to step S65, and the display generation unit 206 generates the first AR guidance image CT1. On the other hand, when the navigation map data is acquired in step S80, the process proceeds to step S81.
 ステップS81では、表示生成部206にて、交差点CPまでの残距離が第2閾値を下回るか否かを判定する。第2閾値を下回っていないと判定した場合には、ステップS82へと進み、非AR案内画像Gi2を生成した後、ステップS120へと進む。一方で、ステップS81にて第2閾値を下回ったと判定された場合には、ステップS83へと進む。ステップS83では、表示態様決定部205等にて、センサ情報取得部204を介して重畳位置の補正情報が取得される。なお、取得可能な補正情報が無い場合には、ステップS83はスキップされる。 In step S81, the display generation unit 206 determines whether the remaining distance to the intersection CP is less than the second threshold value. When it is determined that it is not below the second threshold value, the process proceeds to step S82, the non-AR guidance image Gi2 is generated, and then the process proceeds to step S120. On the other hand, when it is determined in step S81 that the value is below the second threshold, the process proceeds to step S83. In step S83, the display mode determination unit 205 or the like acquires the correction information of the superimposed position via the sensor information acquisition unit 204. If there is no correction information that can be acquired, step S83 is skipped.
 次に、ステップS84では、表示態様決定部205にて、第2AR案内画像CT2の位置ずれの大きさを評価し、ステップS95へと進む。ステップS95では、取得したナビゲーション地図データ、補正情報、および位置ずれの大きさに関する情報等に基づいて、表示生成部206にて、第2AR案内画像CT2を生成する。 Next, in step S84, the display mode determination unit 205 evaluates the magnitude of the positional deviation of the second AR guide image CT2, and proceeds to step S95. In step S95, the display generation unit 206 generates the second AR guide image CT2 based on the acquired navigation map data, the correction information, the information regarding the magnitude of the positional deviation, and the like.
 以上に説明した第3実施形態によれば、第1表示態様において、高精度地図情報に基づく重畳位置にて路面に対して第1AR案内画像CT1が重畳表示される。そして、第2表示態様において、ナビゲーション地図データに基づく重畳位置にて第2AR案内画像CT2が重畳表示される。故に、HCU20は、高精度地図データを利用可能なエリアと利用不可能なエリアとで、地図データを使い分けつつ、特定の重畳対象に対して虚像Viを重畳表示可能である。 According to the third embodiment described above, in the first display mode, the first AR guidance image CT1 is superimposed and displayed on the road surface at the superimposed position based on the high-precision map information. Then, in the second display mode, the second AR guidance image CT2 is superimposed and displayed at the superimposed position based on the navigation map data. Therefore, the HCU 20 can superimpose and display the virtual image Vi on the specific superimposition target while properly using the map data in the area where the high-precision map data can be used and the area where the high-precision map data cannot be used.
 また、表示生成部206は、交差点CPまでの残距離が、第1AR案内画像CT1を表示開始する第1閾値よりも短い第2閾値に到達すると、第2AR案内画像CT2の表示を開始する。交差点CPは比較的平坦な地形である場合が多いため、表示生成部206は、第1AR案内画像CT1の表示シーンよりも交差点CPに接近した段階で第2AR案内画像CT2を表示開始させることで、第2AR案内画像CT2の位置ずれの大きさを抑制し得る。または、表示生成部206は、第2AR案内画像CT2の位置ずれが大きくなる走行区間を短くし得る。 Further, the display generation unit 206 starts displaying the second AR guidance image CT2 when the remaining distance to the intersection CP reaches a second threshold shorter than the first threshold at which the first AR guidance image CT1 is displayed. Since the intersection CP is often a relatively flat terrain, the display generation unit 206 starts displaying the second AR guidance image CT2 at a stage closer to the intersection CP than the display scene of the first AR guidance image CT1. The magnitude of the positional deviation of the second AR guide image CT2 can be suppressed. Alternatively, the display generation unit 206 can shorten the traveling section in which the positional deviation of the second AR guide image CT2 becomes large.
 (他の実施形態)
 この明細書における開示は、例示された実施形態に制限されない。開示は、例示された実施形態と、それらに基づく当業者による変形態様を包含する。例えば、開示は、実施形態において示された部品および/または要素の組み合わせに限定されない。開示は、多様な組み合わせによって実施可能である。開示は、実施形態に追加可能な追加的な部分をもつことができる。開示は、実施形態の部品および/または要素が省略されたものを包含する。開示は、ひとつの実施形態と他の実施形態との間における部品および/または要素の置き換え、または組み合わせを包含する。開示される技術的範囲は、実施形態の記載に限定されない。
(Other embodiments)
The disclosure herein is not limited to the illustrated embodiments. The disclosure encompasses the illustrated embodiments and variations on them based on them. For example, the disclosure is not limited to the combination of parts and/or elements shown in the embodiments. The disclosure can be implemented in various combinations. The disclosure may have additional parts that may be added to the embodiments. The disclosure includes omissions of parts and/or elements of the embodiments. The disclosure includes replacements or combinations of parts and/or elements between one embodiment and another. The disclosed technical scope is not limited to the description of the embodiments.
　上述の実施形態において、表示生成部206は、高精度地図情報に基づきAR案内画像Gi1を経路案内画像として生成し、ナビゲーション地図データに基づき非AR案内画像Gi2を経路案内画像として生成するとした。これに代えて、またはこれに加えて、表示生成部206は、経路案内画像以外の虚像Viを取得する地図情報によって異なる表示態様で生成する構成であってもよい。例えば、表示生成部206は、乗員が注視すべき対象物(例えば先行車、歩行者および道路標識等)への注視を促す画像を、高精度地図情報を取得可能である場合には対象物に重畳させ、取得できない場合には対象物への重畳を中止してもよい。 In the above-described embodiment, the display generation unit 206 generates the AR guidance image Gi1 as the route guidance image based on the high-precision map information and generates the non-AR guidance image Gi2 as the route guidance image based on the navigation map data. Instead of or in addition to this, the display generation unit 206 may be configured to generate a virtual image Vi other than the route guidance image in a display mode that differs depending on the map information acquired. For example, the display generation unit 206 may superimpose an image prompting the occupant to gaze at an object requiring attention (for example, a preceding vehicle, a pedestrian, a road sign, or the like) on that object when the high-precision map information can be acquired, and may stop superimposing it on the object when the information cannot be acquired.
 上述の実施形態において、表示生成部206は、経路案内画像とともに態様提示画像Iiを表示するとしたが、経路案内画像の表示前に態様提示画像Iiを表示開始してもよい。 In the above embodiment, the display generation unit 206 displays the mode presentation image Ii together with the route guidance image, but the mode presentation image Ii may be displayed before the route guidance image is displayed.
 第1実施形態において、HCU20は、形状条件が成立した場合に、ナビゲーション地図データに基づいて非AR案内画像Gi2を表示させるとした。これに代えて、HCU20は、形状条件が成立し、且つ高精度地図データを取得可能な場合には、高精度地図データに基づいて非AR案内画像Gi2を表示させてもよい。 In the first embodiment, the HCU 20 is supposed to display the non-AR guidance image Gi2 based on the navigation map data when the shape condition is satisfied. Alternatively, the HCU 20 may display the non-AR guidance image Gi2 based on the high-precision map data when the shape condition is satisfied and the high-precision map data can be acquired.
　第3実施形態において、表示生成部206は、第2AR案内画像CT2の重畳位置ずれの大きさに応じて、第2AR案内画像CT2の重畳位置を車両中央位置Vcおよび投影領域PAの中央部Acのどちらかに決定するとした。これに代えて、表示生成部206は、いずれか一方のみに重畳する構成であってもよい。 In the third embodiment, the display generation unit 206 determines the superimposition position of the second AR guidance image CT2 to be either the vehicle center position Vc or the central portion Ac of the projection area PA depending on the magnitude of the superimposition position displacement of the second AR guidance image CT2. Instead of this, the display generation unit 206 may be configured to superimpose the image at only one of these positions.
　第3実施形態において、表示生成部206は、非AR案内画像Gi2から第2AR案内画像CT2への切替を交差点CPまでの残距離に基づいて行うとしたが、切替を行う条件はこれに限定されない。例えば、表示生成部206は、第2AR案内画像CT2の重畳位置に関する補正情報を取得できた時点で切り替える構成であってもよい。ここで、補正情報は、第2AR案内画像CT2の重畳位置の補正に利用可能な情報であり、例えば、交差点CPの停止線、交差点CPの中央標示、および他の自車車線Lnsの路面標示等の位置情報である。補正情報は、周辺監視センサ4の検出情報の解析結果として取得される。 In the third embodiment, the display generation unit 206 switches from the non-AR guidance image Gi2 to the second AR guidance image CT2 based on the remaining distance to the intersection CP, but the condition for switching is not limited to this. For example, the display generation unit 206 may be configured to switch at the time when correction information regarding the superimposition position of the second AR guidance image CT2 has been acquired. Here, the correction information is information usable for correcting the superimposition position of the second AR guidance image CT2, for example, position information on the stop line of the intersection CP, the center marking of the intersection CP, and other road markings of the own lane Lns. The correction information is acquired as an analysis result of the detection information of the periphery monitoring sensor 4.
　上述の実施形態において、表示生成部206は、高精度地図データに、将来走行区間GSについての情報が含まれていない場合には、第2表示態様で経路案内画像を生成するとした。これに代えて、表示生成部206は、車両の現在位置に対応する高精度地図データが取得可能であれば、第1表示態様で経路案内画像を生成する構成であってもよい。この場合、表示生成部206は、車両の現在位置に対応する高精度地図データが取得不可能になった場合に、第1表示態様から第2表示態様に切り替えればよい。 In the above-described embodiment, the display generation unit 206 generates the route guidance image in the second display mode when the high-precision map data does not include information about the future traveling section GS. Instead of this, the display generation unit 206 may be configured to generate the route guidance image in the first display mode as long as high-precision map data corresponding to the current position of the vehicle can be acquired. In this case, the display generation unit 206 may switch from the first display mode to the second display mode when the high-precision map data corresponding to the current position of the vehicle can no longer be acquired.
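A minimal sketch of this mode fallback, with a change flag that a caller could use to trigger a gradual rather than instantaneous move of the guidance image between superimposition positions. The mode labels and function name are illustrative assumptions.

```python
# Per-update display-mode selection: the first display mode (high-precision
# map based) while such data is available for the current position, falling
# back to the second display mode otherwise. The returned flag marks the
# update on which the mode actually changed.

def update_display_mode(prev_mode, hp_map_available):
    new_mode = "first" if hp_map_available else "second"
    return new_mode, new_mode != prev_mode
```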
　加えて、第3実施形態の表示生成部206は、上述のように表示態様を切り替える場合、第1AR案内画像CT1の重畳位置から、第2AR案内画像CT2の重畳位置まで、経路案内画像が連続的に移動するように表示してもよい。これにより、表示生成部206は、重畳位置が瞬間的に切り替わることによる乗員の違和感を低減できる。なお、このときの経路案内画像の移動速度は、経路案内画像の移動自体に乗員の意識が誘導されない程度に遅いことが望ましい。 In addition, when switching the display mode as described above, the display generation unit 206 of the third embodiment may display the route guidance image so that it moves continuously from the superimposition position of the first AR guidance image CT1 to the superimposition position of the second AR guidance image CT2. This allows the display generation unit 206 to reduce the occupant's sense of discomfort caused by the superimposition position switching instantaneously. The moving speed of the route guidance image at this time is preferably slow enough that the movement itself does not draw the occupant's attention.
 第3実施形態において、表示生成部206は、第1AR案内画像CT1の進入経路コンテンツCTaと退出経路コンテンツCTeを、異なる形状のコンテンツで表示する。これに代えて、表示生成部206は、図18に示すように、各コンテンツCTa,CTeを実質的に同じ形状のコンテンツとしてもよい。図18に示す例では、各コンテンツCTa,CTeが、走行予定経路に沿って並ぶ複数の三角形の形状とされる。この場合、表示生成部206は、第2AR案内画像CT2の表示において、退出経路コンテンツCTeを、退出方向を示す矢印形状の画像に変更すればよい(図19参照)。なお、表示生成部206は、経路案内画像を、走行予定経路に沿って一続きに延びる帯状のコンテンツとして表示してもよい。この場合には、第2AR案内画像CT2が、第1AR案内画像CT1よりも走行予定経路の手前側までの長さに制限される態様で表示されればよい。 In the third embodiment, the display generation unit 206 displays the entry route content CTa and the exit route content CTe of the first AR guidance image CT1 in different shapes. Instead of this, the display generation unit 206 may set the contents CTa and CTe as contents having substantially the same shape, as shown in FIG. In the example shown in FIG. 18, each of the contents CTa and CTe is in the shape of a plurality of triangles arranged along the planned travel route. In this case, the display generation unit 206 may change the exit route content CTe to an arrow-shaped image indicating the exit direction in the display of the second AR guide image CT2 (see FIG. 19). The display generation unit 206 may display the route guidance image as a strip of content that extends continuously along the planned travel route. In this case, the second AR guide image CT2 may be displayed in a mode in which the length of the second AR guide image CT2 to the front side of the planned traveling route is limited to that of the first AR guide image CT1.
 上述の実施形態のプロセッサは、1つまたは複数のCPU(Central Processing Unit)を含む処理部である。こうしたプロセッサは、CPUに加えて、GPU(Graphics Processing Unit)およびDFP(Data Flow Processor)等を含む処理部であってよい。さらにプロセッサは、FPGA(Field-Programmable Gate Array)、並びにAIの学習及び推論等の特定処理に特化したIPコア等を含む処理部であってもよい。こうしたプロセッサの各演算回路部は、プリント基板に個別に実装された構成であってもよく、またはASIC(Application Specific Integrated Circuit)およびFPGA等に実装された構成であってもよい。 The processor of the above-described embodiment is a processing unit including one or more CPUs (Central Processing Units). Such a processor may be a processing unit including a GPU (Graphics Processing Unit) and a DFP (Data Flow Processor) in addition to the CPU. Further, the processor may be a processing unit including an FPGA (Field-Programmable Gate Array) and an IP core specialized in specific processing such as learning and inference of AI. Each arithmetic circuit unit of such a processor may be mounted individually on a printed circuit board, or may be mounted on an ASIC (Application Specific Integrated Circuit) and FPGA.
 表示制御プログラム等を記憶するメモリ装置には、フラッシュメモリ及びハードディスク等の種々の非遷移的実体的記憶媒体(non-transitory tangible storage medium)が採用可能である。こうした記憶媒体の形態も、適宜変更されてよい。例えば記憶媒体は、メモリカード等の形態であり、車載ECUに設けられたスロット部に挿入されて、制御回路に電気的に接続される構成であってよい。 For the memory device that stores the display control program, various non-transitional tangible storage mediums such as a flash memory and a hard disk can be adopted. The form of such a storage medium may be appropriately changed. For example, the storage medium may be in the form of a memory card or the like, and may be configured to be inserted into a slot portion provided in the vehicle-mounted ECU and electrically connected to the control circuit.
 本開示に記載の制御部およびその手法は、コンピュータプログラムにより具体化された1つ乃至は複数の機能を実行するようにプログラムされたプロセッサを構成する専用コンピュータにより、実現されてもよい。あるいは、本開示に記載の装置およびその手法は、専用ハードウエア論理回路により、実現されてもよい。もしくは、本開示に記載の装置およびその手法は、コンピュータプログラムを実行するプロセッサと1つ以上のハードウエア論理回路との組み合わせにより構成された1つ以上の専用コンピュータにより、実現されてもよい。また、コンピュータプログラムは、コンピュータにより実行されるインストラクションとして、コンピュータ読み取り可能な非遷移有形記録媒体に記憶されていてもよい。 The control unit and the method thereof described in the present disclosure may be realized by a dedicated computer that configures a processor programmed to execute one or more functions embodied by a computer program. Alternatively, the apparatus and method described in the present disclosure may be realized by a dedicated hardware logic circuit. Alternatively, the device and method described in the present disclosure may be implemented by one or more dedicated computers configured by a combination of a processor that executes a computer program and one or more hardware logic circuits. Further, the computer program may be stored in a computer-readable non-transition tangible recording medium as an instruction executed by a computer.
　本開示に記載の制御部及びその手法は、コンピュータプログラムにより具体化された一つ乃至は複数の機能を実行するようにプログラムされたプロセッサ及びメモリーを構成することによって提供された専用コンピュータにより、実現されてもよい。あるいは、本開示に記載の制御部及びその手法は、一つ以上の専用ハードウエア論理回路によってプロセッサを構成することによって提供された専用コンピュータにより、実現されてもよい。もしくは、本開示に記載の制御部及びその手法は、一つ乃至は複数の機能を実行するようにプログラムされたプロセッサ及びメモリーと一つ以上のハードウエア論理回路によって構成されたプロセッサとの組み合わせにより構成された一つ以上の専用コンピュータにより、実現されてもよい。また、コンピュータプログラムは、コンピュータにより実行されるインストラクションとして、コンピュータ読み取り可能な非遷移有形記録媒体に記憶されていてもよい。 The control unit and the method thereof described in the present disclosure may be realized by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by a computer program. Alternatively, the control unit and the method described in the present disclosure may be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits. Alternatively, the control unit and the method described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor and a memory programmed to execute one or more functions and a processor configured with one or more hardware logic circuits. Further, the computer program may be stored in a computer-readable non-transitory tangible recording medium as instructions executed by a computer.
 ここで、この出願に記載されるフローチャート、あるいは、フローチャートの処理は、複数のセクション(あるいはステップと言及される)から構成され、各セクションは、たとえば、S10と表現される。さらに、各セクションは、複数のサブセクションに分割されることができる、一方、複数のセクションが合わさって一つのセクションにすることも可能である。さらに、このように構成される各セクションは、デバイス、モジュール、ミーンズとして言及されることができる。 Here, the flowchart described in this application or the process of the flowchart is composed of a plurality of sections (also referred to as steps), and each section is expressed as, for example, S10. Further, each section can be divided into multiple subsections, while multiple sections can be combined into one section. Further, each section thus configured can be referred to as a device, module, means.
 本開示は、実施例に準拠して記述されたが、本開示は当該実施例や構造に限定されるものではないと理解される。本開示は、様々な変形例や均等範囲内の変形をも包含する。加えて、様々な組み合わせや形態、さらには、それらに一要素のみ、それ以上、あるいはそれ以下、を含む他の組み合わせや形態をも、本開示の範疇や思想範囲に入るものである。 Although the present disclosure has been described according to the embodiments, it is understood that the present disclosure is not limited to the embodiments and the structure. The present disclosure also includes various modifications and modifications within an equivalent range. In addition, various combinations and forms, and other combinations and forms including only one element, more, or less than those, also fall within the scope and spirit of the present disclosure.

Claims (14)

  1.  A display control device used in a vehicle (A) to control display of a virtual image (Vi) superimposed on a foreground of an occupant, the display control device comprising:
     a vehicle position acquisition unit (201) configured to acquire a position of the vehicle;
     a map information acquisition unit (203) configured to acquire high-precision map information corresponding to the position, or low-precision map information having a lower precision than the high-precision map information; and
     a display generation unit (206) configured to generate the virtual image in a first display mode based on the high-precision map information when the high-precision map information is acquirable, and to generate the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information is not acquirable.
  2.  The display control device according to claim 1, further comprising a map determination unit (202) configured to determine whether the high-precision map information is acquirable.
  3.  The display control device according to claim 1 or 2, wherein the display generation unit superimposes the virtual image on a specific superimposition target in the foreground in the first display mode, and stops superimposing the virtual image on the specific superimposition target in the second display mode.
  4.  The display control device according to claim 1 or 2, wherein, in the first display mode, the display generation unit superimposes the virtual image on a specific superimposition target in the foreground at a superimposition position based on the high-precision map information, and, in the second display mode, superimposes the virtual image on the specific superimposition target at a superimposition position based on the low-precision map information.
  5.  The display control device according to claim 4, wherein the display generation unit includes, in the virtual image, a route guidance image presenting a planned travel route of the vehicle in a travel area including an intersection (CP), and sets a remaining distance to the intersection at which generation of the route guidance image is started in the second display mode to be shorter than in a case where the route guidance image is generated in the first display mode.
  6.  The display control device according to any one of claims 1 to 5, wherein the display generation unit generates the virtual image in the second display mode when the high-precision map information does not include information on a future travel section of the vehicle.
  7.  The display control device according to any one of claims 1 to 6, wherein the display generation unit generates the virtual image in the second display mode, even when the high-precision map information is acquirable, in a case where a shape condition for stopping generation of the virtual image in the first display mode is satisfied with respect to a shape of a road on which the vehicle travels.
  8.  The display control device according to claim 3, wherein, in a case where a shape condition for stopping generation of the virtual image in the first display mode is satisfied with respect to a shape of a road on which the vehicle travels and the high-precision map information is acquirable, the display generation unit generates, based on the high-precision map information, the virtual image whose superimposition on the specific superimposition target has been stopped.
  9.  The display control device according to any one of claims 1 to 8, further comprising:
     a sensor information acquisition unit (204) configured to acquire height information of a detected object from an in-vehicle sensor (4),
     wherein, even when the high-precision map information is not acquirable, the display generation unit generates the virtual image in a display mode based on a combination of the low-precision map information and the height information when the height information is acquired by the sensor information acquisition unit.
  10.  The display control device according to any one of claims 1 to 9, further comprising a mode presentation unit (206) configured to present to the occupant in which of the first display mode and the second display mode the virtual image is being generated.
  11.  The display control device according to any one of claims 1 to 10, wherein the map information acquisition unit acquires, as the high-precision map information, map information including at least one of gradient information of a road, three-dimensional shape information of lane markings, and information from which a road gradient can be estimated.
  12.  A display control program used in a vehicle (A) to control display of a virtual image (Vi) superimposed on a foreground of an occupant, the display control program causing at least one processing unit (20a) to function as:
     a vehicle position acquisition unit (201) configured to acquire a position of the vehicle;
     a map information acquisition unit (203) configured to acquire high-precision map information corresponding to the position, or low-precision map information having a lower precision than the high-precision map information; and
     a display generation unit (206) configured to generate the virtual image in a first display mode based on the high-precision map information when the high-precision map information is acquirable, and to generate the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information is not acquirable.
  13.  A computer-readable persistent tangible recording medium containing instructions executed by a computer, the instructions being used in a vehicle (A) to control display of a virtual image (Vi) superimposed on a foreground of an occupant, the instructions comprising:
     acquiring a position of the vehicle;
     acquiring high-precision map information corresponding to the position, or low-precision map information having a lower precision than the high-precision map information; and
     generating the virtual image in a first display mode based on the high-precision map information when the high-precision map information is acquirable, and generating the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information is not acquirable.
  14.  A display control device used in a vehicle (A) to control display of a virtual image (Vi) superimposed on a foreground of an occupant, the display control device comprising:
     at least one processing unit (20a),
     wherein the at least one processing unit is configured to:
       acquire a position of the vehicle;
       acquire high-precision map information corresponding to the position, or low-precision map information having a lower precision than the high-precision map information; and
       generate the virtual image in a first display mode based on the high-precision map information when the high-precision map information is acquirable, and generate the virtual image in a second display mode, different from the first display mode, based on the low-precision map information when the high-precision map information is not acquirable.
PCT/JP2019/046318 2018-12-14 2019-11-27 Display control device, display control program, and tangible, non-transitory computer-readable recording medium WO2020121810A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112019006171.2T DE112019006171T5 (en) 2018-12-14 2019-11-27 Display controller, display control program and non-transitory tangible computer readable storage medium
US17/222,259 US20210223058A1 (en) 2018-12-14 2021-04-05 Display control device and non-transitory computer-readable storage medium for the same

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018-234566 2018-12-14
JP2018234566 2018-12-14
JP2019196468A JP7052786B2 (en) 2018-12-14 2019-10-29 Display control device and display control program
JP2019-196468 2019-10-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/222,259 Continuation US20210223058A1 (en) 2018-12-14 2021-04-05 Display control device and non-transitory computer-readable storage medium for the same

Publications (1)

Publication Number Publication Date
WO2020121810A1 true WO2020121810A1 (en) 2020-06-18

Family

ID=71076404

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/046318 WO2020121810A1 (en) 2018-12-14 2019-11-27 Display control device, display control program, and tangible, non-transitory computer-readable recording medium

Country Status (2)

Country Link
JP (1) JP7416114B2 (en)
WO (1) WO2020121810A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220308240A1 (en) * 2021-03-25 2022-09-29 Casio Computer Co., Ltd. Information processing device, information processing system, information processing method and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011135660A1 (en) * 2010-04-26 2011-11-03 パイオニア株式会社 Navigation system, navigation method, navigation program, and storage medium
JP2017167053A (en) * 2016-03-17 2017-09-21 株式会社デンソー Vehicle location determination device
JP2018133031A (en) * 2017-02-17 2018-08-23 オムロン株式会社 Driving switching support device and driving switching support method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5382356B2 (en) * 2010-03-25 2014-01-08 株式会社エクォス・リサーチ Driving assistance system
JP7009747B2 (en) * 2017-02-20 2022-01-26 株式会社Jvcケンウッド Terminal device, control method, program



Also Published As

Publication number Publication date
JP7416114B2 (en) 2024-01-17
JP2022079590A (en) 2022-05-26

Similar Documents

Publication Publication Date Title
JP7052786B2 (en) Display control device and display control program
US10293748B2 (en) Information presentation system
US11535155B2 (en) Superimposed-image display device and computer program
JP6566132B2 (en) Object detection method and object detection apparatus
JP6775188B2 (en) Head-up display device and display control method
US11996018B2 (en) Display control device and display control program product
WO2020162109A1 (en) Display control device, display control program, and persistent physical computer-readable medium
US20230191911A1 (en) Vehicle display apparatus
WO2020208989A1 (en) Display control device and display control program
WO2020246113A1 (en) Display control device and display control program
JP7416114B2 (en) Display control device and display control program
CN110888432B (en) Display system, display control method, and storage medium
JP7088152B2 (en) Display control device and display control program
JP2020199839A (en) Display control device
JP2020118545A (en) Display controller and display control program
JP2020138609A (en) Display control device for vehicle, display control method for vehicle, and display control program for vehicle
WO2020246114A1 (en) Display control device and display control program
JP7487713B2 (en) Vehicle display control device, vehicle display device, vehicle display control method and program
JP7294091B2 (en) Display controller and display control program
JP7151653B2 (en) In-vehicle display controller
JP7001169B2 (en) Operation plan display method and operation plan display device
JP2021028587A (en) In-vehicle display control device
JP2020091148A (en) Display controller and display control program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19894773

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19894773

Country of ref document: EP

Kind code of ref document: A1