WO2019137065A1 - Image processing method and device, vehicle-mounted head-up display system, and vehicle - Google Patents

Image processing method and device, vehicle-mounted head-up display system, and vehicle

Info

Publication number
WO2019137065A1
WO2019137065A1 (PCT/CN2018/111905)
Authority
WO
WIPO (PCT)
Prior art keywords
image
position information
displayed
range
image processing
Prior art date
Application number
PCT/CN2018/111905
Other languages
English (en)
French (fr)
Inventor
武乃福 (WU Naifu)
马希通 (MA Xitong)
刘向阳 (LIU Xiangyang)
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US 16/470,238 (granted as US11120531B2)
Priority to EP 18900446.8 (published as EP3739545A4)
Publication of WO2019137065A1


Classifications

    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation using feature-based methods
    • G06T 7/74 — Feature-based methods involving reference images or patches
    • G06T 5/80 — Geometric correction
    • G06T 2207/30268 — Subject of image: vehicle interior
    • B60K 35/00 — Instruments specially adapted for vehicles; arrangement of instruments in or on vehicles
    • B60K 35/10 — Input arrangements, i.e. from user to vehicle
    • B60K 35/23 — Head-up displays [HUD]
    • B60K 35/658 — Instruments ergonomically adjustable to the user
    • B60K 2360/149 — Instrument input by detecting viewing direction not otherwise provided for
    • G02B 27/0093 — Optical systems with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B 27/0101 — Head-up displays characterised by optical features
    • G02B 27/0179 — Display position adjusting means not related to the information to be displayed
    • G02B 2027/011 — Head-up displays comprising a device for correcting geometrical aberrations, distortion
    • G02B 2027/0138 — Head-up displays comprising image capture systems, e.g. camera
    • G02B 2027/0181 — Head-up displays: adaptation to the pilot/driver

Definitions

  • The present disclosure relates to the field of image processing and positioning, and to vehicles.
  • The present disclosure relates to an image processing method, an image processing apparatus, a vehicle-mounted head-up display system, and a vehicle.
  • A head-up display (HUD) can present important information on transparent glass directly in front of the line of sight. It was first applied in fighter aircraft; its main purpose is to let the pilot read instrument data without frequently looking down at the dashboard, so that viewing the data does not prevent the pilot from observing the environmental information in the area ahead of the aircraft.
  • an image processing method for a vehicle head-up display device comprising the steps of:
  • Distortion correction processing is performed on the image to be displayed according to the distortion parameter to obtain an image to be displayed after the distortion correction.
  • Before the step of determining the location information of the target within the active range, the image processing method further includes:
  • storing each piece of position information within the active range in association with the distortion parameter corresponding to that position information.
  • The step of storing each piece of position information in association with its corresponding distortion parameter includes:
  • moving the mobile camera within the active range so as to store each piece of position information within the active range and the distortion parameter corresponding to it.
  • The step of determining the imaging position information of the mobile camera within the active range includes:
  • converting first imaging position information according to a preset rule to determine the imaging position information of the mobile camera within the active range.
  • the step of determining the distortion parameter corresponding to the imaging position information includes:
  • storing the image capturing position information in association with the corresponding distortion parameter further includes:
  • the imaging position information and the corresponding distortion parameter are stored in a lookup table.
  • The active range is an area within which the target can view the image to be displayed on a display carrier.
  • Before the step of determining the location information of the target, the method further includes:
  • capturing the image including the target in response to detecting that a target is present within the active range.
  • Detecting whether the target is present within the active range further includes:
  • sensing, by a sensor, whether the target is present within the active range.
  • The sensor includes one or more of an infrared sensor and a gravity sensor.
  • the image processing method further includes:
  • the corrected image to be displayed is displayed on the display carrier by projection.
  • The present disclosure also provides an image processing apparatus, including:
  • a determining module configured to determine location information of a target within an active range;
  • an obtaining module configured to determine, by using a lookup table, a distortion parameter corresponding to the location information; and
  • a processing module configured to perform distortion correction processing on the image to be displayed according to the distortion parameter to obtain a corrected image to be displayed.
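As an illustration only (not the patent's implementation), the three modules above can be sketched as a small Python class; the `detect` and `warp` callables are hypothetical stand-ins for the detection and pixel-warping steps the text leaves unspecified:

```python
class ImageProcessingApparatus:
    """Sketch of the determining / obtaining / processing modules."""

    def __init__(self, lut, detect, warp):
        self.lut = lut        # position information -> distortion parameters
        self.detect = detect  # second image -> target position information
        self.warp = warp      # (image, distortion parameters) -> corrected image

    def process(self, second_image, image_to_display):
        pos = self.detect(second_image)             # determining module
        params = self.lut[pos]                      # obtaining module (lookup table)
        return self.warp(image_to_display, params)  # processing module
```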
  • The present disclosure also provides an image processing apparatus including a processor and a memory storing a computer program, the computer program, when executed by the processor, implementing the steps of the image processing method of any of the technical solutions.
  • A lookup table is stored in the memory, and the lookup table stores, in association, position information within the active range and the corresponding distortion parameters.
  • The present disclosure also provides a vehicle head-up display system including a fixed camera device, a display carrier, and an image processing apparatus as described above.
  • The fixed camera device is configured to capture a second image including the target;
  • the image processing apparatus is configured to determine position information of the target from the second image, determine the distortion parameter corresponding to that position information by table lookup, and perform distortion correction on the image to be displayed according to the distortion parameter to obtain a corrected image to be displayed; and
  • the display carrier is used to display the image to be displayed after distortion correction.
  • The vehicle head-up display system further includes a mobile camera device for acquiring a test image while moving within the active range; the image processing apparatus compares the test image with a reference image to determine the distortion parameter corresponding to the imaging position information at which the mobile camera device acquired the test image.
  • the present disclosure also provides a vehicle comprising: an image processing apparatus as described above; or an on-board head-up display system according to any one of the technical solutions.
  • FIG. 1 is a flow chart of an exemplary embodiment of an image processing method according to the present disclosure
  • FIG. 2 is a schematic view showing the position of the fixed camera device and the position of the driver's head in the vehicle according to the present disclosure
  • FIG. 3 is a flowchart of still another embodiment of an image processing method according to the present disclosure.
  • FIG. 5 shows the active range corresponding to the target in an embodiment of the present disclosure, mainly the range of viewing angles within which the driver can view the display image on the display carrier;
  • FIG. 6 is a schematic diagram of the position of the driver's head within the active range as captured by the fixed camera device according to an embodiment of the present disclosure;
  • FIG. 7 is a schematic structural diagram of an exemplary embodiment of an image processing apparatus according to the present disclosure.
  • FIG. 8 schematically shows a structural block diagram of an image processing apparatus 800 according to an exemplary embodiment of the present disclosure.
  • A head-up display (HUD) was first applied in fighter aircraft. Its main purpose is to spare the pilot from frequently shifting attention to the data in the dashboard, so that reading the instrument data does not keep the pilot from observing the environmental information ahead of the aircraft.
  • The HUD can also be applied to a vehicle.
  • an image processing method for a vehicle head-up display device includes the following steps S100 to S300.
  • S100 Determine location information of the target within the active range.
  • the target 3 has a certain position in the passenger compartment, and the range in which the target 3 moves within the passenger compartment is limited.
  • FIG. 2 also shows the position of the fixed camera device 1 in the vehicle compartment, where the broken line shows the line of sight along which the target 3 views the image to be displayed on the display carrier 4.
  • The image in the upper right corner of FIG. 2 is the image 2, including the target, captured by the fixed camera device 1.
  • The active range is set to include the maximum area over which the target moves and within which the target can view the image to be displayed on the display carrier.
  • The optical center of the fixed camera device is taken as the coordinate origin to establish the fixed-camera coordinate system, and the position information of the target is determined in that coordinate system; that is, the position information is the target's real-time coordinate value in the fixed-camera coordinate system. With the fixed camera device at its mounting position, an image of the target anywhere within the active range can be captured.
  • In an earlier stage, the target is simulated by the mobile camera device moving within the active range, and the imaging position information of the mobile camera device at the different position points, likewise coordinates in the fixed-camera coordinate system, is determined by means of the fixed camera device. The imaging position information of each position point and the distortion parameter of the mobile camera device at that point are stored as a mapping. Because the mobile camera device simulates the target, once the position information of the target is determined, the imaging position information of the corresponding mobile camera device can be found from it, and the corresponding distortion parameter determined from that imaging position information. According to an embodiment of the present disclosure, the target may be the driver's eyes.
  • The process of determining the target's position information is as follows (see FIG. 2): the second image 2 including the target 3 is first captured by the fixed camera device 1, and features of the target 3 are detected in the second image 2. Based on the target's projection onto the imaging plane of the fixed camera device 1 (its position and pixel information in that plane), the known intrinsic parameters of the fixed camera device 1 (such as the focal length), its extrinsic parameters (such as the position at which it is mounted and its positional relationship to the active range), and the conversion relationship between coordinate values in the fixed-camera coordinate system and on the imaging plane, the position information of the target in the fixed-camera coordinate system is determined.
  • The origin of the image coordinate system and the origin of the fixed-camera coordinate system lie on the same straight line (the two coordinate systems share a coordinate axis), the focal length between the imaging plane and the fixed camera device is fixed, and the intrinsic and extrinsic parameters of the fixed camera device are known. It can therefore be determined that the ratio between a point's coordinate value in the fixed-camera coordinate system and the coordinate value of the corresponding point on the imaging plane is fixed (the specific ratio depends on the camera device employed, and the same ratio applies to all points mapped from the fixed-camera coordinate system to the imaging plane).
  • Thus the position information of the target on the imaging plane (its coordinate value in the image coordinate system) can be detected, and that value, converted using the focal length and the fixed ratio, yields the target's coordinate value in the fixed-camera coordinate system, that is, the position information of the target.
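The conversion just described, from a detected point on the imaging plane to a coordinate in the fixed-camera coordinate system, can be sketched as a minimal back-projection. The principal point (cx, cy), pixel focal length f, and known depth along the optical axis are assumptions for illustration, not values from the disclosure:

```python
def pixel_to_camera(u, v, cx, cy, f, depth):
    """Back-project an image-plane point (u, v), given in pixels, to the
    fixed-camera coordinate system, assuming the target lies at a known
    depth Z_C = depth along the optical axis; (cx, cy) is the principal
    point and f the focal length in pixels."""
    x_c = (u - cx) * depth / f
    y_c = (v - cy) * depth / f
    return (x_c, y_c, depth)
```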
  • S200 Determine a distortion parameter corresponding to the location information by using a lookup table.
  • In step S100 the coordinate value of the target in the fixed-camera coordinate system is determined. Since coordinate values in the fixed-camera coordinate system and distortion parameters are stored as a mapping, once the target's coordinate value in the fixed-camera coordinate system has been determined, the distortion parameter can be determined from that coordinate value and used for the subsequent image correction. This saves time in the image distortion processing and makes it convenient to quickly display the distortion-corrected image on the display carrier.
  • When the target is the driver's eyes, the coordinate value of the driver's eyes in the fixed-camera coordinate system is determined in step S100, and the distortion parameter corresponding to that coordinate value is found by table lookup according to the mapping between coordinate values in the fixed-camera coordinate system and distortion parameters; the distortion parameter is then used for the correction of the subsequent image.
  • Because the distortion parameter corresponding to the coordinate value is determined directly by table lookup, the processing time of the entire image display process is reduced; the image to be displayed can be shown on the transparent-window display carrier more quickly, which improves driving safety and thus reduces the traffic accidents that occur while the driver looks at instrument data.
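The table lookup with an error range, as described above, can be sketched as a nearest-neighbour search over the stored positions; the tolerance `tol` is an assumed parameter, not a value from the disclosure:

```python
def nearest_distortion(lut, pos, tol=0.05):
    """Return the distortion parameters stored for the position closest to
    `pos`, or None if no stored position lies within `tol` of it.
    `lut` maps (x, y, z) position tuples to (k1, k2, k3, p1, p2) tuples."""
    best, best_d2 = None, tol * tol
    for stored, params in lut.items():
        d2 = sum((a - b) ** 2 for a, b in zip(stored, pos))
        if d2 <= best_d2:
            best, best_d2 = params, d2
    return best
```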
  • S300 Perform distortion correction processing on the image to be displayed according to the distortion parameter to obtain an image to be displayed after distortion correction.
  • After the distortion parameter for the image to be displayed has been determined through steps S100 and S200, distortion correction is performed on the image to be displayed; the distortion parameter operates on two-dimensional coordinate values.
  • In the distortion correction process, once the correcting distortion parameter has been determined, each pixel in the image to be displayed is moved accordingly. The image to be displayed corresponds to the imaging plane of the mobile camera device, on which a coordinate system is likewise established; the uncorrected image to be displayed lies in this plane, and each pixel is moved within the imaging plane (image coordinate system) of the mobile camera device according to the value of the distortion parameter. In this process, a high-order analytical model, an interpolation algorithm, an iterative algorithm, or the like can be used to determine the distortion parameters employed for correction.
  • The high-order analytical model is as follows:
  • x_p = x_d (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x_d y_d + p_2 (r^2 + 2 x_d^2)
  • y_p = y_d (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y_d^2) + 2 p_2 x_d y_d
  • where k_1, k_2, and k_3 respectively represent the distortion parameters of the first-, second-, and third-order radial distortion of the mobile camera device; p_1 and p_2 represent the distortion parameters of its tangential distortion; (x_d, y_d) represents the coordinate value, in the imaging plane of the mobile camera device, of a point of the image to be displayed before distortion correction; (x_p, y_p) represents the coordinate value of the same point after distortion correction; and r represents the distance between the point with coordinate value (x_d, y_d) and the center point of the imaging plane of the mobile camera device.
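A minimal sketch of applying such a radial/tangential model to a single point, written in the standard form consistent with the parameter definitions above (not necessarily the exact expression used in the disclosure):

```python
def correct_point(xd, yd, k1, k2, k3, p1, p2):
    """Map a point (xd, yd) of the uncorrected image, in the imaging plane
    of the mobile camera device, to its position (xp, yp) after applying
    first- to third-order radial distortion and tangential distortion."""
    r2 = xd * xd + yd * yd  # squared distance to the imaging-plane centre
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xp = xd * radial + 2 * p1 * xd * yd + p2 * (r2 + 2 * xd * xd)
    yp = yd * radial + p1 * (r2 + 2 * yd * yd) + 2 * p2 * xd * yd
    return xp, yp
```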
  • The mobile camera device is mainly used to simulate the position of the driver's eyes in the passenger compartment; its distortion parameters are determined by a calibration method and a distortion model, and the device is no longer used after the simulation is complete.
  • The camera device to which the distortion parameters correspond is thus the mobile camera device that simulates the target.
  • the image to be displayed after the distortion correction is displayed on the display carrier.
  • the display carrier is a transparent window on the vehicle, and may be a windshield mounted in front of the driver's head.
  • The display carrier may also be another terminal device used for display, or particles such as water molecules in the air. During the process of displaying the image to be displayed, the position on the display carrier at which it is displayed is fixed.
  • the display method includes a projection method, and the projection method has a high color reproduction degree and has a good visual effect, thereby making the displayed image more conformable to the real object.
  • When the target is the driver's eyes and the display carrier is a window of the vehicle, the image to be displayed is normalized, for example by smoothing, enhancement, or grayscale processing, and the normalized image is projected onto the transparent window of the vehicle at a position corresponding to the driver's eyes, thereby realizing the head-up display function, so that the driver can notice both the environmental information and the vehicle information while driving.
  • A process of performing other normalization processing on the image to be displayed may further be included.
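One of the normalization steps mentioned above (grayscale processing) can be sketched as follows; the ITU-R BT.601 luma weights are an assumed choice, since the disclosure does not specify one:

```python
def to_gray(rgb_pixels):
    """Convert a list of (r, g, b) pixels to grayscale values using the
    ITU-R BT.601 weighting 0.299 R + 0.587 G + 0.114 B."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]
```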
  • Before the step of determining the location information of the target, the method includes:
  • storing each piece of position information within the active range in association with the distortion parameter corresponding to that position information.
  • This includes steps S102 to S104:
  • S102 Determine imaging position information of the mobile camera device in the active range
  • S103 Determine a distortion parameter corresponding to the imaging position information, and store the imaging position information in association with the corresponding distortion parameter;
  • S104 The mobile camera device moves within the active range so as to store each piece of position information in the active range and the distortion parameter corresponding to it.
  • Storing the imaging position information in association with the corresponding distortion parameter may further include: storing the imaging position information and the corresponding distortion parameter in the form of a lookup table.
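Steps S102 to S104 can be sketched as a sweep over sampled positions of the active range; `calibrate` is a hypothetical callback standing in for the test-image/reference-image comparison described in the text:

```python
def build_lut(positions, calibrate):
    """Move the mobile camera device over sampled positions of the active
    range (S104); at each position determine its distortion parameters
    (S102/S103) and store position -> parameters in a lookup table."""
    lut = {}
    for pos in positions:
        lut[pos] = calibrate(pos)  # store in association (S103)
    return lut
```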
  • The mobile camera device is used to simulate the movement of the target within the passenger compartment. Because the driver sits in the driving position, the range over which the eyes or head move while observing the environment ahead of the vehicle is limited; there is a relatively fixed active range, as shown in FIGS. 4 and 5. Accordingly, when the mobile camera device simulates the target, it moves within this active range. As described above, the mobile camera device has different distortion parameters at different positions; therefore the same fixed camera device that later determines the target's position information is used to photograph the mobile camera device, and the imaging position information of the mobile camera device within the active range is determined from the images captured by the fixed camera device.
  • The specific process is the same as the process of determining the target's position information described above and is not repeated here.
  • The distortion parameter corresponding to each piece of imaging position information is determined, and the distortion parameter and the imaging position information of the corresponding mobile camera device are stored in association (for example, in a lookup table) to form the target's distortion parameters. Later, when the position information of the target has been determined, the imaging position information that is identical to it, or within an error range of it, can be searched for, and the distortion parameter found for that imaging position information by table lookup is used as the target's distortion parameter for correcting the distortion of the displayed image.
  • The distortion parameters are determined as described above; Zhang Zhengyou's camera calibration model, the Tsai distortion model, a model including radial and tangential distortion, a nonlinear mathematical distortion model, a linear mathematical distortion model, or the like can also be used, and in each of these models the parameters are determined from the deviation between the position information of the test image and that of the reference image on the imaging plane of the mobile camera device.
  • The distortion parameters and the position-point information of the mobile camera device are stored as a mapping (for example, in a lookup table) in a database, and the database may reside in a local device, such as a micro control unit.
  • The database can also be a cloud database and/or a removable storage medium connected to the local device.
  • The mapping may also be stored in another mobile storage medium connected to the local database, such as a mobile terminal (a mobile phone, a tablet, or the like), or in a cloud database, where the cloud database may be connected to the local database directly or indirectly through other terminal devices.
  • It can also be stored in any one or more of the local database, the cloud database, and the mobile storage medium.
  • The step of determining the imaging position information of the mobile camera device within the active range includes the following step:
  • converting first imaging position information to determine the imaging position information of the mobile camera device within the active range, the conversion being performed on the first imaging position information of a preset image according to a preset rule.
  • The preset rule includes one of a homogeneous-equation model, a pinhole imaging model, and a SIFT operation rule.
  • The mobile camera device is used to simulate the target moving within the target's active range; the imaging position information of the mobile camera device is therefore equivalent to the later position information of the target and serves to determine the target's distortion parameters at the different positions. In the later stage, whatever position the target occupies, the corresponding distortion parameter can be called up and used to correct the image to be displayed, so that the image to be displayed on the display carrier differs little between viewing positions, and the driver can better judge the current vehicle information.
  • Therefore, as described above, in the earlier stage the moving image (first image) of the mobile camera device within the active range is acquired by the fixed camera device; that is, while the mobile camera device is at one position, the fixed camera device captures at least one frame of the moving image. The moving image is parsed to determine the position information of the mobile camera device within it, specifically its imaging position on the imaging plane of the fixed camera device. The coordinate value of the mobile camera device on that imaging plane (its coordinate value in the image coordinate system) is determined as described above and converted, according to the ratio between a point's coordinate value in the fixed-camera coordinate system and the corresponding point's coordinate value on the imaging plane, into the coordinate value of the mobile camera device in the fixed-camera coordinate system, which is taken as the imaging position information of the mobile camera device moving within the active range.
  • The ratio between a point's coordinate value in the fixed-camera coordinate system and the coordinate value of the corresponding point on the imaging plane is determined on the basis of a preset rule, which is described in detail below; determining it this way is more accurate and the error is small. In the embodiment of the present disclosure the ratio is determined on the basis of the homogeneous equation.
  • the homogeneous equation is expressed as a matrix:
  • (x, y) is the coordinate value of a certain point in space on the imaging plane of the fixed camera
  • (X C , Y C , Z C ) is the coordinate value of a certain point in space in the coordinate system of the fixed camera
  • f is a fixed camera
  • is a constant factor, according to which the coordinate value of the mobile camera in the coordinate system of the fixed camera can be determined.
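A minimal Python sketch of the projection implied by this matrix and its inverse, assuming the depth Z_C of the point along the camera's optical axis is known (here the constant factor ρ equals Z_C); the function names are illustrative, not part of the original disclosure:

```python
def camera_to_image(x_c, y_c, z_c, f):
    """Forward pinhole projection: a point (X_C, Y_C, Z_C) in the fixed
    camera's coordinate system lands at (f*X_C/Z_C, f*Y_C/Z_C) on the
    imaging plane, as given by the homogeneous matrix with rho = Z_C."""
    return f * x_c / z_c, f * y_c / z_c

def image_to_camera(x, y, f, z_c):
    """Inverse conversion used in the text: recover the camera-frame
    coordinates of a point from its imaging-plane coordinates, assuming
    its depth z_c along the optical axis is known."""
    return x * z_c / f, y * z_c / f, z_c
```

The inverse step is only well defined once the depth is supplied, which is why the text emphasises the fixed proportional relationship between the two coordinate systems.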
  • The pinhole imaging model is based on the principle that light travels in straight lines: a point in space, the optical centre of the camera, and the point's image on the camera's imaging plane lie on one straight line. Light from the object passes through the optical centre of the camera and then forms an image on the imaging plane. If the distance between the plane of the object and the plane of the camera is assumed to be d, the actual height of the object is H, the height on the imaging plane is h, and the focal length is f, then:

    f / d = h / H

  • The basis of the model is that f, d and H are known, and the distance between the object and the imaging plane is obtained. From this proportional relationship between the distances, and then from the equal ratios of similar triangles, once the coordinates of the object on the imaging plane are determined, the two-dimensional position information of the object in the camera coordinate system can be determined from the ratio, and adding H to the coordinates gives the position information in the camera coordinate system. In the embodiments of the present disclosure, the object is replaced by the moving camera or the target object, and the camera is replaced by the fixed camera.
  • SIFT is based on feature-point extraction: it determines feature points of corresponding features in the target from at least two frames of images. SIFT features are not only scale-invariant; good detection results are still obtained when the rotation angle, image brightness or shooting angle changes. The SIFT feature-point extraction steps include: 1. since the Gaussian convolution kernel is the only linear kernel that realises scale transformation, convolve the image with difference-of-Gaussian kernels at different scales to generate the difference-of-Gaussian scale space, i.e. the image pyramid; 2. detect the extreme points of the DOG scale space to determine a feature point of the image at that scale; 3. use an approximate Harris corner detector, fitting a three-dimensional quadratic function, to locate the position and scale of each keypoint precisely (to sub-pixel accuracy), while removing low-contrast keypoints and unstable edge-response points, which strengthens matching stability and noise resistance; 4. compute an orientation for each detected feature point and carry out further computation along that direction, assigning each keypoint a direction parameter from the gradient-direction distribution of its neighbourhood pixels, so that the operator becomes rotation-invariant; 5. rotate the coordinate axes to the keypoint's direction to ensure rotation invariance, obtain each pixel's gradient magnitude and gradient direction by formula (the arrow direction representing the pixel's gradient direction and the arrow length the gradient magnitude), and weight them with a Gaussian window; 6. match the generated descriptors according to SIFT: the descriptors at every scale in the two images are compared, and two feature points whose 128-dimensional descriptors match are taken as a matched pair, i.e. feature points in image data of different frames are matched, where the matching may be between feature points in image data of adjacent frames or of frames some interval apart.
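A pure-Python sketch of the descriptor-matching step above; real SIFT descriptors are 128-dimensional, but the brute-force nearest-neighbour matcher below works for any dimension, and the distance threshold is an illustrative assumption (a production system would typically use an optimised matcher such as FLANN):

```python
import math

def match_descriptors(desc_a, desc_b, max_dist=0.7):
    """Brute-force nearest-neighbour matching of SIFT-style descriptors
    (sequences of floats, all the same length). Returns index pairs
    (i, j) where descriptor i of desc_a matched descriptor j of desc_b."""
    matches = []
    for i, da in enumerate(desc_a):
        best_j, best_d = -1, float("inf")
        for j, db in enumerate(desc_b):
            d = math.dist(da, db)  # Euclidean distance between descriptors
            if d < best_d:
                best_j, best_d = j, d
        if best_d < max_dist:      # accept only sufficiently close matches
            matches.append((i, best_j))
    return matches
```

For example, with toy 2-dimensional descriptors, `match_descriptors([(0.0, 0.0), (1.0, 1.0)], [(0.1, 0.0), (5.0, 5.0)], max_dist=0.5)` matches only the first pair.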
  • On this basis, the RANSAC algorithm is combined with the above to determine the coordinate value of the target in the fixed camera's coordinate system. The RANSAC algorithm steps include: 1. randomly draw 4 sample data from the data set (the four samples not collinear) and compute the transformation matrix H, recorded as model M; 2. compute the projection error of every datum in the data set against model M, and if the error is below the threshold, add the datum to the inlier set I; 3. if the current inlier set has more elements than the optimal inlier set, update it and update the iteration count k; 4. if the iteration count exceeds k, exit; otherwise increment the iteration count by 1 and repeat the steps above. This yields the correctly matched feature points. From the changes of the feature points across the image data, the motion parameters are estimated, i.e. where the feature points will lie in the next frame or next few frames, and the correctly matched feature points of all frames are unified into the aforementioned coordinate system to determine the position information of the target.
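The RANSAC loop described above can be sketched in Python as a generic sample-fit-score iteration; for brevity a two-point line model stands in for the four-point homography H, and the threshold and iteration count are illustrative assumptions:

```python
import random

def ransac(data, fit, error, n_sample, threshold, max_iter):
    """Generic form of the four RANSAC steps: sample, fit a model,
    collect inliers, and keep the model with the largest inlier set."""
    best_model, best_inliers = None, []
    for _ in range(max_iter):
        sample = random.sample(data, n_sample)   # step 1: random sample
        model = fit(sample)                      # fit model M
        inliers = [p for p in data
                   if error(model, p) < threshold]  # step 2: inlier set I
        if len(inliers) > len(best_inliers):     # step 3: keep the best
            best_model, best_inliers = model, inliers
    return best_model, best_inliers              # step 4: exit after max_iter

def fit_line(pts):
    # Two-point line fit standing in for the 4-point homography H.
    (x1, y1), (x2, y2) = pts
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def line_error(model, pt):
    slope, intercept = model
    return abs(pt[1] - (slope * pt[0] + intercept))
```

Given ten collinear points and one outlier, the loop recovers the line through the inliers and rejects the outlier.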
  • Of course, this process is equally used to determine the coordinate value of the target object in the fixed camera's coordinate system from images including the target captured by the fixed camera.
  • the step of determining the distortion parameter corresponding to the imaging position information includes the following steps:
  • The moving camera has different distortion parameters at different positions. Therefore, to obtain the distortion parameter of each point in the range of motion, whenever the moving camera moves to a position, that position is the imaging position of the target the moving camera simulates. The moving camera captures at least one frame of a test image at that imaging position; the test image is what the moving camera, simulating the target, captures when viewing the object at that position (i.e. the display carrier, or the position on the display carrier where the image to be displayed is shown). The test image is compared with a reference image to determine the deviation of each pixel in the test image from the corresponding pixel in the reference image; this deviation is the distortion parameter in the embodiments disclosed in the present application.
  • Specifically, the object photographed in the test image and the reference image is a checkerboard. The reference image is the image generated when the driver's eyes directly face the display carrier (for example, the centre of the display carrier), while the test image is the image the driver sees when looking at the display carrier without looking straight at the checkerboard on it. Both the test image and the reference image are captured when the moving camera is used to simulate the target viewing the checkerboard shown on the display carrier. The objects in the reference image are the same as the objects in the test image, and the positions of the objects correspond one to one, so that the test image can be contrasted with the reference image.
  • A coordinate system (the image coordinate system) is established on the imaging plane of the moving camera. Since the plane containing the camera's optical centre is parallel to the imaging plane throughout this operation, the conventional origin on the imaging plane is the point where the optical centre projects perpendicularly onto the imaging plane, so that the same reference point, i.e. the same image-plane origin, is used throughout the process of determining the distortion parameter.
  • The obtained distortion parameter may be refined by an iterative algorithm, high-order analysis or an interpolation algorithm to determine the specific distortion parameter. For high-order analysis, such as the high-order analytical model described above, several pairs of corresponding coordinate values on the test image and the reference image are acquired and substituted into the high-order analytical model to determine the distortion parameter. When comparing, it is mainly the pixels of the one-to-one corresponding features of the same object in the test image and the reference image that are compared.
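As a hedged illustration of such a high-order model, a standard radial/tangential (Brown-style) correction consistent with the parameters k1, k2, k3, p1, p2 named in this disclosure can be sketched in Python; the exact model used in practice may differ:

```python
def correct_point(x_d, y_d, k1, k2, k3, p1, p2):
    """Map a point (x_d, y_d) on the imaging plane, before correction,
    to its corrected position (x_p, y_p). k1..k3 are the radial terms,
    p1 and p2 the tangential terms; coordinates are taken relative to
    the centre of the imaging plane, so r^2 = x_d^2 + y_d^2."""
    r2 = x_d * x_d + y_d * y_d
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_p = x_d * radial + 2 * p1 * x_d * y_d + p2 * (r2 + 2 * x_d * x_d)
    y_p = y_d * radial + p1 * (r2 + 2 * y_d * y_d) + 2 * p2 * x_d * y_d
    return x_p, y_p
```

With all five parameters zero the mapping is the identity; a small positive k1 pushes points radially outward in proportion to r^2.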
  • The stored data thus comprise multiple sets of imaging position information together with the distortion parameter corresponding to each item of imaging position information. So that the image to be displayed on the display carrier differs little at every viewing position, while the moving camera simulates the target, the fixed camera captures at least one frame of the moving camera at each different position in the range of motion, the position information of the moving camera within the range is determined, and the distortion parameter corresponding to the moving camera at each position is determined, so that the position information of every position has a corresponding distortion parameter.
  • associating the image capturing position information with the corresponding distortion parameter further includes:
  • the imaging position information and the corresponding distortion parameter are stored in a lookup table.
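One way the lookup-table query might work is a nearest-neighbour match between the detected position and the sampled camera positions; a minimal Python sketch, in which the tuple keys and the nearest-neighbour policy are assumptions for illustration:

```python
import math

def query_lut(lut, position):
    """Return the distortion parameters stored for the sampled camera
    position nearest to the detected position. Keys of `lut` are
    3-tuples of coordinates in the fixed camera's coordinate system."""
    nearest = min(lut, key=lambda p: math.dist(p, position))
    return lut[nearest]
```

This mirrors the text's point that a detected eye position identical to, or within an error range of, a stored imaging position retrieves that position's distortion parameter.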
  • Before determining the position information of the target, the method specifically includes the following steps: detecting whether a target object is present within the range of motion; and, in response to detecting that a target object is present within the range of motion, capturing a second image that includes the target. Detecting whether a target object is present within the range of motion may further include sensing, by a sensor, whether a target object is present within the range.
  • the sensor includes one or more of an infrared sensor, a gravity sensor.
  • The present disclosure adopts tracking and positioning by the fixed camera: the fixed camera captures at least one frame of an image including the target within the range of motion, and the position information of the target in the fixed camera's coordinate system is determined from the image. Since the present disclosure is mainly used on a vehicle, the image to be displayed mainly comprises the data shown by the meters in the vehicle instrument panel; these data are provided chiefly for the driver to view, so that the driver can judge the current driving condition of the vehicle and drive better.
  • Where the target is the driver's eyes, whether the currently captured second image includes the driver's eyes is determined by contour recognition, iris recognition, feature recognition and the like. Contour recognition checks whether the second image contains a contour corresponding to the contour of an eye and whether the contour's distance from the camera lies within the preset range between the driver's eyes and the fixed camera. For iris recognition, the iris is the annular portion between the black pupil and the white sclera, containing many detailed features of interlaced spots, filaments, crowns, stripes, crypts and the like. Feature recognition, like the two preceding methods, determines whether the driver's eyes are present in the captured second image according to preset identification features and the distance between those features and the camera. Whether the driver is viewing the display carrier is further determined from other contour features of the face, such as the face outline, mouth, nose, eyebrows and/or ears, together with the eye contour features: it is determined whether the captured second image includes one or more of the features face, mouth, nose, eyebrows, ears, eyes, etc., and further whether the size of each feature satisfies a preset feature ratio and whether the positional relationships between the features satisfy a preset positional relationship; once one or more of these judgment conditions are satisfied, the target object is determined in the second image captured by the fixed camera. The stored head-feature contour information, or head-feature contour information set by the user, may include the aforementioned judgment conditions.
  • As described above, the target object may be the driver's eyes, or of course another object set by the user, including a target formed from one or more of the face, mouth, nose, eyebrows, ears, eyes and so on as described above.
  • The camera 1 captures an image including the target. Specifically, when a target is detected within the range of motion, the camera 1 captures the image including the target (as shown in FIG. 6), so that subsequent steps can determine the position information of the target from the image. The second image including the target and the first image are captured by the same fixed camera.
  • The method specifically further includes: displaying the distortion-corrected image to be displayed on the display carrier by projection. Displaying by projection reproduces chromaticity with high fidelity, so the display effect is better and the displayed image is truer to the real object. As noted above, the image to be displayed is mainly formed from the data shown by the meters in the vehicle instrument panel. During driving, the dashboard shows the vehicle's speed and remaining energy, and in different situations the speed dial, the remaining-energy dial and so on display different colours; projection keeps the colours the driver sees in the displayed image close to those of the actual speed dial and remaining-energy dial. The image to be displayed is shown at a position on the display carrier that is set in advance (for example, a preset position): after the image to be displayed has undergone image processing such as distortion correction, it is displayed at the corresponding position on the display carrier. The preset position corresponds to the driver's eyes, so that when the driver looks up and views the display carrier, the image to be displayed projected onto the display carrier can be seen, realising the head-up display function.
  • Specifically, the target object is the driver's eyes, the display carrier is the vehicle's transparent window, and the image to be displayed is the data shown by the meters in the vehicle instrument panel. In the distortion-processing procedure, the distortion parameter is determined from the coordinate value of the driver's eyes in the fixed camera's coordinate system, the image to be displayed is processed according to the distortion-processing procedure described above, and the distortion-corrected image is projected onto the preset position of the vehicle's transparent window, so that the driver can see the data of the vehicle instrument panel while looking up, without lowering the head.
  • The present disclosure also provides an image processing apparatus, which includes a determining module 10, an acquiring module 20 and a processing module 30:
  • the determining module 10 is configured to determine position information of the target within the range of motion;
  • the acquiring module 20 is configured to determine, by table lookup, the distortion parameter corresponding to the position information; and
  • the processing module 30 is configured to perform distortion correction on the image to be displayed according to the distortion parameter to obtain the distortion-corrected image to be displayed.
  • The present disclosure also provides an image processing apparatus including a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the image processing method according to any one of the technical solutions.
  • A lookup table is further stored in the memory, the lookup table storing in association the position information within the range of motion and the corresponding distortion parameters.
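The interaction of the three modules can be sketched as a simple pipeline; the callables below are placeholders standing in for the determining module 10, the acquiring module 20 and the processing module 30, and are not part of the original disclosure:

```python
def process_frame(second_image, locate, lookup, correct, to_display):
    """Wire the three modules together: locate the target in the
    captured frame (determining module), look up the distortion
    parameters for that position (acquiring module), and apply the
    correction to the image to be displayed (processing module)."""
    position = locate(second_image)       # determining module 10
    params = lookup(position)             # acquiring module 20
    return correct(to_display, params)    # processing module 30
```

For example, substituting trivial stand-ins for the three callables shows the data flow from captured frame to corrected HUD image.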
  • the present disclosure also provides a vehicle head-up display system including a fixed camera device, a display carrier, and the image processing device.
  • the fixed imaging device is configured to collect a second image that includes the target object
  • the image processing device is configured to determine the position information of the target from the second image, determine by table lookup the distortion parameter corresponding to the position information, and perform distortion correction on the image to be displayed according to the distortion parameter to obtain the distortion-corrected image to be displayed; and
  • the display carrier is configured to display the distortion-corrected image to be displayed.
  • The fixed camera can also be used to capture a first image including the moving camera, so that the image processing device determines the imaging position information of the moving camera from the first image. The vehicle head-up display system further includes a moving camera for capturing a test image while moving within the range of motion; the image processing device compares the test image with a reference image to determine the distortion parameter corresponding to the imaging position information at which the moving camera captured the test image.
  • the present disclosure also provides a vehicle comprising: the image processing device; or the on-board head-up display system of any one of the technical solutions.
  • In summary, the present disclosure provides an image processing method and apparatus, a vehicle head-up display system and a vehicle, which capture an image including the driver's eyes with a camera (e.g. a monocular or binocular camera) and determine from the image the coordinate value (position information) of the driver's eyes in the fixed camera's coordinate system; this is compared with the position-point information of the moving camera in the database to determine the distortion parameter corresponding to the position information of the driver's eyes. In an early stage, the moving camera is used to simulate the movement of the target within the range of motion (FIG. 4). FIG. 4 shows the divided movement area: the squares drawn in the figure are virtual, and the edges of the grid are labelled A, B, C, ... and 1, 2, 3, .... The position information of the actual moving camera and the position information of the target are three-dimensional coordinate values in the fixed camera's coordinate system, i.e. the fixed camera's coordinate system is three-dimensional. The range of the driver's viewing angle within the movement area is preferably the range of viewing angles when the driver looks directly at the display carrier together with the range of viewing angles when the driver's head moves within the effective range.
  • FIG. 6 is the image on the fixed camera when the driver's eyes view the display carrier and corresponds to the range of motion; that is, FIG. 6 corresponds to the driver's eyes within the corresponding range of motion in FIG. 5. The coordinate values of points on the fixed camera's imaging plane can be converted, according to the homogeneous equation, into coordinates in the fixed camera's coordinate system, thereby determining the coordinate values of the moving camera, or of the driver's eyes, in that coordinate system. The test picture is compared with the pixels of the same features in the reference picture to determine the deviation (i.e. the distortion parameter) needed to correct the test picture to the reference picture; the distortion parameter is then associated with the position-point information of the moving camera (its coordinate value in the fixed camera's coordinate system) and the two are stored in the database as a mapping. In the later stage, the distortion parameter is called up according to the position-point information of the moving camera and the stored correspondence and used for distortion processing (distortion correction) of the image to be displayed; the image is also normalised, and the normalised, distortion-corrected image is then displayed on the transparent window by projection, the display position of the image on the transparent window being fixed. This realises the application of the HUD on a vehicle; the whole process is fast and simple, and projection display has high colour fidelity and a good viewing effect.
  • the image to be displayed is an image corresponding to data in the dashboard of the vehicle, or an image including real-time data of the dashboard of the vehicle and the dashboard.
  • the image pickup apparatus 1 is disposed at the top of the vehicle in front of the driver's head to facilitate capturing an image including the eyes of the driver.
  • the transparent window is preferably a windshield in front of the vehicle.
  • the display carrier may be a transparent window of a vehicle, water molecules in the air, or the like.
  • the image to be displayed can be displayed on the display carrier normally and with high quality, and the driver can normally view the image to be displayed in different viewing positions in the window region, thereby solving the problem of poor display effect.
  • Storing the distortion parameters and the corresponding position information in the database as associations chiefly means that the range of motion is processed in advance and the results stored in association, which makes it convenient, once the position of the target (for example, the eyes) in the fixed camera's coordinate system is determined, to determine the corresponding distortion parameter; the process of determining the distortion parameter is simplified and the time it takes shortened, which facilitates the application of head-up display technology.
  • The target is tracked by the camera: the camera captures an image of the target and the position information of the target is located. The camera may be an existing in-vehicle camera, so the head-up display of the vehicle is realised without adding components inside the vehicle, saving accommodation space in the vehicle interior.
  • the distortion parameter is stored in advance in the micro control unit or the database, which saves the calculation time of the head-up display, and can quickly determine the position of the image to be displayed.
  • each functional unit in various embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.
  • FIG. 8 schematically shows a structural block diagram of an image processing apparatus 800 according to an exemplary embodiment of the present disclosure.
  • Image processing apparatus 800 may be used to perform the method described with reference to FIG. 1 or the method described with reference to FIG.
  • For the sake of brevity, only the schematic structure of the image processing apparatus according to an exemplary embodiment of the present disclosure is described here; details already covered in the methods described above with reference to FIG. 1 or FIG. 3 are omitted.
  • As shown in FIG. 8, the image processing apparatus 800 includes a processing unit or processor 801, which may be a single unit or a combination of multiple units for performing the various operations, and a memory 802 storing computer-executable instructions which, when executed by the processor 801, cause the processor 801 to perform the steps of the above method according to the present disclosure.


Abstract

The present disclosure provides an image processing method for a vehicle-mounted head-up display device, including the following steps: determining position information of a target object within a range of motion; determining, by table lookup, a distortion parameter corresponding to the position information; and performing distortion correction on an image to be displayed according to the distortion parameter to obtain a distortion-corrected image to be displayed.

Description

Image processing method and apparatus, vehicle-mounted head-up display system and vehicle
Cross-reference to related applications
The present disclosure claims priority to the application entitled "Image processing method and apparatus, vehicle-mounted head-up display system and vehicle", application No. 201810032426.3, filed with the China National Intellectual Property Administration on January 12, 2018, which is hereby incorporated by reference in its entirety.
Technical field
The present disclosure relates to the fields of image processing, positioning and vehicles, and in particular to an image processing method and apparatus, a vehicle-mounted head-up display system and a vehicle.
Background
A head-up display (HUD) can present important information on a piece of transparent glass ahead of the line of sight. It was first applied on fighter aircraft, mainly so that pilots would not need to lower their heads frequently to read the data on the instrument panel, thereby avoiding the situation in which a pilot reading the instrument data cannot observe the environment ahead of the aircraft.
Summary
According to a first aspect of the present disclosure, an image processing method for a vehicle-mounted head-up display device is provided, including the following steps:
determining position information of a target object within a range of motion;
determining, by table lookup, a distortion parameter corresponding to the position information; and
performing distortion correction on an image to be displayed according to the distortion parameter to obtain a distortion-corrected image to be displayed.
According to an embodiment of the present disclosure, before the step of determining position information of a target object within a range of motion, the image processing method further includes:
storing, in association, each item of position information within the range of motion and the distortion parameter corresponding to it.
According to an embodiment of the present disclosure, the step of storing in association each item of position information within the range of motion and the corresponding distortion parameter includes:
determining imaging position information of a moving camera within the range of motion;
determining the distortion parameter corresponding to the imaging position information, and storing the imaging position information in association with the corresponding distortion parameter; and
moving the moving camera within the range of motion, so that each item of position information within the range and the distortion parameter corresponding to it are stored in association.
According to an embodiment of the present disclosure, the step of determining the imaging position information of the moving camera within the range of motion includes:
capturing, by a fixed camera, a first image of the moving camera within the range of motion;
parsing the first image to determine imaging position information of the moving camera on the imaging plane of the fixed camera in the first image; and
converting the imaging position information to determine the imaging position information of the moving camera within the range of motion.
According to an embodiment of the present disclosure, the step of determining the distortion parameter corresponding to the imaging position information includes:
acquiring a test image by the moving camera; and
comparing the test image with a reference image to determine the distortion parameter corresponding to the imaging position information.
According to an embodiment of the present disclosure, storing the imaging position information in association with the corresponding distortion parameter further includes:
storing the imaging position information and the corresponding distortion parameter in a lookup table.
According to an embodiment of the present disclosure, the range of motion is the region in which the target object can see the image to be displayed shown on a display carrier.
According to an embodiment of the present disclosure, before the step of determining position information of the target object, the method specifically includes:
detecting whether a target object is present within the range of motion; and
in response to detecting that a target object is present within the range of motion, capturing the image including the target object.
According to an embodiment of the present disclosure, detecting whether a target object is present within the range of motion further includes:
sensing, by a sensor, whether a target object is present within the range of motion.
According to an embodiment of the present disclosure, the sensor includes one or more of an infrared sensor and a gravity sensor.
According to an embodiment of the present disclosure, the image processing method further includes:
displaying the corrected image to be displayed on a display carrier by projection.
The present disclosure also provides an image processing apparatus, including:
a determining module configured to determine position information of a target object within a range of motion;
an acquiring module configured to determine, by table lookup, a distortion parameter corresponding to the position information; and
a processing module configured to perform distortion correction on an image to be displayed according to the distortion parameter to obtain a corrected image to be displayed.
The present disclosure also provides an image processing apparatus including a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the image processing method of any one of the technical solutions.
According to an embodiment of the present disclosure, a lookup table is stored in the memory, the lookup table storing in association position information within the range of motion and the corresponding distortion parameters.
The present disclosure also provides a vehicle-mounted head-up display system, including a fixed camera, a display carrier and the image processing apparatus described above, wherein
the fixed camera is configured to capture a second image including the target object,
the image processing apparatus is configured to determine position information of the target object from the second image, determine by table lookup the distortion parameter corresponding to the position information, and perform distortion correction on the image to be displayed according to the distortion parameter to obtain a corrected image to be displayed, and
the display carrier is configured to display the distortion-corrected image to be displayed.
According to an embodiment of the present disclosure, the vehicle-mounted head-up display system further includes a moving camera configured to acquire a test image while moving within the range of motion, and the image processing apparatus is configured to compare the test image with a reference image to determine the distortion parameter corresponding to the imaging position information at which the moving camera acquired the test image.
The present disclosure also provides a vehicle, including: the image processing apparatus described above; or the vehicle-mounted head-up display system of any one of the technical solutions.
Brief description of the drawings
The above and/or additional aspects and advantages of the present disclosure will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of a typical embodiment of an image processing method of the present disclosure;
FIG. 2 is a schematic diagram of the position of the fixed camera inside the vehicle and the position of the driver's head according to the present disclosure;
FIG. 3 is a flowchart of a further embodiment of an image processing method of the present disclosure;
FIG. 4 is a diagram of the division of the range-of-motion region in an embodiment of the present disclosure;
FIG. 5 shows the range of motion corresponding to the target object in an embodiment of the present disclosure, chiefly the range of viewing angles within which the driver can see the displayed image on the display carrier;
FIG. 6 is a schematic diagram, captured by the fixed camera, of the position of the driver's head within the range of motion in an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a typical embodiment of an image processing apparatus of the present disclosure; and
FIG. 8 schematically shows a structural block diagram of an image processing apparatus 800 according to an exemplary embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure are described in detail below, examples of which are shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present disclosure, and are not to be construed as limiting it.
Those skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", "said" and "the" used herein may also include the plural. It should be further understood that the word "comprising" used in the specification of the present disclosure means the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. Moreover, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The phrase "and/or" as used herein includes all or any unit and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as herein, are not to be interpreted in an idealised or overly formal sense.
The HUD (head-up display) was first applied on fighter aircraft, mainly so that pilots would not need to lower their heads frequently to read the data on the instrument panel, thereby avoiding the situation in which a pilot reading the instrument data cannot observe the environment ahead of the aircraft.
With the development of the automotive industry and the condition of roads in China, a driver must often concentrate on observing environmental information while driving and therefore cannot look down at the instrument information on the dashboard, which may lead to fast driving or violation of traffic rules; conversely, when the driver attends to the instrument information and cannot observe the environment, a traffic accident may result.
According to embodiments of the present disclosure, the HUD can be applied to a vehicle.
As shown in FIG. 1, an image processing method for a vehicle-mounted head-up display device according to an embodiment of the present disclosure includes the following steps S100 to S300.
S100: determining position information of a target object within a range of motion.
First, referring to FIG. 2, the target object 3 has a definite position in the vehicle cabin, and the range within which the target object 3 moves inside the cabin is limited. FIG. 2 also shows the position of the fixed camera 1 in the cabin, with the dashed lines showing the line of sight along which the target object 3 views the image to be displayed on the display carrier 4. The image in the upper right corner of FIG. 2 is the image 2 including the target object captured by the fixed camera 1. A range of motion is set which contains the largest movement area of the target object; anywhere within this area, the target object can see the image to be displayed shown on the display carrier. To determine the position of the target object more accurately, a fixed-camera coordinate system is established with the optical centre of the fixed camera 1 as the origin, and the position information of the target object in the fixed-camera coordinate system, i.e. its coordinate value in that system, is determined; the position information is a real-time coordinate value, and the fixed camera at its position can capture images of the target object within the range of motion. In the preliminary stage, a moving camera is used within the range of motion to simulate the target object, and the imaging position information of the moving camera at different position points is likewise determined by the fixed camera; this imaging position information also consists of coordinate values in the fixed-camera coordinate system. After the imaging position information of the moving camera at the different position points is determined, the imaging position information of each position point is stored as a mapping together with the distortion parameter of the moving camera at that position point. Since the moving camera simulates the target object, once the position information of the target object is determined, the corresponding imaging position information from the simulation can be found from it, so that the corresponding distortion parameter can be determined later from that imaging position information. It should be noted that, according to embodiments of the present disclosure, the target object may be the driver's eyes.
Specifically, the position information of the target object is determined as follows. Referring to FIG. 2, the fixed camera 1 first captures a second image 2 including the target object 3, and features of the target object 3 are detected in the second image 2. Based on the data of the target object 3 in the imaging plane of the fixed camera 1 (such as its position information and pixel information in the imaging plane), the known intrinsic parameters of the fixed camera 1 (such as the focal length) and extrinsic parameters (such as the position where the fixed camera 1 is mounted and its positional relationship to the range of motion), together with the conversion relationship between coordinate values in the fixed-camera coordinate system and distances on the imaging plane, the position information of the target object in the fixed-camera coordinate system is determined. Specifically, an image coordinate system is established on the imaging plane of the fixed camera 1; its origin and the origin of the fixed-camera coordinate system lie on the same straight line, i.e. the two coordinate systems share one coordinate axis. The focal distance between the imaging plane and the fixed camera is fixed, and the intrinsic and extrinsic parameters of the fixed camera are known, so the coordinate value of a point in the fixed-camera coordinate system and the coordinate value of the corresponding point on the imaging plane stand in a fixed proportion (which depends on the camera used, and applies to all corresponding point pairs between the fixed-camera coordinate system and the imaging plane). When the target object is imaged on the imaging plane, its position information on the imaging plane (its coordinate value in the image coordinate system) can be detected; converting this value, together with the focal length, by the fixed proportion yields the coordinate value of the target object in the fixed-camera coordinate system, i.e. the position information of the target object.
S200: determining, by table lookup, the distortion parameter corresponding to the position information.
As analysed in step S100, the coordinate value of the target object in the fixed-camera coordinate system has been determined, and since the coordinate values of the fixed-camera coordinate system are stored in a mapping with the distortion parameters, once the coordinate value of the target object in that coordinate system is determined, the distortion parameter can be determined from the coordinate value and used in the subsequent image correction, saving the time of distortion processing and allowing the distortion-processed display image to be shown on the display carrier relatively quickly. Specifically, the target object is the driver's eyes; the coordinate value of the driver's eyes in the fixed-camera coordinate system was determined in step S100, and according to the mapping between coordinate values in the fixed-camera coordinate system and the distortion parameters, the distortion parameter corresponding to the coordinate value is determined by table lookup and then used in the subsequent correction of the image. Because the distortion parameter can be determined directly by table lookup from the mapping between coordinate value and distortion parameter, the processing time of the whole image display process is reduced, so that the image to be displayed can be shown fairly quickly on a display carrier such as a transparent window, improving driving safety and reducing traffic accidents caused by the driver looking at instrument data.
S300: performing distortion correction on the image to be displayed according to the distortion parameter to obtain the distortion-corrected image to be displayed.
The distortion parameter of the image to be displayed was determined through steps S100 and S200 above, and distortion correction is performed on the image to be displayed, the distortion parameter being two-dimensional coordinate values. In the correction process, after the correcting distortion parameter is determined, each pixel of the image to be displayed is moved correspondingly according to the distortion parameter. Specifically, the image to be displayed is mapped onto the imaging plane of the moving camera; as with the fixed camera above, a coordinate system is likewise established on the moving camera's imaging plane, and the image before correction is the image obtained when the moving camera views the image to be displayed head-on. During correction, the image to be displayed is shifted correspondingly on the moving camera's imaging plane (image coordinate system) according to the values of the distortion parameter. Of course, to reduce the correction error in this process, the distortion parameter used for correction can also be determined by methods such as high-order analysis, interpolation algorithms and iterative algorithms. A high-order analytical model is, for example:

x_p = x_d (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x_d y_d + p_2 (r^2 + 2 x_d^2)
y_p = y_d (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y_d^2) + 2 p_2 x_d y_d

where k_1, k_2 and k_3 denote the first-, second- and third-order distortion parameters of the moving camera's radial distortion, p_1 and p_2 denote the distortion parameters of its tangential distortion, (x_d, y_d) denotes the coordinate value, in the moving camera's imaging plane, of a point of the image to be displayed before distortion correction, (x_p, y_p) denotes the coordinate value of that point in the moving camera's imaging plane after distortion correction, and r denotes the distance from the point at (x_d, y_d) to the centre of the moving camera's imaging plane. The moving camera is mainly used to simulate the position of the driver's eyes in the cabin; the distortion parameters of this camera are determined by a calibration method and a distortion model, and after the simulation is completed the camera is no longer used. In conjunction with the description below, the camera to which the distortion parameters correspond is the moving camera simulating the target object.
The distortion-corrected image to be displayed is shown on the display carrier. The display carrier is a transparent window of the vehicle, and may be the windshield installed in front of the driver's head; of course, the display carrier may also be another terminal device used for display, or particles such as water molecules in the air. While the image to be displayed is shown on the display carrier, the position at which it is displayed on the display carrier is fixed.
Distortion-correcting the image reduces the visual difference the displayed image presents to the driver when viewed from different angles on the display carrier. After distortion correction, other processing (such as greyscale processing) is applied to the image to produce the fully corrected image, which is then displayed on the display carrier. Display methods include projection, which has high colour fidelity and a good visual effect, making the displayed image truer to the real object. Specifically, as above, the target object is the driver's eyes and the display carrier is a window of the vehicle. After the distortion parameter for displaying the image on the vehicle window is determined, the image to be displayed is normalised (e.g. smoothing, enhancement, greyscale processing) and projected onto the vehicle's transparent window at a position corresponding to the driver's eyes, realising the head-up display function, so that the driver can attend to both environmental information and driving information while driving. In embodiments of the present disclosure, the distortion-correction procedure also includes other normalisation of the image to be displayed; although not described in detail here, it can be understood and implemented by those skilled in the art, its main purpose being to display the image more truly and clearly on the display carrier.
进一步地,在其中一种实施方式中,在所述确定目标物的位置信息的步骤之前,包括:
将活动范围内的各位置信息和与各位置信息对应的畸变参数进行关 联存储。
进一步地,在其中一种实施方式中,如图3所示在所述将活动范围内的各位置信息和与各位置信息对应的畸变参数进行关联存储的步骤中,包括S102至S104:
S102:确定移动摄像装置在所述活动范围内的摄像位置信息;
S103:确定所述摄像位置信息所对应的畸变参数,并将所述摄像位置信息与所对应的畸变参数进行关联存储;
S104:所述移动摄像装置在所述活动范围内移动,以将活动范围内的各位置信息及与各位置信息对应的畸变参数进行关联存储。
将所述摄像位置信息与所对应的畸变参数进行关联存储还可以包括:以查找表的方式存储所述摄像位置信息和所对应的畸变参数。
在本公开的实施例中,采用移动摄像装置模拟目标物在车厢内进行移动,由于驾驶员在驾驶位置上,在观察车辆前方的环境信息时,其眼睛或者头部移动的范围是有限的,具有较为固定的移动范围(即前述的活动范围),如图4和图5,因此,在移动摄像装置模拟目标物时,在活动范围内进行移动。结合前文的说明,移动摄像装置在不同的位置具有不同畸变参数,因此,采用确定目标物的位置信息的固定摄像装置拍摄移动摄像装置,并根据固定摄像装置拍摄的移动摄像装置确定移动摄像装置在活动范围内的摄像位置信息。其具体过程同前文确定目标物的位置信息过程一致,在此不做赘述。又由于移动摄像装置在不同的位置具有不同畸变参数,因此在通过固定摄像装置确定移动摄像装置的摄像位置信息的同时,确定与该摄像位置信息对应的畸变参数,将该畸变参数和与畸变参数对应的移动摄像装置的摄像位置信息进行关联存储(例如,以查找表的方式),形成目标物的畸变参数。从而在后期确定目标物的位置信息时,便能够寻找与该位置信息相同或者在误差范围内的移动摄像装置的摄像位置信息,进而通过查表确定该摄像位置信息对应的畸变参数,作为针对目标物的畸变参数,以便对待显示图像进行畸变矫正。具体的,确定畸变参数的模型如前文所述的模型,同时还可以采用张正友的图像标定模型、Tasi畸变模型、包括径向畸变和切向畸变的模型、非线性的数学畸变模型以及线性的数学 畸变模型等,且前述模型中,均是基于测试图像和基准图像在移动摄像装置成像平面上的位置信息的偏差确定。
优选地,为了能够快捷的调取畸变参数,将畸变参数以及移动摄像装置的位置点信息以映射关系(例如,以查找表的方式)存储在数据库,该数据库可以在本地装置,如微型控制单元中。同样的,所述数据库还能够是云端数据库和/或与所述本地装置相连接的移动存储介质。当然为了使得本地装置能够快速地进行图像处理,也可以存储在与本地数据库相连接的其他移动存储介质中,如存储在移动终端中(如手机,平板等),或者云端数据库中,云端数据库可以与本地数据库直接连接或者通过其他终端设备间接连接,当然也可以同时存储在本地数据库、云端数据库、移动存储介质任意一种或者多种中。
进一步地,所述确定移动摄像装置在所述活动范围内的摄像位置信息的步骤中,包括以下步骤:
通过固定摄像装置采集所述移动摄像装置在所述活动范围内的第一图像;
对所述第一图像进行解析,确定所述第一图像中所述移动摄像装置在所述固定摄像装置的成像平面上的成像位置信息;
对所述成像位置信息进行转换,确定移动摄像装置在所述活动范围内的摄像位置信息。
可以基于预设规则对成像位置进行转换。所述预设规则包括齐次方程模型、小孔成像模型、SIFT运算规则中的一种。
如前文所述，移动摄像装置用于模拟目标物在目标物活动范围内移动，因此，移动摄像装置的摄像位置信息相当于后期目标物的位置信息，以便于确定目标物在不同位置的畸变参数，从而在后期目标物位于不同位置时能够调用对应的畸变参数，对待显示图像进行畸变矫正，使得目标物在不同位置观看到的显示载体上的待显示图像差异较小，以便驾驶员能够更好地确定当前车辆的信息。因此，如前文的说明，在前期的过程中，通过固定摄像装置采集移动摄像装置在活动范围内的移动图像（第一图像），即移动摄像装置在一个位置时，固定摄像装置采集至少一帧移动图像；对所述移动图像进行解析，确定移动图像中移动摄像装置的位置信息，具体为解析出移动摄像装置在固定摄像装置成像平面上的成像位置信息；结合前文说明，确定移动摄像装置在固定摄像装置成像平面上的坐标值（图像坐标系中的坐标值），并根据固定摄像装置坐标系中一点的坐标值与成像平面上对应点的坐标值之间的比例，将该成像位置信息进行转换，进而确定移动摄像装置在固定摄像装置坐标系中的坐标值，作为所述移动摄像装置在活动范围内移动的摄像位置信息。固定摄像装置坐标系中一点的坐标值与成像平面上对应点的坐标值之间的比例基于预设规则确定，具体于后文详述。
具体的，为了在将移动摄像装置在固定摄像装置成像平面上的位置信息转换到固定摄像装置坐标系中的坐标值时更为准确、误差较小，即使固定摄像装置坐标系中一点的坐标值与固定摄像装置成像平面上对应点的坐标值的比例更为准确，在本公开的实施例中基于齐次方程确定该比例。在具体的过程中，将齐次方程表示为矩阵形式：
$$\rho \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$$
(x, y)为空间某一点在固定摄像装置成像平面上的坐标值，(X_C, Y_C, Z_C)为该点在固定摄像装置坐标系中的坐标值，f为固定摄像装置坐标系原点与成像平面上原点之间的距离，即固定摄像装置的焦距，ρ为常数因子。根据此转换矩阵便可以确定移动摄像装置在固定摄像装置坐标系中的坐标值。
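上述齐次方程的矩阵形式对应的坐标换算可以示意如下（仅为按该转换矩阵进行的数值演算示例，函数名 project 为假设）：

```python
def project(point_cam, f):
    """将固定摄像装置坐标系中的点 (Xc, Yc, Zc) 按齐次方程的矩阵形式投影到成像平面 (x, y)。"""
    Xc, Yc, Zc = point_cam
    # 转换矩阵 [[f,0,0,0],[0,f,0,0],[0,0,1,0]] 作用于齐次坐标 (Xc, Yc, Zc, 1)
    u, v, w = f * Xc, f * Yc, Zc   # 常数因子 rho 即第三分量 w = Zc
    return u / w, v / w            # 除以 rho 得到成像平面坐标
```

反向换算（由成像平面坐标与该比例恢复摄像装置坐标系中的坐标）即正文所述的转换过程。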
小孔成像模型基于光沿直线传播的原理，即空间中一点、摄像装置光心及该点在摄像装置成像平面上的像构成一条直线。物体发出的光经过摄像装置的光心，然后成像于成像平面上。假设物体所在平面与摄像装置所在平面的距离为d，物体实际高度为H，在成像平面上的高度为h，摄像装置的焦距为f，则有：
f/d=h/H
该模型的基础是f、H、h已知，据此求得物体与成像平面的距离d；再基于相似三角形的等比例关系，在确定物体在成像平面上的坐标后，便能根据该比例确定物体在摄像装置坐标系中的二维位置信息，并将该距离添加到坐标中，得出在摄像装置坐标系中的位置信息。具体的，在本公开的实施例中，物体替换为移动摄像装置或者目标物，摄像装置替换为固定摄像装置。
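小孔成像模型中由 f/d = h/H 求距离并按相似三角形比例换算坐标的过程，可以示意如下（函数名及参数组织均为本示例的假设）：

```python
def object_distance(f, H, h):
    """由 f/d = h/H 解出物体所在平面到摄像装置的距离 d = f*H/h。"""
    return f * H / h

def camera_coords(x_img, y_img, f, H, h):
    """由成像平面坐标按相似三角形的等比例关系换算到摄像装置坐标系（示意）。"""
    d = object_distance(f, H, h)
    scale = d / f                       # 成像平面坐标到实际坐标的比例
    return x_img * scale, y_img * scale, d   # 将距离 d 作为第三维坐标
```

例如焦距0.05、实际高度2、像高0.01时，距离为10。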
SIFT是基于特征点的提取，根据至少两帧图像确定目标物中相对应特征的特征点。SIFT特征不只具有尺度不变性，即使改变旋转角度、图像亮度或拍摄视角，仍然能够得到好的检测效果。SIFT特征点提取步骤包括：1、高斯卷积核是实现尺度变换的唯一线性核，利用不同尺度的高斯差分核与图像卷积生成高斯差分尺度空间，即图像金字塔；2、检测DoG尺度空间极值点，确定图像在该尺度下的一个特征点；3、使用近似Harris Corner检测器，通过拟合三维二次函数以精确确定关键点的位置和尺度（达到亚像素精度），同时去除低对比度的关键点和不稳定的边缘响应点，以增强匹配稳定性、提高抗噪声能力；4、通过确定的每幅图中的特征点，为每个特征点计算一个方向，依照这个方向做进一步的计算，利用关键点邻域像素的梯度方向分布特性为每个关键点指定方向参数，使算子具备旋转不变性；5、将坐标轴旋转为关键点的方向，以确保旋转不变性；利用公式求得每个像素的梯度幅值与梯度方向，箭头方向代表该像素的梯度方向，箭头长度代表梯度模值，然后用高斯窗口对其进行加权运算；6、根据SIFT将生成的图进行匹配，对两图中各个scale（所有scale）的描述子进行匹配，128维描述子匹配上即可表示两个特征点匹配上了，即将不同帧图像数据中的特征点进行匹配；其中匹配时，可以是将相邻帧的图像数据中的特征点进行匹配，也可以是将间隔帧的图像数据中的特征点进行匹配。在前述的基础上，结合RANSAC算法确定目标物在固定摄像装置坐标系中的坐标值。RANSAC算法步骤包括：1、从数据集中随机抽出4个样本数据（其中任意三个样本不共线），计算出变换矩阵H，记为模型M；2、计算数据集中所有数据与模型M的投影误差，若误差小于阈值，则加入内点集I；3、如果当前内点集元素个数大于最优内点集，则更新最优内点集，同时更新迭代次数k；4、如果迭代次数大于k，则退出；否则迭代次数加1，并重复上述步骤。由此得到匹配正确的特征点，根据特征点在图像数据中的变化，预估出运动参数，即图像数据中的特征点在下一帧或者几帧图像中的位置，并将所有帧中匹配正确的特征点统一到前述的坐标系中，确定目标物的位置信息。当然，该过程也同样适用于通过固定摄像装置拍摄的包括目标物的图像确定目标物在固定摄像装置坐标系中的坐标值。
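上述RANSAC的"随机采样—计算模型—统计内点—更新最优"流程，可以用一个简化的纯Python示例说明（原文针对4点求单应矩阵H，此处为说明流程以两点拟合直线代替，属示意性简化，迭代次数与阈值均为假设值）：

```python
import random

def ransac_line(points, n_iter=200, thresh=0.1, seed=0):
    """RANSAC 流程示意：随机采样 -> 拟合模型 -> 统计内点 -> 更新最优内点集。"""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # 1. 随机抽样（此处为2点）
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)                    # 2. 计算模型 y = a*x + b
        b = y1 - a * x1
        # 3. 计算所有数据与模型的误差，小于阈值者加入内点集
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < thresh]
        if len(inliers) > len(best_inliers):         # 4. 内点更多则更新最优
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

含离群点的数据经此流程即可得到匹配正确（误差在阈值内）的点集。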
进一步地,所述确定所述摄像位置信息所对应的畸变参数的步骤中,包括以下步骤:
通过所述移动摄像装置获取测试图像;
将所述测试图像与基准图像进行对比,确定所述摄像位置信息所对应的畸变参数。
移动摄像装置在不同的位置具有不同的畸变参数。因此，为了得到在活动范围内各个点的畸变参数，在移动摄像装置移动至一位置时，该位置为移动摄像装置模拟目标物的摄像位置，移动摄像装置在该摄像位置拍摄至少一帧测试图像，且测试图像为移动摄像装置模拟目标物在该位置处观看被拍摄物（即，显示载体，或者显示载体上显示待显示图像的位置）时拍摄的图像；将该测试图像与基准图像进行对比，确定测试图像中各像素点与基准图像中各像素点的偏差，该偏差即为本申请公开的实施例中的畸变参数。具体的，例如，测试图像和基准图像中被拍摄的物体为棋盘格，基准图像为驾驶员眼睛正对显示载体（例如，显示载体的中心）观看棋盘格时产生的图像，测试图像为驾驶员眼睛看向显示载体但不是直视显示载体上的棋盘格时看到的图像，且测试图像和基准图像均是采用移动摄像装置模拟目标物观看显示载体上显示的棋盘格时拍摄的图像。为了便于对比，基准图像中的物体与测试图像中的物体相同，且各物体间的位置也是一一对应的，以实现测试图像和基准图像之间的对比。具体的，在进行对比时，对比的是测试图像和基准图像在移动摄像装置成像平面上的偏差，其过程如前文中的说明。移动摄像装置成像平面上建立有坐标系（图像坐标系），由于在运算过程中认为摄像装置光心所在的平面与成像平面平行，常规的成像平面上的原点为光心垂直投影在成像平面上的点，因此在确定畸变参数的过程中，具有相同的基准点，即成像平面上的原点。在对比时，将测试图像和基准图像上的全部或者部分像素点一一比对，将得到的偏差通过迭代算法、高阶解析或者插值算法等确定具体的畸变参数。具体的高阶解析，如前文中的高阶解析模型，在测试图像和基准图像上获取多对对应的坐标值并代入高阶解析模型以确定畸变参数。在将测试图像和基准图像上的全部或者部分像素点一一比对的过程中，主要是将测试图像和基准图像上相同物体的一一对应特征的像素点进行比对。
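将测试图像与基准图像中一一对应的点比对并解出畸变参数的思路，可以用一个仅保留一阶径向项的最小二乘示例说明（简化模型 x_d = x_p·(1 + k1·r²) 以及函数名均为本示例的假设，实际可按前文的高阶解析模型扩展）：

```python
def estimate_k1(ref_pts, test_pts):
    """由基准图像与测试图像中一一对应的点对, 用最小二乘估计一阶径向畸变参数 k1。
    简化模型: x_d = x_p * (1 + k1 * r^2)，(x_p, y_p) 为基准点, (x_d, y_d) 为测试点。"""
    num = den = 0.0
    for (xp, yp), (xd, yd) in zip(ref_pts, test_pts):
        r2 = xp * xp + yp * yp
        # 每个点对贡献两个线性方程: xd - xp = k1 * xp * r2, yd - yp = k1 * yp * r2
        num += (xd - xp) * xp * r2 + (yd - yp) * yp * r2
        den += (xp * r2) ** 2 + (yp * r2) ** 2
    return num / den   # 一元最小二乘解
```

由理想数据生成的偏差可以被精确地解回原畸变参数，可作为流程自检。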
优选地,包括多组所述摄像位置信息及与所述摄像位置信息相对应的所述畸变参数。
为了使目标物在不同的位置具有对应的畸变参数，从而在各个位置看到显示载体上显示的待显示图像的差异较小，在移动摄像装置模拟目标物时，固定摄像装置采集活动范围内移动摄像装置在不同位置的至少一帧移动图像，以确定移动摄像装置在活动范围内的各位置点信息，同时确定移动摄像装置在不同位置对应的畸变参数，使得每一位置均有相对应的畸变参数，从而每一位置对应的位置信息均有相对应的畸变参数。
进一步地,将所述摄像位置信息与所对应的畸变参数进行关联存储还包括:
以查找表的方式存储所述摄像位置信息和所对应的畸变参数。
进一步地,在所述确定目标物在活动范围内的位置信息的步骤之前,具体包括以下步骤:
检测所述活动范围是否存在目标物;
当检测到所述活动范围内存在目标物时,拍摄包括所述目标物的第二图像。
具体为,检测活动范围内是否存在目标物还包括:通过传感器感测所述活动范围内是否存在目标物。根据本公开的实施例,传感器包括红外传感器、重力传感器中的一种或多种。
在检测到活动范围内存在目标物时，通过固定摄像装置拍摄包括目标物的第二图像，确定目标物在活动范围内的位置信息。为了能够确定目标物在固定摄像装置坐标系中的位置信息，本公开采用了固定摄像装置追踪定位的方法，即采用固定摄像装置拍摄至少一帧包括目标物在活动范围内的图像，根据该图像确定目标物在所述固定摄像装置坐标系中的位置信息。由于本公开主要用于车辆上，而所述待显示图像主要包括车辆仪表台中仪表显示的数据，且该数据主要提供给驾驶员观看，使驾驶员能够判断车辆当前的行驶情况，进而更好地进行驾驶。因此，为了更方便驾驶员观看，假定目标物是驾驶员的眼睛，需确定当前拍摄的第二图像内是否包括驾驶员的眼睛：通过轮廓识别、虹膜识别、特征识别等方法，确定当前拍摄的第二图像内是否包括与眼睛轮廓相对应的轮廓，且该轮廓与摄像装置的距离在预设的驾驶员眼睛与固定摄像装置的距离范围内。虹膜是位于黑色瞳孔和白色巩膜之间的圆环状部分，其包含有很多相互交错的斑点、细丝、冠状、条纹、隐窝等细节特征。特征识别与前述两种识别方法类似，根据设置的识别特征以及特征与摄像装置的距离，确定拍摄的第二图像内是否存在驾驶员的眼睛。进一步地，根据面部的其他轮廓特征（如脸、嘴、鼻、眉和/或耳朵等）以及眼部轮廓特征确定驾驶员是否观看显示载体。通过确定拍摄的第二图像内包括一个或者多个如下特征：脸、嘴、鼻、眉、耳朵、眼睛等，进一步判断特征的大小是否满足预设的特征比例、特征之间的位置关系是否满足预设的位置关系等，当满足一个或者多个判断条件后，确定固定摄像装置拍摄的第二图像中的目标物。进一步地，为了方便驾驶员在观察车辆前方的同时能看到所述待显示图像，所述拍摄的第二图像内应同时存在一对眼睛。当然，为了方便进行特征的判断，在预存有驾驶员的头像或者根据用户设定的头部特征轮廓信息时，判断条件可包括前述各项等。
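按上文"同时存在一对眼睛、且与摄像装置的距离在预设范围内"的判断思路，可以示意如下（特征输入格式、函数名与阈值均为本示例的假设，并非实际识别实现）：

```python
def looks_like_driver_eyes(features, expected_dist=(0.4, 1.2)):
    """示意判断：检出特征中是否存在一对眼睛，且其与摄像装置的距离在预设范围内。
    features 为 {特征名称: (x, y, 距摄像装置距离)} 形式的假设输入。"""
    eyes = [v for k, v in features.items() if k.startswith("eye")]
    if len(eyes) != 2:                          # 第二图像内应同时存在一对眼睛
        return False
    lo, hi = expected_dist
    return all(lo <= e[2] <= hi for e in eyes)  # 距离应在预设的范围内
```

实际实现中还可叠加特征比例、特征间位置关系等判断条件。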
在本公开的实施例中,所述目标物可以为驾驶员的眼睛。
根据前文的举例分析,所述目标物可以为驾驶员的眼睛,当然也可以是其他根据用户设定的所述目标物,如前文所述的包括一个或者多个如脸、嘴、鼻、眉、耳朵、眼睛等形成的目标物。
如前文所述，拍摄包括所述目标物的图像之前，具体需要检测在活动范围内是否存在所述目标物。根据本公开的实施例，可以通过例如重力传感器或红外传感器等感测活动范围内是否存在人，以检测活动范围内是否存在目标物。固定摄像装置1在车内的安装示意图如图2所示。在检测出存在目标物后，摄像装置1拍摄包括所述目标物的图像。具体为，在检测出所述活动范围内存在目标物时，所述摄像装置1拍摄包括所述目标物的所述图像（如图6所示），以便于后续步骤能够根据该图像确定所述目标物的位置信息。
优选地,通过相同的固定摄像装置拍摄包括所述目标物的第二图像和采集第一图像。
进一步地,在将所述待显示图像显示于显示载体上的过程中,具体包括:
通过投影的方式将畸变矫正后的所述待显示图像显示在所述显示载体上。
所述待显示图像通过投影的方式显示在所述显示载体上，投影对色度的还原度较高，从而具有较好的观看效果，使显示出的所述待显示图像更贴合实物。具体的，所述待显示图像主要由车辆仪表台中仪表显示的数据形成。由于车辆在行驶过程中，车辆的速度、剩余能量等都会显示在仪表盘中，且对应不同情况，速度仪表盘、剩余能量仪表盘等都会显示不同的颜色，为了使驾驶员看到由速度仪表盘、剩余能量仪表盘等形成的待显示图像时，与实际的速度仪表盘、剩余能量仪表盘等的色差较小，采用投影的方式，进而满足驾驶员的使用习惯等，降低色彩的改变对驾驶员在行驶过程中的影响。进一步地，所述待显示图像显示在所述显示载体上，且在显示载体上显示的位置是提前设置的（例如，预设位置），在待显示图像经过畸变矫正等图像处理过程之后，将待显示图像对应地显示在显示载体上。当然，为了便于驾驶员观看，预设位置与驾驶员的眼睛相对应，使得驾驶员抬头且眼睛观看显示载体时，能够看到投影在显示载体上的待显示图像，实现抬头显示的功能。如：目标物为驾驶员的眼睛，显示载体为车辆透明窗体，待显示图像为车辆仪表台中仪表显示的数据；由于待显示图像的畸变处理过程根据驾驶员眼睛对应在固定摄像装置坐标系中的坐标值确定畸变参数，根据前述的畸变处理过程对待显示图像进行处理，畸变矫正后的所述待显示图像投影到所述车辆透明窗体的预设位置时，使得驾驶员在抬头时（不低头）便可看见车辆仪表盘中的数据。该方法方便了驾驶员的安全驾驶，且处理过程简单，便于将HUD应用在常规的量产车中。
本公开还提供了一种图像处理装置,在其中一种实施方式中,如图7所示,包括确定模块10、获取模块20、处理模块30。
确定模块10,用于确定目标物在活动范围内的位置信息;
获取模块20,用于通过查表确定与所述位置信息相对应的畸变参数;
处理模块30,用于根据所述畸变参数对待显示图像进行畸变矫正处理以获得畸变矫正后的待显示图像。
本公开还提供了一种图像处理装置,包括处理器、存储器,所述存储器用于存储计算机程序,所述计算机程序被所述处理器执行时实现任一项技术方案所述图像处理方法的步骤。
根据本公开的实施例,存储器中还存储有查找表,所述查找表关联存储活动范围内的位置信息和所对应的畸变参数。
本公开还提供了一种车载抬头显示***,包括固定摄像装置、显示载体和所述的图像处理装置,
所述固定摄像装置用于采集包括所述目标物的第二图像,所述图像处理装置用于根据该第二图像,确定目标物的位置信息,通过查表确定与所述位置信息相对应的畸变参数,根据所述畸变参数对所述待显示图像进行畸变矫正处理以获得畸变矫正后的待显示图像,所述显示载体用于显示畸变矫正后的待显示图像。当然,该固定摄像装置还可用于采集包括移动摄像装置的第一图像,以便图像处理装置基于该第一图像确定移动摄像装置的摄像位置信息。
进一步地，车载抬头显示***还包括移动摄像装置，所述移动摄像装置用于在所述活动范围内移动时获取测试图像，所述图像处理装置用于将所述测试图像与基准图像进行对比，确定与所述移动摄像装置获取测试图像的摄像位置信息所对应的畸变参数。
本公开还提供了一种车辆,包括:所述的图像处理装置;或者任一技术方案所述的车载抬头显示***。
本公开提供的一种图像处理方法、装置、车载抬头显示***及车辆，通过摄像装置（如单目摄像装置或者双目摄像装置等）拍摄包括驾驶员眼睛的图像，通过该图像确定驾驶员眼睛在固定摄像装置坐标系中的坐标值（位置信息），将其与数据库中移动摄像装置的位置点信息进行对比，确定与驾驶员眼睛的位置信息对应的畸变参数。在进行使用之前，采用移动摄像装置模拟目标物在活动范围（如图4）内移动。图4为划分了移动区域的图像，该图像中划分的格子是虚拟的，且格子边缘标识的A、B、C……以及1、2、3……与固定摄像装置坐标系中的二维坐标值相对应；实际的移动摄像装置的位置点信息和目标物的位置信息为固定摄像装置坐标系中的立体坐标值，也即固定摄像装置坐标系为立体坐标系。图5示出了驾驶员在该移动区域的视角范围（优选为驾驶员直视显示载体时的视角范围以及驾驶员头部在有效范围内移动时的视角范围）。图6为驾驶员的眼睛观看显示载体时固定摄像装置拍摄的图像，且其对应在活动范围内，即图6对应图5中的活动范围内的驾驶员眼睛。如前文所述，可以根据齐次方程将固定摄像装置成像平面上点的坐标值转换到固定摄像装置坐标系中的坐标，进而确定移动摄像装置或者驾驶员眼睛在固定摄像装置坐标系中的坐标值。而在进行畸变参数确定的过程中，将测试图像与基准图像中相同物体相同特征的像素点进行对比，确定测试图像矫正到基准图像时的偏差（即畸变参数），再将畸变参数和与该畸变参数对应的移动摄像装置的位置点信息（在固定摄像装置坐标系中的坐标值）以映射关系对应存储在数据库中，将两者关联起来。由于移动摄像装置模拟的是目标物，因此，在确定目标物的位置信息（在固定摄像装置坐标系中的坐标值）时，根据移动摄像装置的位置点信息以及畸变参数的对应关系，调用畸变参数，将畸变参数用于对待显示图像的畸变处理（畸变矫正），且对该待显示图像进行归一化处理，获得归一化处理和畸变矫正后的图像，再通过投影的方式将畸变矫正后的所述待显示图像显示在所述透明窗体上，且待显示图像在透明窗体上显示的位置固定，进而实现HUD在车辆上的应用。整个处理过程快捷、简单，投影显示的方式对色彩的还原度高，具有较好的观看效果。所述待显示图像为车辆仪表盘中数据对应的图像，或者包括车辆仪表盘以及仪表盘实时数据的图像。如图5所示，摄像装置1设置在驾驶员头部前方的车辆顶部，便于拍摄包括驾驶员眼睛的图像；为了便于驾驶员观看，透明窗体优选为车辆前方的挡风玻璃。
需要说明的是,在本公开的实施例中,显示载体可以是车辆的透明窗体、空气中的水分子等。
在本公开的实施例中,待显示图像能够正常且质量较高地显示在显示载体上,驾驶员在视窗区域内的不同观看位置都能够正常地观看待显示图像,进而解决显示效果差的问题。
在本公开的实施例中,将所述畸变参数与对应的位置信息以关联关系存储在数据库中主要说明提前对活动范围进行处理,以关联关系进行存储,便于确定目标物(例如眼睛)在固定摄像装置坐标系中的位置信息后,通过与存储的位置信息匹配后,确定对应的畸变参数,简化了确定畸变参数的过程和缩短了确定畸变参数的时间,方便了抬头显示技术的应用。
在本公开的实施例中,通过摄像装置对目标物(眼睛)进行追踪,并通过摄像装置拍摄的包括所述目标物的图像对目标物的位置信息进行定位,摄像装置可采用现有车辆中的摄像装置,在不增加车内构件的情况下便实现了车辆的抬头显示,节约了车辆内部的容纳空间。
在本公开的实施例中,畸变参数为提前存储在微型控制单元或者数据库中,节约了抬头显示的运算时间,能够快速确定待显示图像显示的位置。此外,在本公开各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以下将参照图8,对根据本公开示例性实施例的图像处理装置的结构进行描述。图8示意性地示出了根据本公开示例性实施例的图像处理装置800的结构框图。图像处理装置800可以用于执行参考图1描述的方法或参考图3描述的方法。为了简明,在此仅对根据本公开示例性实施例的图像处理装置的示意性结构进行描述,而省略了如前参考图1或3描述的方法中已经详述过的细节。
如图8所示，图像处理装置800包括处理单元或处理器801，所述处理器801可以是单个单元或者多个单元的组合，用于执行各种操作；存储器802，其中存储有计算机可执行指令，所述指令在被处理器801执行时，使处理器801执行根据本公开上述方法的步骤。
以上所述仅是本公开的部分实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本公开原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本公开的保护范围。

Claims (17)

  1. 一种用于车载抬头显示装置的图像处理方法,包括:
    确定目标物在活动范围内的位置信息;
    通过查表确定与所述位置信息相对应的畸变参数;以及
    依据所述畸变参数对待显示图像进行畸变矫正处理以获得畸变矫正后的待显示图像。
  2. 根据权利要求1所述的图像处理方法,其中,在所述确定目标物在活动范围内的位置信息的步骤之前,所述图像处理方法还包括:
    将活动范围内的各位置信息和与各位置信息对应的畸变参数进行关联存储。
  3. 根据权利要求2所述的图像处理方法,其中,在所述将活动范围内的各位置信息和与各位置信息对应的畸变参数进行关联存储的步骤中,包括:
    确定移动摄像装置在所述活动范围内的摄像位置信息;
    确定所述摄像位置信息所对应的畸变参数,并将所述摄像位置信息与所对应的畸变参数进行关联存储;以及
    所述移动摄像装置在所述活动范围内移动,以将活动范围内的各位置信息和与各位置信息对应的畸变参数进行关联存储。
  4. 根据权利要求3所述的图像处理方法,其中,所述确定移动摄像装置在所述活动范围内的摄像位置信息的步骤中,包括:
    通过固定摄像装置采集所述移动摄像装置在所述活动范围内的第一图像;
    对所述第一图像进行解析,确定所述第一图像中所述移动摄像装置在所述固定摄像装置的成像平面上的成像位置信息;以及
    对所述成像位置信息进行转换,以确定移动摄像装置在所述活动范围内的摄像位置信息。
  5. 根据权利要求3所述的图像处理方法,其中,所述确定所述摄像位置信息所对应的畸变参数的步骤中,包括:
    通过所述移动摄像装置获取测试图像;
    将所述测试图像与基准图像进行对比,确定所述摄像位置信息所对应的畸变参数。
  6. 根据权利要求3所述的图像处理方法,其中,将所述摄像位置信息与所对应的畸变参数进行关联存储还包括:
    以查找表的方式存储所述摄像位置信息和所对应的畸变参数。
  7. 根据权利要求1所述的图像处理方法,其中,所述活动范围是所述目标物能够观看到显示载体上显示的待显示图像的区域。
  8. 根据权利要求1所述的图像处理方法,其中,在所述确定目标物在活动范围内的位置信息的步骤之前,具体包括:
    检测所述活动范围内是否存在目标物;
    响应于检测到所述活动范围内存在目标物,拍摄包括所述目标物的图像。
  9. 根据权利要求8所述的图像处理方法,其中,检测所述活动范围内是否存在目标物还包括:
    通过传感器感测所述活动范围内是否存在目标物。
  10. 根据权利要求9所述的图像处理方法,其中,所述传感器包括红外传感器、重力传感器中的一种或多种。
  11. 根据权利要求1所述的图像处理方法,还包括:
    通过投影的方式将矫正处理后的所述待显示图像显示在显示载体上。
  12. 一种图像处理装置,其特征在于,包括:
    确定模块,用于确定目标物在活动范围内的位置信息;
    获取模块,用于通过查表确定与所述位置信息相对应的畸变参数;以及
    处理模块,用于根据所述畸变参数对待显示图像进行畸变矫正处理以获得畸变矫正后的待显示图像。
  13. 一种图像处理装置,包括处理器、存储器,所述存储器用于存储计算机程序,所述计算机程序被所述处理器执行时实现权利要求1-11任一项所述图像处理方法的步骤。
  14. 根据权利要求13所述的图像处理装置,其中,所述存储器中存储有查找表,所述查找表关联存储活动范围内的位置信息和所对应的畸变参数。
  15. 一种车载抬头显示***,包括固定摄像装置、显示载体和权利要求12所述的图像处理装置,
    所述固定摄像装置用于采集包括所述目标物的第二图像,
    所述图像处理装置用于根据所述第二图像,确定目标物的位置信息,通过查表确定与所述位置信息相对应的畸变参数,根据所述畸变参数对所述待显示图像进行畸变矫正处理以获得畸变矫正后的待显示图像,以及
    所述显示载体用于显示畸变矫正后的待显示图像。
  16. 根据权利要求15所述的车载抬头显示***，还包括移动摄像装置，所述移动摄像装置用于在所述活动范围内移动时获取测试图像，并且其中所述图像处理装置用于将所述测试图像与基准图像进行对比，确定与所述移动摄像装置获取测试图像时的摄像位置信息所对应的畸变参数。
  17. 一种车辆,包括:权利要求12或13所述的图像处理装置。
PCT/CN2018/111905 2018-01-12 2018-10-25 图像处理方法、装置、车载抬头显示***及车辆 WO2019137065A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/470,238 US11120531B2 (en) 2018-01-12 2018-10-25 Method and device for image processing, vehicle head-up display system and vehicle
EP18900446.8A EP3739545A4 (en) 2018-01-12 2018-10-25 IMAGE PROCESSING METHOD AND APPARATUS, VEHICLE MOUNTED HEAD-UP DISPLAY SYSTEM, AND VEHICLE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810032426.3A CN108171673B (zh) 2018-01-12 2018-01-12 图像处理方法、装置、车载抬头显示***及车辆
CN201810032426.3 2018-01-12

Publications (1)

Publication Number Publication Date
WO2019137065A1 true WO2019137065A1 (zh) 2019-07-18

Family

ID=62514705

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111905 WO2019137065A1 (zh) 2018-01-12 2018-10-25 图像处理方法、装置、车载抬头显示***及车辆

Country Status (4)

Country Link
US (1) US11120531B2 (zh)
EP (1) EP3739545A4 (zh)
CN (1) CN108171673B (zh)
WO (1) WO2019137065A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932506A (zh) * 2020-07-22 2020-11-13 四川大学 一种提取图像中非连续直线的方法

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171673B (zh) 2018-01-12 2024-01-23 京东方科技集团股份有限公司 图像处理方法、装置、车载抬头显示***及车辆
CN109993713B (zh) * 2019-04-04 2021-11-05 阿波罗智联(北京)科技有限公司 车载平视显示***图像畸变矫正方法和装置
CN110728638A (zh) * 2019-09-25 2020-01-24 深圳疆程技术有限公司 一种图像的畸变矫正方法、车机及汽车
CN111080545B (zh) * 2019-12-09 2024-03-12 Oppo广东移动通信有限公司 人脸畸变校正方法、装置、终端设备和存储介质
CN111127365B (zh) * 2019-12-26 2023-08-29 重庆矢崎仪表有限公司 基于三次样条曲线拟合的hud畸变矫正方法
CN113452897B (zh) * 2020-03-27 2023-04-07 浙江宇视科技有限公司 一种图像处理方法、***、设备及计算机可读存储介质
CN113514952A (zh) * 2020-04-09 2021-10-19 华为技术有限公司 抬头显示装置和抬头显示方法
CN113672077A (zh) * 2020-05-15 2021-11-19 华为技术有限公司 一种数据处理方法及其设备
CN113515981A (zh) * 2020-05-22 2021-10-19 阿里巴巴集团控股有限公司 识别方法、装置、设备和存储介质
US11263729B2 (en) * 2020-05-26 2022-03-01 Microsoft Technology Licensing, Llc Reprojection and wobulation at head-mounted display device
CN112164377B (zh) * 2020-08-27 2022-04-01 江苏泽景汽车电子股份有限公司 一种hud图像矫正的自适配方法
CN112731664A (zh) * 2020-12-28 2021-04-30 北京经纬恒润科技股份有限公司 一种车载增强现实抬头显示***及显示方法
CN113421346B (zh) * 2021-06-30 2023-02-17 暨南大学 一种增强驾驶感的ar-hud抬头显示界面的设计方法
CN115689920B (zh) * 2022-10-26 2023-08-11 北京灵犀微光科技有限公司 Hud成像的辅助矫正方法、装置及矫正***
CN116017174B (zh) * 2022-12-28 2024-02-06 江苏泽景汽车电子股份有限公司 Hud畸变矫正方法、装置及***

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103792674A (zh) * 2014-01-21 2014-05-14 浙江大学 一种测量和校正虚拟现实显示器畸变的装置和方法
CN104656253A (zh) * 2013-11-21 2015-05-27 中强光电股份有限公司 抬头显示***
US20160368417A1 (en) * 2015-06-17 2016-12-22 Geo Semiconductor Inc. Vehicle vision system
CN206332777U (zh) * 2016-12-23 2017-07-14 深圳点石创新科技有限公司 车载抬头显示***
CN108171673A (zh) * 2018-01-12 2018-06-15 京东方科技集团股份有限公司 图像处理方法、装置、车载抬头显示***及车辆

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004079653A1 (ja) * 2003-03-05 2004-09-16 3D Media Co. Ltd. 画像処理方法、画像処理システム、画像処理装置、及びコンピュータプログラム
EP1897057B1 (de) * 2005-06-29 2014-04-02 Bayerische Motoren Werke Aktiengesellschaft Verfahren zur verzerrungsfreien darstellung
US7835592B2 (en) * 2006-10-17 2010-11-16 Seiko Epson Corporation Calibration technique for heads up display system
US20080088528A1 (en) * 2006-10-17 2008-04-17 Takashi Shindo Warp Image Circuit
CN101289064A (zh) * 2007-04-18 2008-10-22 宇诚科技股份有限公司 移动式抬头显示导航装置
EP2962175B1 (en) * 2013-03-01 2019-05-01 Tobii AB Delay warp gaze interaction
US10026095B2 (en) * 2013-09-10 2018-07-17 Chian Chiu Li Systems and methods for obtaining and utilizing user reaction and feedback
US20160035144A1 (en) 2014-07-30 2016-02-04 GM Global Technology Operations LLC Supplementing compact in-vehicle information displays
DE102015215573A1 (de) * 2014-08-14 2016-02-18 Ford Global Technologies, Llc Verfahren zur eingabe eines weges für ein fahrzeug und einen anhänger
KR101921672B1 (ko) * 2014-10-31 2019-02-13 후아웨이 테크놀러지 컴퍼니 리미티드 이미지 처리 방법 및 장치
CN105700136A (zh) * 2014-11-27 2016-06-22 比亚迪股份有限公司 一种车载抬头显示***及汽车
CN105774679B (zh) * 2014-12-25 2019-01-29 比亚迪股份有限公司 一种汽车、车载抬头显示***及其投影图像高度调节方法
US20160235291A1 (en) * 2015-02-13 2016-08-18 Dennis Choohan Goh Systems and Methods for Mapping and Evaluating Visual Distortions
CN204731990U (zh) * 2015-05-14 2015-10-28 青岛歌尔声学科技有限公司 一种车载抬头显示设备及其信息处理装置
CN105404011B (zh) * 2015-12-24 2017-12-12 深圳点石创新科技有限公司 一种抬头显示器的3d图像校正方法以及抬头显示器
CN105527710B (zh) * 2016-01-08 2018-11-20 北京乐驾科技有限公司 一种智能抬头显示***
CN106226905B (zh) 2016-08-23 2019-08-23 北京乐驾科技有限公司 一种抬头显示装置
CN106446801B (zh) * 2016-09-06 2020-01-07 清华大学 基于超声主动探测的微手势识别方法及***
CN106226910A (zh) * 2016-09-08 2016-12-14 邹文韬 Hud***及其影像调整方法
CN106846409B (zh) * 2016-10-28 2020-05-01 北京鑫洋泉电子科技有限公司 鱼眼相机的标定方法及装置
CN106799993B (zh) * 2017-01-09 2021-06-11 智车优行科技(北京)有限公司 街景采集方法和***、车辆
CN107272194A (zh) * 2017-07-12 2017-10-20 湖南海翼电子商务股份有限公司 抬头显示装置及其方法
CN107527324B (zh) * 2017-07-13 2019-07-12 江苏泽景汽车电子股份有限公司 一种hud的图像畸变矫正方法
KR102463712B1 (ko) * 2017-11-24 2022-11-08 현대자동차주식회사 가상 터치 인식 장치 및 그의 인식 오류 보정 방법

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3739545A4 *

Also Published As

Publication number Publication date
CN108171673A (zh) 2018-06-15
US20200334795A1 (en) 2020-10-22
CN108171673B (zh) 2024-01-23
EP3739545A4 (en) 2021-11-10
US11120531B2 (en) 2021-09-14
EP3739545A1 (en) 2020-11-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18900446

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018900446

Country of ref document: EP

Effective date: 20200812