CN117765080A - Display method, display device, electronic equipment and storage medium

Info

Publication number: CN117765080A
Application number: CN202311812674.7A
Authority: CN
Prior art keywords: image, view angle, transformation matrix, feature vector
Legal status: Pending
Other languages: Chinese (zh)
Inventors: Li Zhiqiang (李志强), Sha Wen (沙文), Cheng Hu (程虎), Lin Yin (林垠), Yin Bing (殷兵)
Current and original assignee: iFlytek Co Ltd
Application filed by iFlytek Co Ltd; priority application CN202311812674.7A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a display method, a display device, an electronic device and a storage medium. The method comprises the following steps: acquiring an image under the current view angle of a camera; determining a region of interest of the image, and performing feature extraction on the region of interest to obtain a feature vector under the current view angle; acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, wherein the preset position feature library is constructed based on the feature vectors and transformation matrices corresponding to the calibration images under each view angle; and adjusting the sight line data corresponding to the image based on the target transformation matrix, and performing head-up display with the adjusted sight line data. The method, the device, the electronic device and the storage medium provided by the invention realize adaptive correction of the head-up display position in the case where the driver position is fixed but the camera view angle can be manually adjusted, providing a stable and reliable intelligent-cabin head-up display function.

Description

Display method, display device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of intelligent driving technologies, and in particular, to a display method, a display device, an electronic device, and a storage medium.
Background
Head-Up Display (HUD), an innovative human-computer interaction method, has been widely used in many high-end automobiles. The HUD projects information such as vehicle speed, navigation instructions and lane departure warnings directly in front of the driver's line of sight, helping the driver better grasp road conditions and the driving route and acquire key information without looking down, thereby improving driving safety and convenience.
Due to differences in factors such as driver height and driving habits, the HUD position needs to be adjusted according to personal requirements to ensure that the driver can clearly see and read the information displayed by the HUD. The prior art mainly detects the driver's eyeball position through sensors and image recognition technology and automatically adjusts the HUD position accordingly. This adjustment mode works to some extent for automatic HUD position adjustment when the camera position is fixed. However, when the driver position is fixed but the camera view angle can be manually adjusted, the HUD position determined by the existing adjustment mode is tied to the camera view angle; when the camera view angle changes, the HUD position becomes inconsistent with the driver's line of sight, degrading the head-up display effect.
Disclosure of Invention
The invention provides a display method, a display device, an electronic device and a storage medium, which are used to solve the problem in the prior art that the head-up display position cannot be accurately adjusted when the driver position is fixed but the camera view angle can be manually adjusted, which degrades the head-up display effect.
The invention provides a display method, which comprises the following steps:
acquiring an image under the current view angle of a camera;
determining a region of interest of the image, and performing feature extraction on the region of interest to obtain a feature vector under the current view angle;
acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, wherein the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle;
and adjusting the sight line data corresponding to the image based on the target transformation matrix, and applying the adjusted sight line data to perform head-up display.
According to the display method provided by the invention, the construction steps of the preset position feature library comprise:
obtaining a standard position image and a calibration image under each view angle;
determining a transformation matrix corresponding to the transformation of the calibration image under each view angle to the standard position image;
determining the regions of interest of the calibration images under each view angle, and performing feature extraction on the regions of interest of the calibration images under each view angle to obtain the feature vectors corresponding to the calibration images under each view angle;
and constructing the preset position feature library based on the feature vector corresponding to the calibration image under each view angle and a transformation matrix corresponding to the calibration image under each view angle transformed to the standard position image.
According to the display method provided by the invention, the determination of the transformation matrix corresponding to the transformation of the calibration image under each view angle to the standard position image comprises the following steps:
acquiring each characteristic point in the calibration image at any view angle;
matching each characteristic point in the calibration image at any view angle with each target characteristic point in the standard position image;
and under the condition that the number of the successfully matched characteristic points reaches a preset threshold value, determining a transformation matrix corresponding to the transformation of the calibration image under any view angle to the standard position image based on each characteristic point in the calibration image under any view angle and each target characteristic point in the standard position image.
According to the display method provided by the invention, the standard position image and the calibration image under each view angle are both images comprising a checkerboard.
According to the display method provided by the invention, the method for determining the region of interest of the image and extracting the characteristics of the region of interest to obtain the characteristic vector under the current view angle comprises the following steps:
performing face detection on the image and determining a face area in the image;
deleting the face region from the image to obtain a region of interest of the image;
and performing feature extraction on the region of interest based on an autoencoder to obtain the feature vector under the current view angle.
According to the display method provided by the invention, the obtaining of the target transformation matrix corresponding to the feature vector under the current view angle based on the preset position feature library comprises the following steps:
determining a target feature vector corresponding to the feature vector under the current view angle based on the similarity between the feature vector under the current view angle and the feature vector corresponding to the calibration image under each view angle in the preset position feature library;
and based on the target feature vector and the preset position feature library, acquiring a transformation matrix corresponding to the target feature vector, and taking the transformation matrix corresponding to the target feature vector as a target transformation matrix corresponding to the feature vector under the current view angle.
According to the display method provided by the invention, the line of sight data corresponding to the image is adjusted based on the target transformation matrix, and the adjusted line of sight data is applied to perform head-up display, and the method comprises the following steps:
adjusting the sight line data corresponding to the image based on the target transformation matrix to obtain adjusted sight line data;
and determining a display position based on the adjusted sight line data, and performing head-up display by applying the display position.
The present invention also provides a display device including:
the image acquisition unit is used for acquiring an image under the current view angle of the camera;
the feature extraction unit is used for determining a region of interest of the image, and performing feature extraction on the region of interest to obtain a feature vector under the current view angle;
the matrix acquisition unit is used for acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, and the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle;
and the adjustment display unit is used for adjusting the sight line data corresponding to the image based on the target transformation matrix and performing head-up display by applying the adjusted sight line data.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a display method as described in any one of the above when executing the program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a display method as described in any of the above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements a display method as described in any one of the above.
According to the display method, the device, the electronic device and the storage medium, the target transformation matrix corresponding to the feature vector under the current view angle is obtained based on the preset position feature library, so that the sight line data corresponding to the image under the current view angle can be adjusted based on the target transformation matrix and the adjusted sight line data can be used for head-up display. This eliminates the influence of camera view angle changes on the head-up display, ensures that the head-up display position is consistent with the driver's line-of-sight direction, and achieves adaptive correction of the head-up display position in the case where the driver position is fixed but the camera view angle is adjustable. In addition, by performing feature extraction based on the regions of interest of the image and of the calibration images under each view angle, the problem of inaccurate sight line data correction caused by indistinct cockpit feature points and interference from the driver's face information can be effectively solved, thereby improving the accuracy of head-up display position determination.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a display method provided by the invention;
FIG. 2 is a first schematic flow chart of the preset position feature library construction provided by the present invention;
FIG. 3 is a schematic structural diagram of an autoencoder provided by the present invention;
FIG. 4 is a second schematic flow chart of the construction of the preset position feature library according to the present invention;
FIG. 5 is a schematic flow chart of the sight line data correction provided by the present invention;
fig. 6 is a schematic structural diagram of a display device according to the present invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
With the continuous development of technology, the automotive industry is undergoing a transition from traditional automobiles to intelligent automobiles. The intelligent cabin, as an important component of the intelligent automobile, aims to provide users with a more comfortable, convenient and safe driving experience. Among its features, the Head-Up Display (HUD) is an innovative human-computer interaction method that has been widely used in many high-end automobiles. By projecting vehicle information in front of the driver's line of sight, the HUD enables the driver to acquire key information without looking down, improving driving safety and convenience. For example, information such as vehicle speed, navigation instructions and lane departure warnings is projected directly in front of the driver's line of sight, helping the driver better grasp road conditions and the driving route. This reduces the risk of looking down at the instrument panel, effectively reduces how often the driver lowers his or her head while driving, relieves neck and shoulder fatigue, and improves driving comfort.
Due to differences in factors such as driver height and driving habits, the HUD position in the intelligent cabin needs to be adjusted according to personal requirements to ensure that the driver can clearly see and read the information displayed by the HUD. The prior art mainly detects the driver's eyeball position through sensors and image recognition technology and automatically adjusts the HUD position accordingly. This adjustment mode works to some extent for HUD position adjustment when the camera position is fixed. However, when the driver position is fixed but the camera view angle can be manually adjusted, the HUD position determined by the existing adjustment mode is tied to the camera view angle; when the camera view angle changes, the HUD position becomes inconsistent with the driver's line of sight, degrading the head-up display effect.
For example, when the camera is mounted on the steering wheel, the driver may adjust the steering wheel out of driving habit, changing the camera's view angle. Based on the driver's face information collected under the current view angle, the system re-predicts a line-of-sight direction and a line-of-sight center point for projecting the vehicle information; but since the height of the driver's line of sight is unchanged, the HUD display position no longer matches the driver's line of sight, and the driver cannot properly view and read the information displayed by the HUD.
In this regard, the embodiment of the invention provides a display method that achieves adaptive correction of the HUD position when the driver position is fixed but the camera view angle can be manually adjusted, ensures that the HUD display position is consistent with the driver's line-of-sight direction, provides a stable and reliable intelligent-cabin HUD function, and improves the driving experience, thereby overcoming the above defects.
Fig. 1 is a schematic flow chart of a display method provided by the present invention, as shown in fig. 1, the method includes:
step 110, obtaining an image under the current view angle of a camera;
Specifically, the image under the current view angle refers to the image captured by the camera at its current shooting angle. The image under the current view angle of the camera may be acquired by invoking the camera function through an interface of the camera device or an API (Application Programming Interface). It will be appreciated that the image under the current view angle is determined by the position and angle of the camera; by adjusting the position and angle of the camera, the shooting angle can be changed and images under different view angles can be acquired.
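As a minimal sketch of this acquisition step, assuming OpenCV is available as the camera interface (the device index and error handling are illustrative assumptions, not from the patent):

```python
import cv2

cap = cv2.VideoCapture(0)   # device index 0 is a placeholder for the cabin camera
ok, frame = cap.read()      # one BGR frame under the camera's current view angle
cap.release()
if not ok:
    raise RuntimeError("failed to read a frame from the camera")
```

An in-vehicle system would typically replace the device index with its own camera SDK handle, but the rest of the pipeline is unchanged.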
Step 120, determining an interested region of the image, and extracting features of the interested region to obtain a feature vector under the current view angle;
specifically, after obtaining an image at the current view angle of the camera, a region of interest of the image may be determined first, where the region of interest of the image refers to a region related to a task and containing important information in the entire image. Determining the region of interest of the image may be performed by methods such as object detection, image segmentation, and key point extraction, for example, the object detection may be performed using a deep learning model, or the image segmentation may be performed by methods such as threshold segmentation and edge detection, so as to determine the region of interest of the image.
It can be understood that if feature points were extracted directly from the image under the current view angle, factors such as camera imaging quality, illumination conditions and occlusion by the driver could make the feature points indistinct, reducing the accuracy of feature point extraction. Moreover, the driver's face information and lighting cause substantial interference, and occlusion by the driver makes it even harder to extract fixed feature points in the cockpit while driving. Therefore, to avoid interference from the driver's face information, in the embodiment of the invention, when determining the region of interest of the image, the face region can be determined on the original image; after the face region information is deleted, only the other regions are retained as the region of interest, so that factors in the image that interfere with feature extraction are filtered out.
After determining the region of interest of the image, feature extraction may be performed on it; feature extraction refers to extracting key visual features from the region of interest to describe and represent the information of the region. For example, a Convolutional Neural Network (CNN) may be used to extract a feature vector, or conventional image feature descriptors, such as Local Binary Patterns (LBP) and the Histogram of Oriented Gradients (HOG), may be used to extract the feature vector under the current view angle.
Here, the feature vector under the current view angle refers to a feature vector obtained by extracting features from the region of interest under the current view angle, and the feature vector can be used to represent image content and features under the current view angle so as to be used for subsequent image matching tasks.
In the embodiment of the invention, by determining the region of interest of the image and extracting the important task-related features, factors in the image that interfere with feature extraction can be filtered out, effectively avoiding the problem of vision correction failure caused by interference such as the driver's face information.
Step 130, acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, wherein the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle;
Specifically, the preset position feature library is a database constructed according to feature vectors and transformation matrixes corresponding to calibration images under various angles of view, and the database records the corresponding relations between the feature vectors and the transformation matrixes under different angles of view of the camera. Here, the calibration image under each view angle refers to an image with specific markers acquired under different view angles of the camera, and the markers have specific shapes and structures so as to provide reliable feature points and edge lines, so that feature point extraction and matching, such as checkerboard, grid, laser lattice and the like, are facilitated. The feature vector corresponding to the calibration image refers to the feature vector extracted from the calibration image, and can be used for describing information such as local texture, shape and the like of the calibration image. The transformation matrix corresponding to the calibration image refers to a transformation matrix corresponding to mapping the calibration image to the standard position image, and the transformation matrix describes transformation relations of the calibration image under different visual angles. The standard position image refers to a reference image used in the process of determining the transformation matrix under different visual angles, and in order to ensure that the final head-up display position is consistent with the driver's visual line direction, the standard position image may be an image under the visual angle consistent with the driver's visual line direction, so as to ensure the optimal display effect and user experience.
After the feature vector under the current view angle is obtained, the similarity between the feature vector under the current view angle and the feature vector under each view angle in the preset position feature library can be calculated, and a transformation matrix corresponding to the feature vector with the largest similarity is selected to be used as a target transformation matrix corresponding to the feature vector under the current view angle, so that the feature vector under the current view angle can be mapped onto a standard position image, and the vision correction is realized. Here, the target transformation matrix refers to a transformation matrix that maps feature vectors at the current view angle onto the standard position image, and describes a transformation relationship between the current view angle and the standard view angle.
Before step 130 is performed, a preset location feature library may be pre-constructed, and specifically, the preset location feature library may be constructed by the following manner: firstly, determining a standard position and HUD position information corresponding to the standard position, acquiring a standard position image containing a checkerboard and a calibration image under each shooting angle, and acquiring a transformation matrix from the calibration image under each shooting angle to the standard position image; and secondly, extracting features of the calibration image under each shooting angle, taking the extracted feature vector as a key and a corresponding transformation matrix as a value, and storing the value into a database to construct a preset position feature library.
And 140, adjusting the sight line data corresponding to the image based on the target transformation matrix, and applying the adjusted sight line data to perform head-up display.
Specifically, after the target transformation matrix corresponding to the feature vector under the current view angle is obtained, the line-of-sight data corresponding to the image under the current view angle may be adjusted based on the target transformation matrix. Here, the line-of-sight data corresponding to the image at the current angle of view refers to coordinate data of a position or focus at which the human eye is gazed, which is determined by the image at the current angle of view, and may include, for example, a line-of-sight direction, a line-of-sight center point, and the like.
Based on the target transformation matrix, when the sight line data corresponding to the image under the current visual angle is adjusted, the target transformation matrix can be directly acted on the sight line data, so that the sight line data is mapped to the standard visual angle, and the correction and alignment of the sight line data are realized. Here, the standard viewing angle is a viewing angle consistent with the direction of the driver's line of sight.
By adjusting the sight line data corresponding to the image under the current view angle, the adjusted sight line data can be obtained, and head-up display can be performed based on it. Head-up display means displaying information at the position of the user's line of sight, so that the user can obtain relevant information without looking down. By applying the adjusted sight line data, the relevant information can be displayed accurately near the focus of the user's line of sight, improving user experience and interaction. For example, in a real-time navigation system, performing head-up display of navigation information according to the user's sight line data can improve driver safety and convenience.
According to the display method provided by the embodiment of the invention, the target transformation matrix corresponding to the feature vector under the current view angle is obtained based on the preset position feature library, so that the sight line data corresponding to the image under the current view angle can be adjusted based on the target transformation matrix and the adjusted sight line data can be used for head-up display. This eliminates the influence of camera view angle changes on the head-up display, ensures that the head-up display position is consistent with the driver's line-of-sight direction, and achieves adaptive correction of the head-up display position in the case where the driver position is fixed but the camera view angle is adjustable. In addition, by performing feature extraction based on the regions of interest of the image and of the calibration images under each view angle, the problem of inaccurate sight line data correction caused by indistinct cockpit feature points and interference from the driver's face information can be effectively solved, thereby improving the accuracy of head-up display position determination.
Based on the above embodiment, fig. 2 is a first schematic flow chart of the preset position feature library construction provided by the present invention; as shown in fig. 2, the construction steps of the preset position feature library include:
step 210, obtaining a standard position image and a calibration image under each view angle;
Specifically, the standard position image refers to a reference image used in the process of determining the transformation matrix under different viewing angles, namely, an image under a standard viewing angle, and the standard viewing angle refers to a viewing angle consistent with the direction of the sight of the driver, so that the best head-up display effect and user experience can be ensured.
Because the fixed features in the original cockpit image containing the driver are not obvious and are influenced by the driver's head features, it is difficult to extract distinctive feature points for calculating the transformation matrix to the standard position image, and problems such as uncontrollable failure of feature point matching arise. In this regard, the embodiment of the present invention may use a specific identifier (such as a checkerboard) to obtain the feature points of the image under different view angles: not only is the interference introduced by the driver avoided, but the calibration image containing the specific identifier also provides stable feature points for calculation, yielding a stable and accurate transformation relationship.
When the standard position image and the calibration image under each view angle are obtained, a drawing including a specific marker can be firstly manufactured and attached to the surface of a planar object, the drawing is fixed at the position of the face of a driver, and then a group of cockpit images with the markers under different angles are shot through a mobile camera, so that the standard position image and the calibration image under each view angle can be obtained.
220, determining a transformation matrix corresponding to the transformation of the calibration image under each view angle to the standard position image;
specifically, after the standard position image and the calibration image under each view angle are obtained, a feature extraction algorithm may be used to extract feature points in the standard position image and the calibration image under each view angle, where the feature points may be corner points, edges, and the like on the image. And matching the characteristic points under each view angle with the characteristic points of the standard position image by using a characteristic matching algorithm (such as RANSAC), and establishing the corresponding relation between each view angle image and the standard position image through matching. Using the matched pairs of feature points, a transformation matrix of the calibration image to the standard position image at each view angle can be calculated by a least square method or other corresponding algorithm, which describes how the calibration image at each view angle is mapped into the coordinate system of the standard position image.
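An illustrative sketch of this step follows, assuming ORB features and OpenCV's RANSAC-based homography estimation; the patent only requires a feature matching algorithm (it names RANSAC) and a least-squares solver, so the concrete detector, matcher and threshold value here are assumptions:

```python
import cv2
import numpy as np

MIN_MATCHES = 10  # assumed preset threshold on successfully matched feature points

def estimate_transform(calib_img, standard_img):
    """Homography mapping a calibration view into the standard position image."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(calib_img, None)     # calibration-view features
    kp2, des2 = orb.detectAndCompute(standard_img, None)  # standard-image features
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < MIN_MATCHES:
        return None  # too few correspondences to trust a transformation
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust least-squares fit
    return H
```

The returned 3x3 matrix H describes how the calibration image under that view angle maps into the coordinate system of the standard position image.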
Step 230, determining the interested region of the calibration image under each view angle, and extracting the characteristics of the interested region of the calibration image under each view angle to obtain the characteristic vector corresponding to the calibration image under each view angle;
It should be noted that, since no identifier is available as a reference in actual application, the area where the specific identifier was located (i.e., the face area) must again be masked and features must be found in the remaining regions. In this regard, the embodiment of the invention determines the region of interest of the calibration image under each view angle, so that feature extraction is performed based on the region of interest and interference from face information is avoided.
Specifically, for the calibration image under each view angle, the region of interest may be determined by an image processing algorithm, such as edge detection, object detection, segmentation, and the like, or may be determined by a manual labeling method, which is not particularly limited in the embodiment of the present invention. For example, for a calibration image containing a checkerboard at any view angle, a checkerboard region in the calibration image can be determined by target detection, the region information is deleted, and the remaining other regions can be determined as the region of interest of the calibration image at the view angle. For another example, on an original gray image containing a face at any view angle, a pre-trained face detector is used or prior information is used to determine a face region, and the face region information is deleted from the original gray image, so that only other regions are reserved, and a region of interest at the view angle is obtained.
For the region of interest under each view angle, various image feature extraction algorithms, such as the Scale-Invariant Feature Transform (SIFT) and the Histogram of Oriented Gradients (HOG), may be used to extract features, thereby obtaining the feature vector corresponding to the calibration image under each view angle. Here, the feature vector corresponding to the calibration image under each view angle is obtained by extracting features from the region of interest of that calibration image; it is a vector of feature values representing the feature information of the calibration image under that view angle. The feature vector can serve as the feature representation of the calibration image under different view angles for subsequent analysis and comparison tasks.
Further, for the region of interest of the calibration image under any view angle, directly extracting feature descriptors is not advisable, because what remains after the face region is masked are other in-cockpit regions in which no salient feature points are available. Instead, the embodiment of the invention uses an autoencoder based on a deep learning model to extract features, so that the feature saliency under that view angle is preserved to the maximum extent.
And 240, constructing a preset position feature library based on feature vectors corresponding to the calibration images under all the view angles and a transformation matrix corresponding to the transformation of the calibration images under all the view angles to the standard position images.
Specifically, after the transformation matrix corresponding to the calibration image under each view angle is determined in the above step 220, and the feature vector corresponding to the calibration image under each view angle is obtained in the above step 230, the feature vector of the calibration image under each view angle may be associated with the corresponding transformation matrix, for example, a data structure (such as a key value pair, a dictionary, a hash table, etc.) may be used to store the correspondence between the feature vector and the transformation matrix, so as to construct and obtain the preset position feature library. It should be appreciated that as much data as possible at different angles may be collected to enable the library of preset location features to cope with all possible situations.
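As a concrete illustration of this key-value construction, a minimal in-memory sketch follows; the patent does not prescribe a storage backend, so the parallel-array layout and all names here are assumptions chosen to keep the later nearest-neighbour lookup simple:

```python
import numpy as np

library_keys = []    # D-dimensional feature vectors, one per calibrated view angle
library_values = []  # corresponding 3x3 homographies to the standard position image

def register_view(feature_vec, H):
    """Store one (feature vector, transformation matrix) pair for a view angle."""
    library_keys.append(np.asarray(feature_vec, dtype=np.float32))
    library_values.append(np.asarray(H, dtype=np.float64))
```

Calling register_view once per calibrated shooting angle, with as many angles as practical, populates the preset position feature library.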
The method provided by the embodiment of the invention can provide an efficient method for matching and positioning by constructing the preset position feature library, and can quickly and accurately find a required transformation matrix by extracting the features of the image under the current view angle and matching the feature vectors in the preset position feature library in the application process, thereby completing the vision correction.
Based on the above embodiment, step 220 specifically includes:
acquiring each characteristic point in the calibration image at any view angle;
matching each characteristic point in the calibration image under any view angle with each target characteristic point in the standard position image;
under the condition that the number of the successfully matched characteristic points reaches a preset threshold value, determining a transformation matrix corresponding to the transformation of the calibration image under any view angle to the standard position image based on each characteristic point in the calibration image under any view angle and each target characteristic point in the standard position image.
Specifically, each feature point in the calibration image under any view angle refers to a salient feature point extracted from the image shot under that view angle by a feature point detection algorithm or by an encoder based on a deep learning model; such feature points can be corner points, edge points or texture points in the image and have a certain uniqueness and stability. Each target feature point in the standard position image refers to a salient feature point extracted from the standard position image; the target feature points can be obtained by manual labeling or automatic extraction.
After extracting each target characteristic point in the standard position image and each characteristic point in the calibration image under any view angle, each characteristic point in the calibration image under the view angle can be matched with each target characteristic point in the standard position image. For example, feature point matching algorithms, such as feature descriptor based matching algorithms (e.g., nearest neighbor based algorithms, similarity metric based, etc.), may be used to find similar pairs of feature points in the two images.
If the number of successfully matched feature points reaches a preset threshold, there are enough similar feature points in the two images, i.e., a good feature correspondence exists between them. In this case, an LM (Levenberg-Marquardt) optimization method can be used to calculate the homography transformation matrix mapping the calibration image under that view angle to the standard position image. Here, the LM optimization method is a nonlinear least squares optimization algorithm, and the preset threshold is a predetermined value used to judge the quality of the feature point matching.
According to the method provided by the embodiment of the invention, the transformation matrix from the calibration image to the standard position image can be accurately determined by matching the characteristic points in the calibration image and the target characteristic points in the standard position image at any view angle, so that the positioning precision and accuracy are improved, and the self-adaptive correction of the head-up display positions at different view angles is realized.
Based on any of the above embodiments, the standard position image and the calibration image at each viewing angle are both images including a checkerboard.
It should be noted that, since the fixed features in a cockpit image that contains the driver are not obvious and are affected by the driver's head features, it is difficult to extract distinctive feature points for calculating the transformation matrix to the standard position image. In addition, even if feature points are successfully extracted and matched, the resulting vision correction transformation parameters may be unstable, because the driver's body posture, line-of-sight direction and so on may change during driving, causing frequent adjustment of the transformation parameters and affecting the stability of the vision correction.
In contrast, the embodiment of the invention uses the checkerboard to acquire the characteristic points of the standard position image and the calibration image under each view angle, so that not only can the interference introduced by a driver be avoided, but also the stable characteristic points can be provided for calculating a stable and accurate transformation relation.
Specifically, the image including the checkerboard refers to an image including a checkerboard pattern, wherein the checkerboard pattern is composed of black and white squares which are arranged in a staggered manner, and has certain regularity and identifiability. The corner points of the checkerboard can be easily detected by an image processing algorithm, and can be used as characteristic points for matching and positioning.
When obtaining the standard position image and the calibration images under each view angle, a checkerboard calibration drawing can be made, attached to the surface of a planar object and fixed at the position of the driver's face; a set of cockpit images with the checkerboard is then shot from different angles with a movable camera. For each captured cockpit image, the feature points of all the checkerboards in the image (i.e., the corner points where black and white squares meet) may be detected. Since all the corner points of the checkerboard calibration drawing are distinctly featured in cockpit images shot from different angles, they are easily matched to the corresponding checkerboard corner points in the standard position image. If the number N of matched corners satisfies N ≥ 4, the homography transformation matrix H under the corresponding view angle can be obtained by an optimization method such as LM.
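A sketch of this corner-detection step, assuming OpenCV and a 9x6 inner-corner board (the patent does not fix the board size, so the pattern and refinement parameters are assumptions):

```python
import cv2

def chessboard_corners(gray_img, pattern=(9, 6)):
    """Detect and refine the inner corner points of a checkerboard image."""
    found, corners = cv2.findChessboardCorners(gray_img, pattern)
    if not found:
        return None
    corners = cv2.cornerSubPix(   # sub-pixel refinement for a stabler homography
        gray_img, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    return corners.reshape(-1, 2)
```

The matched corner arrays from a calibration view and from the standard position image can then be passed to cv2.findHomography to obtain H, as in the sketch of step 220 above.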
It will be appreciated that, when the LM optimization method is used for corner matching, the corner matching problem can be converted into a nonlinear least squares problem. The optimal homography transformation matrix H is solved by minimizing the reprojection error, i.e., the distance between the projection of each matched corner under the homography transformation and its true position. When at least 4 pairs of corner points are matched, the LM optimization method can solve for the optimal homography transformation matrix H, thereby realizing correction and alignment of the images.
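In symbols, the criterion being minimized can be written as follows (notation ours; the patent states it only in words):

```latex
% Optimal homography H* from N matched corner pairs (x_i, x'_i):
H^{*} = \arg\min_{H} \sum_{i=1}^{N}
  \left\lVert x'_i - \pi\!\left(H \tilde{x}_i\right) \right\rVert_2^{2},
\qquad
\pi\!\left((u, v, w)^{\top}\right) = (u/w,\; v/w)^{\top}
```

where \(\tilde{x}_i\) is the homogeneous form of corner \(x_i\) in the calibration image and \(x'_i\) is its matched corner in the standard position image.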
According to the method provided by the embodiment of the invention, checkerboard calibration provides a more representative feature description, so that image features under different shooting angles are highly distinguishable, which effectively solves the problem of inaccurate head-up display position correction caused by indistinct cockpit feature points and the influence of the driver.
Based on any of the above embodiments, step 120 specifically includes:
performing face detection on the image, and determining a face area in the image;
deleting the face region from the image to obtain a region of interest of the image;
and performing feature extraction on the region of interest based on an autoencoder to obtain the feature vector under the current view angle.
In particular, face detection refers to the automatic localization and recognition of the position of a face in an image by computer vision algorithms. After the image at the current view angle is obtained, the same feature extraction operation as the calibration image at each view angle can be performed on the image at the current view angle to obtain feature vectors which can be retrieved in a preset position feature library.
To avoid interference caused by driver face information, a pre-trained face detector may be used or a priori information may be used to determine the face region in the image at the current view angle. Here, the face region in the image refers to a rectangular region containing a face determined in the face detection process, and this region is usually marked by a face detection algorithm, which indicates the position where the face exists in the image. After determining the face region in the image, the face region may be deleted from the image, for example, the pixel value of the region may be set to a background value or surrounding pixels may be used to fill in the region according to the position information of the face region, so that the face region is deleted from the image, and the remaining other regions are used as the region of interest of the image.
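A minimal sketch of this ROI step follows; the Haar cascade is one possible pre-trained face detector, and since the patent allows any detector or prior information, this choice and the zero-fill are assumptions:

```python
import cv2

def region_of_interest(gray_img):
    """Blank out the detected face region and keep the rest of the image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    roi = gray_img.copy()
    for (x, y, w, h) in cascade.detectMultiScale(roi, 1.1, 5):
        roi[y:y + h, x:x + w] = 0  # delete the face region (fill with background)
    return roi
```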
Since no salient feature points are available in the other in-cockpit regions once the face region is masked, directly extracting feature descriptors is not advisable. Therefore, the embodiment of the invention uses an autoencoder based on a deep learning model to extract features, so that the feature saliency under the view angle is preserved to the maximum extent; by training the autoencoder, one-dimensional features can be extracted that represent the view angle with the face information filtered out.
Fig. 3 is a schematic structural diagram of the autoencoder provided by the present invention. As shown in fig. 3, the autoencoder is an unsupervised neural network model that can be used for feature extraction. It consists of an encoder and a decoder, which reconstruct the training data itself and thereby learn a latent feature representation of the data. The input image is compressed by the encoder into a D-dimensional feature vector, from which the decoder then restores the input image; after sufficient training, the encoder can extract the D-dimensional feature vector to represent the features of the input image's view angle.
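A minimal convolutional autoencoder sketch matching this description, written in PyTorch; the layer sizes, the 128x128 grayscale input and the latent dimension D are all assumptions, since the patent does not specify the architecture:

```python
import torch
import torch.nn as nn

D = 128  # assumed latent dimension

class AutoEncoder(nn.Module):
    """Compress a 1x128x128 masked ROI to a D-dim vector and reconstruct it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x64x64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x32x32
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, D))
        self.decoder = nn.Sequential(
            nn.Linear(D, 32 * 32 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 32, 32)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),       # -> 16x64x64
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid())    # -> 1x128x128

    def forward(self, x):
        z = self.encoder(x)          # D-dim feature vector for the view angle
        return self.decoder(z), z
```

Training minimizes the reconstruction error between the decoder output and the input; at inference time only the encoder is used, and the D-dimensional vector z serves as the retrieval key for the preset position feature library.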
Based on any of the above embodiments, step 130 specifically includes:
determining a target feature vector corresponding to the feature vector under the current view angle based on the similarity between the feature vector under the current view angle and the feature vector corresponding to the calibration image under each view angle in the preset position feature library;
Based on the target feature vector and a preset position feature library, a transformation matrix corresponding to the target feature vector is obtained, and the transformation matrix corresponding to the target feature vector is used as a target transformation matrix corresponding to the feature vector under the current view angle.
Specifically, after extracting the feature vector under the current view angle, a similarity calculation method may be used to calculate the similarity between the feature vector under the current view angle and the feature vector corresponding to the calibration image under each view angle in the preset position feature library, so as to select the feature vector with the highest similarity as the target feature vector. For example, the euclidean distance between the feature vector under the current view angle and all the feature vectors in the preset position feature library can be calculated, and the feature vector corresponding to the view angle with the minimum euclidean distance is selected as the target feature vector corresponding to the feature vector under the current view angle.
The feature vector and the transformation matrix corresponding to the calibration image under each view angle in the preset position feature library are stored by forming key value pairs, so that after the target feature vector corresponding to the feature vector under the current view angle is determined, the corresponding transformation matrix can be obtained based on the target feature vector, and the transformation matrix can be used as the transformation matrix from the current view angle to the standard view angle.
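A retrieval sketch under the same assumptions as the library sketch above (library_keys and library_values are reused from that sketch; the names are illustrative, not from the patent):

```python
import numpy as np

def lookup_transform(query_vec):
    """Return the transformation matrix of the most similar calibrated view."""
    dists = [np.linalg.norm(query_vec - key) for key in library_keys]
    return library_values[int(np.argmin(dists))]
```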
According to the method provided by the embodiment of the invention, the target feature vector similar to the feature vector under the current view angle can be effectively found by calculating the similarity between the feature vector under the current view angle and the feature vector corresponding to the calibration image under each view angle in the preset position feature library; based on the target feature vector and a preset position feature library, a transformation matrix corresponding to the target feature vector can be obtained, so that the sight line data under the current visual angle can be adjusted to the standard visual angle, and the head-up display effect is ensured.
Based on any of the above embodiments, step 140 specifically includes:
adjusting the sight line data corresponding to the image based on the target transformation matrix to obtain adjusted sight line data;
and determining a display position based on the adjusted sight line data, and performing head-up display by applying the display position.
Specifically, after the target transformation matrix is obtained, line-of-sight data corresponding to the image at the current view angle may be adjusted based on the target transformation matrix. Firstly, line of sight data to be adjusted can be determined based on an image under a current view angle, then a target transformation matrix is applied to the line of sight data to be adjusted, and line of sight coordinate points on the image are subjected to coordinate transformation through the target transformation matrix, so that the line of sight data after adjustment is obtained. Here, the adjusted line-of-sight data refers to a result obtained by transforming line-of-sight data corresponding to an image by a target transformation matrix, and represents an adjusted position at which a line-of-sight point at a current view angle is mapped to a target view angle. The adjusted gaze data may be used to determine a display position at the target viewing angle, thereby enabling a heads-up display.
Based on the adjusted sight line data, a display position may be determined; the display position refers to the position where the displayed content should appear. After the display position is determined, head-up display can be realized through graphical interface technology and control of the display device, so that the user can observe the relevant information while keeping a head-up posture.
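A sketch of this final correction step, assuming the sight line data is reduced to a 2-D gaze center point in image coordinates (that reduction, and the function name, are assumptions):

```python
import cv2
import numpy as np

def correct_gaze_point(gaze_xy, H):
    """Map a gaze point from the current view into the standard view via H."""
    pt = np.float32([[gaze_xy]])                 # shape (1, 1, 2) as OpenCV expects
    corrected = cv2.perspectiveTransform(pt, H)  # apply the homography
    return tuple(corrected[0, 0])                # adjusted display coordinates
```

The returned coordinates lie in the standard view and can be handed to the HUD as the adjusted display position.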
Based on any one of the embodiments, the embodiment of the present invention provides a display method, which can be applied to adaptively correcting a head-up display position under the condition that a driver position is fixed but a camera view angle can be manually adjusted. The method comprises the following steps:
step S1: construction of preset position feature library
To construct the preset position feature library, a standard position and the corresponding HUD position information are determined in advance; the standard position can be customized for different users. Fig. 4 is a second schematic flow chart of the construction of the preset position feature library provided by the invention. As shown in fig. 4, first the image under each shooting angle and the standard position image are obtained and the transformation matrix from the image under each shooting angle to the standard position image is determined; then the feature vector extracted from the image under each shooting angle is stored as the key, and the transformation matrix as the value, in a database to obtain the preset position feature library.
S11, obtaining a transformation relation by checkerboard calibration
Because the fixed features in the original cockpit image containing the driver are not obvious and are influenced by the driver's head features, it is difficult to extract distinctive feature points for calculating a transformation matrix to the standard position image. The invention therefore uses the checkerboard to acquire the feature points of the images under different view angles; on one hand this avoids the interference introduced by the driver, and on the other hand the checkerboard provides stable feature points for calculating a stable and accurate transformation relation.
First, a checkerboard calibration drawing is made, attached to the surface of a planar object, and fixed at the position of the driver's face; then a set of cockpit images with the checkerboard is shot from different angles with a movable camera. For each captured cockpit image, the feature points of all the checkerboards in the image (i.e., the corner points where black and white squares meet) are detected. Since all the corner points of the checkerboard calibration drawing are very distinct in cockpit images shot from different angles, they are easily matched to the corresponding checkerboard corner points in the standard position image; if the number N of matched corners satisfies N ≥ 4, the homography transformation matrix H can be obtained by the LM optimization method.
S12, ROI feature extraction
Since no checkerboard is available as a reference in actual application, features must be found with the checkerboard area (i.e., the face area) masked; but no salient feature points are available in the remaining in-cockpit regions, so directly extracting feature descriptors is not advisable. Therefore, the embodiment of the invention uses an autoencoder based on a deep learning model to extract features, preserving the feature saliency under the view angle to the maximum extent. The specific steps are as follows: first, the face region is determined on the original gray image (a pre-trained face detector or prior information can be used), the face region information is deleted, and only the other regions are retained.
Then, by training the convolutional autoencoder, one-dimensional features are extracted that represent the view angle with the face information filtered out. The autoencoder structure is shown in fig. 3: the input image is compressed by the encoder into a D-dimensional feature vector, from which the decoder then restores the input image; after sufficient training, the encoder can extract the D-dimensional feature vector to represent the features of the input image's view angle. The key-value pairs formed by this feature and the homography transformation matrix H obtained in the previous step are stored in the preset position feature library. Data under as many different angles as possible are collected, so that the preset position feature library can cope with all possible situations.
Step S2: vision correction using a library of preset location features
Fig. 5 is a schematic flow chart of vision data correction provided by the present invention, as shown in fig. 5, keys (i.e. feature vectors) can be obtained in the same processing manner in the application process, so as to obtain a transformation matrix, and the transformation matrix can be used to directly act on vision data for correction.
S21, obtaining a transformation matrix
First, the same feature extraction operation (i.e., ROI feature extraction) as in step S1 is performed on an image at the current view angle to obtain feature vectors retrievable in a preset position feature library. And solving the Euclidean distance between the feature vector of the current view and all the feature vectors in the preset position feature library, and selecting an image corresponding to the view with the minimum Euclidean distance as an estimated value of the current view, thereby indirectly acquiring a homography transformation matrix H from the current view to the standard view.
S22, correcting the sight line
The homography transformation matrix H obtained in the previous step is applied directly to the sight line data to be corrected, completing the sight line correction; head-up display can then be performed based on the corrected sight line data.
By using checkerboard calibration combined with technical means such as image feature descriptor extraction, image matching and the autoencoder, the method provided by the embodiment of the invention effectively solves the problem of inaccurate HUD position correction caused by indistinct cockpit feature points and the driver's influence in the intelligent cockpit scene, and provides a complete implementation for adaptively correcting the HUD position when the driver position is fixed but the camera view angle can be manually adjusted, which is of great significance for realizing a complete and reliable intelligent cockpit system.
Based on any of the above embodiments, fig. 6 is a schematic structural diagram of a display device provided by the present invention. As shown in fig. 6, the device includes:
an image acquisition unit 610, configured to acquire an image under a current view angle of the camera;
the feature extraction unit 620 is configured to determine a region of interest of the image, and perform feature extraction on the region of interest to obtain a feature vector under the current view angle;
the matrix obtaining unit 630 is configured to obtain a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, where the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle;
and an adjustment display unit 640, configured to adjust line of sight data corresponding to the image based on the target transformation matrix, and apply the adjusted line of sight data to perform head-up display.
According to the display device provided by the embodiment of the invention, the target transformation matrix corresponding to the feature vector under the current view angle is obtained from the preset position feature library, so that the sight line data corresponding to the image under the current view angle can be adjusted based on the target transformation matrix and the adjusted sight line data applied for head-up display. This eliminates the influence of camera view angle changes on the head-up display, keeps the head-up display position consistent with the driver's line of sight, and achieves adaptive correction of the head-up display position when the driver position is fixed but the camera view angle is adjustable. In addition, performing feature extraction on the region of interest of the image and of the calibration image under each view angle effectively overcomes the inaccurate sight line data correction caused by weak cockpit feature points and interference from the driver's face information, thereby improving the accuracy with which the head-up display position is determined.
Based on any one of the above embodiments, the apparatus further includes a feature library construction unit, where the feature library construction unit specifically includes:
the acquisition subunit is used for acquiring a standard position image and a calibration image under each view angle;
the determining subunit is used for determining a transformation matrix corresponding to the transformation of the calibration image under each view angle to the standard position image;
the extraction subunit is used for determining the region of interest of the calibration image under each view angle, and performing feature extraction on the region of interest of the calibration image under each view angle to obtain the feature vector corresponding to the calibration image under each view angle;
the construction subunit is used for constructing the preset position feature library based on the feature vector corresponding to the calibration image under each view angle and the transformation matrix corresponding to the transformation of the calibration image under each view angle to the standard position image.
Based on any of the above embodiments, the determining subunit is specifically configured to:
acquiring each characteristic point in the calibration image at any view angle;
matching each characteristic point in the calibration image under any view angle with each target characteristic point in the standard position image;
under the condition that the number of the successfully matched characteristic points reaches a preset threshold value, determining a transformation matrix corresponding to the transformation of the calibration image under any view angle to the standard position image based on each characteristic point in the calibration image under any view angle and each target characteristic point in the standard position image.
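One way this matching-and-estimation step could look in Python with OpenCV, using ORB descriptors and RANSAC as stand-ins for the unspecified feature descriptor and robust estimator; the threshold of 20 matches is an assumed value for the preset threshold:

```python
import cv2
import numpy as np

MIN_MATCHES = 20  # assumed value for the "preset threshold"

def estimate_homography(calib_img: np.ndarray, standard_img: np.ndarray):
    """Estimate H mapping a calibration view onto the standard-position image."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(calib_img, None)
    kp2, des2 = orb.detectAndCompute(standard_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < MIN_MATCHES:
        return None  # not enough successfully matched feature points
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```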
Based on any of the above embodiments, the standard position image and the calibration image under each view angle are both images including a checkerboard.
Based on any of the above embodiments, the feature extraction unit 620 is specifically configured to:
performing face detection on the image, and determining a face area in the image;
deleting the face region from the image to obtain a region of interest of the image;
and performing feature extraction on the region of interest based on an autoencoder to obtain the feature vector under the current view angle.
Based on any of the above embodiments, the matrix acquisition unit 630 is specifically configured to:
determining a target feature vector corresponding to the feature vector under the current view angle based on the similarity between the feature vector under the current view angle and the feature vector corresponding to the calibration image under each view angle in the preset position feature library;
based on the target feature vector and a preset position feature library, a transformation matrix corresponding to the target feature vector is obtained, and the transformation matrix corresponding to the target feature vector is used as a target transformation matrix corresponding to the feature vector under the current view angle.
Based on any of the above embodiments, the adjustment display unit 640 is specifically configured to:
adjusting the sight line data corresponding to the image based on the target transformation matrix to obtain adjusted sight line data;
And determining a display position based on the adjusted sight line data, and performing head-up display by applying the display position.
Fig. 7 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 7, the electronic device may include: a processor 710, a communication interface 720, a memory 730 and a communication bus 740, wherein the processor 710, the communication interface 720 and the memory 730 communicate with each other via the communication bus 740. The processor 710 may invoke logic instructions in the memory 730 to perform a display method comprising: acquiring an image under the current view angle of a camera; determining a region of interest of the image, and performing feature extraction on the region of interest to obtain a feature vector under the current view angle; acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, wherein the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle; and adjusting the sight line data corresponding to the image based on the target transformation matrix, and performing head-up display by applying the adjusted sight line data.
Further, the logic instructions in the memory 730 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, is capable of performing the display method provided by the methods described above, the method comprising: acquiring an image under the current view angle of a camera; determining an interested region of an image, and extracting features of the interested region to obtain a feature vector under the current view angle; acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, wherein the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle; and adjusting the sight line data corresponding to the image based on the target transformation matrix, and performing head-up display by applying the adjusted sight line data.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the display method provided by the above methods, the method comprising: acquiring an image under the current view angle of a camera; determining an interested region of an image, and extracting features of the interested region to obtain a feature vector under the current view angle; acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, wherein the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle; and adjusting the sight line data corresponding to the image based on the target transformation matrix, and performing head-up display by applying the adjusted sight line data.
The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disk, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to execute the method described in each embodiment or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A display method, comprising:
acquiring an image under the current view angle of a camera;
determining a region of interest of the image, and performing feature extraction on the region of interest to obtain a feature vector under the current view angle;
acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, wherein the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle;
and adjusting the sight line data corresponding to the image based on the target transformation matrix, and applying the adjusted sight line data to perform head-up display.
2. The display method according to claim 1, wherein the step of constructing the preset location feature library includes:
obtaining a standard position image and a calibration image under each view angle;
determining a transformation matrix corresponding to the transformation of the calibration image under each view angle to the standard position image;
determining the region of interest of the calibration image under each view angle, and performing feature extraction on the region of interest of the calibration image under each view angle to obtain the feature vector corresponding to the calibration image under each view angle;
and constructing the preset position feature library based on the feature vector corresponding to the calibration image under each view angle and a transformation matrix corresponding to the calibration image under each view angle transformed to the standard position image.
3. The display method according to claim 2, wherein the determining a transformation matrix corresponding to the transformation of the calibration image under each view angle to the standard position image includes:
acquiring each characteristic point in the calibration image at any view angle;
matching each characteristic point in the calibration image at any view angle with each target characteristic point in the standard position image;
And under the condition that the number of the successfully matched characteristic points reaches a preset threshold value, determining a transformation matrix corresponding to the transformation of the calibration image under any view angle to the standard position image based on each characteristic point in the calibration image under any view angle and each target characteristic point in the standard position image.
4. The display method according to claim 2, wherein the standard-position image and the calibration image at each viewing angle are images including a checkerboard.
5. The display method according to any one of claims 1 to 4, wherein the determining the region of interest of the image and performing feature extraction on the region of interest to obtain the feature vector at the current viewing angle includes:
performing face detection on the image and determining a face area in the image;
deleting the face region from the image to obtain a region of interest of the image;
and performing feature extraction on the region of interest based on an autoencoder to obtain the feature vector under the current view angle.
6. The display method according to any one of claims 1 to 4, wherein the obtaining, based on a preset location feature library, a target transformation matrix corresponding to a feature vector at the current view angle includes:
Determining a target feature vector corresponding to the feature vector under the current view angle based on the similarity between the feature vector under the current view angle and the feature vector corresponding to the calibration image under each view angle in the preset position feature library;
and based on the target feature vector and the preset position feature library, acquiring a transformation matrix corresponding to the target feature vector, and taking the transformation matrix corresponding to the target feature vector as a target transformation matrix corresponding to the feature vector under the current view angle.
7. The display method according to any one of claims 1 to 4, wherein adjusting line-of-sight data corresponding to the image based on the target transformation matrix and applying the adjusted line-of-sight data to perform head-up display includes:
adjusting the sight line data corresponding to the image based on the target transformation matrix to obtain adjusted sight line data;
and determining a display position based on the adjusted sight line data, and performing head-up display by applying the display position.
8. A display device, comprising:
the image acquisition unit is used for acquiring an image under the current view angle of the camera;
the feature extraction unit is used for determining an interested region of the image, and extracting features of the interested region to obtain a feature vector under the current view angle;
The matrix acquisition unit is used for acquiring a target transformation matrix corresponding to the feature vector under the current view angle based on a preset position feature library, and the preset position feature library is constructed based on the feature vector and the transformation matrix corresponding to the calibration image under each view angle;
and the adjustment display unit is used for adjusting the sight line data corresponding to the image based on the target transformation matrix and performing head-up display by applying the adjusted sight line data.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the display method of any one of claims 1 to 7.
10. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the display method according to any one of claims 1 to 7.
CN202311812674.7A 2023-12-25 2023-12-25 Display method, display device, electronic equipment and storage medium Pending CN117765080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311812674.7A CN117765080A (en) 2023-12-25 2023-12-25 Display method, display device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117765080A (en) 2024-03-26

Family

ID=90321772


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination