CN115202476A - Display image adjusting method and device, electronic equipment and storage medium


Publication number
CN115202476A
Authority
CN
China
Prior art keywords
virtual image
image screen
determining
eye
screen
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202210764474.8A
Other languages
Chinese (zh)
Other versions
CN115202476B (en)
Inventor
孙孝文
李志勇
吕涛
Current Assignee (the listed assignees may be inaccurate)
Zejing Xi'an Automotive Electronics Co., Ltd.
Original Assignee
Zejing Xi'an Automotive Electronics Co., Ltd.
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Zejing Xi'an Automotive Electronics Co., Ltd.
Priority to CN202210764474.8A
Publication of CN115202476A
Application granted
Publication of CN115202476B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01: Head-up displays
    • G02B 27/0179: Display position adjusting means not related to the information to be displayed
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01: Head-up displays
    • G02B 2027/0187: Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Instrument Panels (AREA)

Abstract

This application relates to the technical field of intelligent automobiles and provides a display image adjusting method and device, an electronic device, and a storage medium. The method is applicable to an augmented reality head-up display (AR-HUD) and includes the following steps: acquiring the driver's current eye position; determining, according to the current eye position, a first position for the virtual image screen to be adjusted and a second position for the virtual image displayed in the virtual image screen; adjusting the virtual image screen to the first position; and adjusting the virtual image displayed in the virtual image screen to the second position, so that the adjusted virtual image, the real object corresponding to the virtual image, and the driver's eyes lie on the same straight line. This method improves the display effect of images presented by the AR-HUD, ensures that the driver sees the complete displayed image, and makes the displayed image fuse accurately with the real driving scene from the driver's viewing angle.

Description

Display image adjusting method and device, electronic equipment and storage medium
Technical Field
The embodiments of this application belong to the technical field of intelligent automobiles and in particular relate to a display image adjusting method and device, an electronic device, and a storage medium.
Background
With the development of intelligent automobiles, users place ever higher demands on driving safety. An Augmented Reality Head-Up Display (AR-HUD) can fuse driving-safety information, such as navigation trace lines, obstacle warning icons, and left/right blind-area reminders, with the real driving scene and display it as an image, enhancing the user's perception of the driving environment and thereby improving driving safety.
Most current AR-HUDs use a fixed eye box and cannot adjust the displayed image to the user's actual situation. The display effect is therefore poor and may even compromise the user's driving safety.
Disclosure of Invention
In view of this, embodiments of the present application provide a display image adjusting method and device, an electronic device, and a storage medium, which can improve the display effect of images presented by an AR-HUD and ensure that the driver sees the complete displayed image, so that the displayed image fuses accurately with the real driving scene from the driver's viewing angle.
A first aspect of an embodiment of the present application provides a method for adjusting a display image, which may be applied to an augmented reality head-up display AR-HUD, the method including:
acquiring the current eye position of a driver;
determining, according to the current eye position, a first position of a virtual image screen to be adjusted and a second position of a virtual image displayed in the virtual image screen;
adjusting the virtual image screen to the first position; and,
adjusting the virtual image displayed in the virtual image screen to the second position, wherein the adjusted virtual image, the real object corresponding to the virtual image, and the driver's eyes lie on the same straight line.
A second aspect of the embodiments of the present application provides a display image adjusting device applicable to an augmented reality head-up display AR-HUD. The device includes an acquisition module, a determining module, a virtual image screen adjusting module, and a virtual image adjusting module, wherein:
the acquisition module is used for acquiring the current human eye position of the driver;
the determining module is used for determining a first position of a virtual image screen to be adjusted and a second position of a virtual image displayed in the virtual image screen according to the current eye position;
the virtual image screen adjusting module is configured to adjust the virtual image screen to the first position; and,
the virtual image adjusting module is configured to adjust the virtual image displayed in the virtual image screen to the second position, wherein the adjusted virtual image, the real object corresponding to the virtual image, and the driver's eyes lie on the same straight line.
A third aspect of the embodiments of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the method as in the first aspect above.
Compared with the prior art, the embodiment of the application has the following advantages:
the embodiment of the application provides a method for adjusting display image, when adopting this method to adjust display image, can at first acquire the current people's eye position of driver, then according to current people's eye position, confirm the first position of the virtual image screen that waits to adjust and the second position of the virtual image that shows in the virtual image screen, adjust the virtual image screen to the first position again, adjust the virtual image that shows in the virtual image screen to the second position at last, and, virtual image after the adjustment, the real object and the people's eye that the virtual image corresponds are located same straight line. The specific content of the display image that the driver can see has been decided to the current people's eye position of driver, and the concrete position of virtual image that shows is adjusted in real time in virtual image screen and the virtual image screen according to driver's eyes position in this application, can improve the display effect based on the image that AR-HUD shows, ensures that the driver can see complete display image. For example, for drivers with different heights or different sitting postures, the virtual image screen and the virtual image displayed in the virtual image screen can be adjusted to the position corresponding to the height or the sitting posture of the driver in a targeted manner by adopting the method, so that the display image and the driving live-action scene are accurately fused in the visual angle of the driver, the situation that partial drivers cannot see the complete image due to the difference of the heights or the sitting postures is avoided, and the driving safety is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic diagram of an adjustment method for a display image according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of another adjustment method for a display image according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an implementation manner of S201 in an adjustment method of a display image according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating the principle of S202 in an adjustment method for a display image according to an embodiment of the present application;
fig. 5 is a schematic diagram of an implementation manner of S202 in an adjustment method of a display image according to an embodiment of the present application;
fig. 6 is a schematic diagram of an implementation manner of S2023 in an adjustment method of a display image according to an embodiment of the present application;
FIG. 7 is a diagram illustrating a further method for adjusting a display image according to an embodiment of the present application;
fig. 8 is a schematic diagram of a system to which an adjustment method for a display image according to an embodiment of the present disclosure is applied;
FIG. 9 is a diagram illustrating a further method for adjusting a display image according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an apparatus for adjusting a display image according to an embodiment of the present disclosure;
fig. 11 is a schematic view of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather mean "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless otherwise specifically stated. The technical solution of the present application will be described below by way of specific examples.
Fig. 1 is a schematic diagram of an adjusting method for a display image according to an embodiment of the present application, as shown in fig. 1, the method includes:
and S101, acquiring the current eye position of the driver.
The method in the embodiment can be applied to the AR-HUD, and the display image of the AR-HUD can be adjusted.
The AR-HUD is a novel vehicle head-up display that combines augmented reality technology with a head-up display and can fuse the displayed image with the real driving scene. It can display current vehicle conditions and driving information, show information on nearby lifestyle services, and remind or warn the driver about pedestrians and other hazards.
In this embodiment, the current eye position of the driver can be acquired in real time during driving.
Specifically, to obtain the driver's current eye position, the driver's face can be photographed in real time by an image acquisition module provided by the AR-HUD, such as a camera, to obtain a plurality of images; the captured images are then recognized by an eye recognition module provided by the AR-HUD to obtain the driver's current eye position.
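As a hypothetical sketch of the final conversion step, assuming the eye recognition module yields a pixel position and a linear calibration relates pixels to millimetres about the eye-box centre (neither is specified in the patent):

```python
def eye_ordinate_mm(eye_pixel_y: float, center_pixel_y: float, mm_per_pixel: float) -> float:
    """Convert a detected eye's vertical pixel position into an ordinate in
    millimetres relative to the eye-box centre (positive = above centre).

    The linear pixel-to-mm calibration is an illustrative assumption; the
    patent does not specify how detected eye positions are expressed."""
    return (center_pixel_y - eye_pixel_y) * mm_per_pixel

# Eye detected 40 px above the calibrated centre line, at 0.3 mm per pixel:
print(eye_ordinate_mm(200, 240, 0.3))  # -> 12.0
```

A real system would obtain `eye_pixel_y` from the eye recognition module each frame; the calibration constants would come from the camera's mounting geometry.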
The obtained current eye position of the driver may be represented by coordinates or in other forms, which is not limited in this application.
Because drivers differ in height, their sitting postures in the cab also differ, and the acquired eye positions differ accordingly. Moreover, a driver may adjust his or her sitting posture while driving, so the current eye position acquired in real time may differ from moment to moment.
And S102, determining a first position of a virtual image screen to be adjusted and a second position of a virtual image displayed in the virtual image screen according to the current eye position.
The AR-HUD may include a host, several sets of mirrors, and a virtual image screen. The image formed by the host is reflected by the sets of mirrors onto the car's windshield, so that a real-time virtual image is presented in the virtual image screen in front of the windshield. The virtual image screen is not a physical display screen but an area in which the virtual image can be projected and displayed.
The acquired current eye position of the driver may differ at different moments. If the position of the virtual image screen and the position of the virtual image displayed in it were fixed, then, because drivers differ in height and in sitting posture in the cab, the image display effect seen by the driver would be poor: some drivers might not see the complete image, and, from the driver's viewing angle, the displayed image might not fuse accurately with the real driving scene.
Whether the driver can see the complete display image is related to the position of the virtual image screen. In this embodiment, in order to improve the display effect of the image displayed based on the AR-HUD and ensure that the driver sees the complete display image, the position of the virtual image screen may be adjusted in real time.
From the driver's viewing angle, whether the displayed image fuses accurately with the real driving scene is related to the position of the virtual image screen and the position of the virtual image displayed in it. In this embodiment, in order that the displayed image fuses accurately with the real driving scene from the driver's viewing angle, the position of the virtual image displayed in the virtual image screen may also be adjusted in real time, provided that the driver can see the complete displayed image.
Specifically, the first position of the virtual image screen to be adjusted and the second position of the virtual image displayed in the virtual image screen may be determined in real time according to the current eye position of the driver acquired in S101.
The acquired current eye position of the driver may differ at different moments. To improve the display effect of images presented by the AR-HUD and to ensure that the driver sees the complete displayed image, the first position of the virtual image screen to be adjusted may be related to the current eye position, and may be determined from it.
On the other hand, in order to allow the display image to be accurately merged with the driving real scene in the viewing angle of the driver, the second position of the virtual image displayed in the virtual image screen to be adjusted may be related to the current eye position and the first position of the virtual image screen. After the first position of the virtual image screen to be adjusted is determined in real time according to the obtained current eye position of the driver, the second position of the virtual image displayed in the virtual image screen to be adjusted can be determined according to the current eye position and the first position of the virtual image screen.
In the embodiment of the present application, when the virtual image screen is at the first position and the virtual image displayed in the virtual image screen is at the second position, the driver can completely see the image (virtual image) displayed in the virtual image screen.
And S103, adjusting the virtual image screen to a first position.
The first position of the virtual image screen to be adjusted may represent a position where the virtual image screen after adjustment is located. Therefore, in this embodiment, the virtual image screen may be adjusted to the first position.
And S104, adjusting the virtual image displayed in the virtual image screen to a second position.
The second position of the virtual image displayed in the virtual image screen to be adjusted may represent a position of the adjusted virtual image in the virtual image screen. Therefore, in this embodiment, the virtual image displayed in the virtual image screen may be adjusted to the second position.
In the process of displaying images, when the virtual image, the real object corresponding to the virtual image, and the driver's eyes lie on the same straight line, the displayed image and the real driving scene can be fused accurately from the driver's viewing angle.
Therefore, in order to enable the display image to be accurately fused with the driving real scene in the visual angle of the driver, in this embodiment, when the position of the virtual image screen and the position of the virtual image displayed in the virtual image screen are adjusted according to the current position of the human eyes, the adjusted virtual image, the real object corresponding to the virtual image and the human eyes can be located on the same straight line.
In this embodiment, the first position of the virtual image screen to be adjusted is determined according to the acquired current eye position of the driver, and the virtual image screen is adjusted to that first position. This improves the display effect of images presented by the AR-HUD and ensures that drivers of different heights and sitting postures see the complete displayed image. After the virtual image screen is adjusted to the first position, the virtual image displayed in it is adjusted to the second position, which is also determined from the acquired current eye position; the adjusted virtual image, the real object corresponding to the virtual image, and the driver's eyes then lie on the same straight line, so that the displayed image fuses accurately with the real driving scene from the driver's viewing angle.
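As one possible reading of steps S101 to S104, the adjustment loop can be sketched as follows. The function names, the linear mapping, and the actuator comments are illustrative assumptions, not the patent's actual correspondence tables or hardware interface:

```python
from dataclasses import dataclass

@dataclass
class Adjustment:
    screen_gear: int      # "first position": target height gear of the virtual image screen
    image_offset: float   # "second position": vertical offset of the virtual image in the screen

def determine_adjustment(eye_y_mm: float) -> Adjustment:
    """Map the driver's current eye ordinate (mm) to the two target positions (S102).

    A toy linear mapping stands in for the patent's correspondence tables."""
    screen_gear = round(eye_y_mm / 5.0)   # coarse 5 mm gear granularity, as in Table 1
    image_offset = -0.1 * eye_y_mm        # nudge the image to keep eye, image, object collinear
    return Adjustment(screen_gear, image_offset)

def adjust_display(eye_y_mm: float) -> Adjustment:
    adjustment = determine_adjustment(eye_y_mm)
    # S103/S104: a real system would drive the mirror actuators and re-render
    # the virtual image here; this sketch just returns the computed targets.
    return adjustment

print(adjust_display(12.0))
```

In operation, each iteration would repeat this with a freshly acquired eye position (S101), so the display tracks the driver's posture in real time.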
Fig. 2 is a schematic diagram of another adjustment method for a display image according to an embodiment of the present application, and as shown in fig. 2, the method includes:
s201, establishing a first corresponding relation between a plurality of human eye position coordinate intervals and a plurality of virtual image screen height gears.
When the first position of the virtual image screen to be adjusted is determined according to the current eye position of the driver, the adjustment can be performed according to the relationship between the eye position and the position of the virtual image screen. Therefore, the first corresponding relation between the plurality of human eye position coordinate intervals and the plurality of virtual image screen height gears can be established, so that the position of the virtual image screen can be accurately adjusted, the display effect of the image displayed based on the AR-HUD can be improved, and the driver can be ensured to see the complete display image.
Fig. 3 is a schematic diagram of an implementation manner of S201 in an adjustment method of a display image according to an embodiment of the present application. Specifically, establishing a first corresponding relationship between a plurality of eye position coordinate intervals and a plurality of virtual image screen height gears may include the following steps S2011 to S2014:
s2011, a human eye position coordinate range and a virtual image screen height range may be determined.
The eye-box of the AR-HUD is the area where the eyes of the driver can move. If the eye position is within this area, the complete image can be seen, whereas if the eye position is not within this area, the complete image cannot be seen. Therefore, the human eye position coordinate range can be determined from the range of the eye box of the AR-HUD.
The position of the virtual image displayed in the virtual image screen of the AR-HUD can be adjusted, but only the position of the virtual image in the longitudinal direction may be adjusted, and the position of the virtual image in the lateral direction may not be adjusted. Thus, the determined eye position coordinate range may be a vertical coordinate range of the eye position.
Specifically, if the eye box of the AR-HUD has a range of 130 mm x 50 mm, where 130 mm is the lateral extent of the eye box and 50 mm is the longitudinal extent, the eye position coordinate range can be determined to be (-25 mm, 25 mm).
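The arithmetic above can be checked directly, under the stated assumption that the coordinate origin lies at the centre of the eye box:

```python
def ordinate_range(longitudinal_extent_mm: float) -> tuple:
    """Symmetric eye-position ordinate range for a given eye-box height.

    Assumes the coordinate origin sits at the eye-box centre, as the
    (-25 mm, 25 mm) example in the text implies."""
    half = longitudinal_extent_mm / 2.0
    return (-half, half)

print(ordinate_range(50.0))  # -> (-25.0, 25.0)
```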
Similarly, only the longitudinal (height) position of the virtual image screen can be adjusted, not its lateral position. Thus, the virtual image screen height range of the AR-HUD can be determined.
S2012, the human eye position coordinate range may be divided into a plurality of human eye position coordinate intervals.
Specifically, the ordinate range of the human eye position may be divided into ordinate intervals of a plurality of human eye positions. The lengths of the intervals may be equal or different, and are not limited in this application.
The coordinate range of the human eye position is divided into a plurality of coordinate intervals, so that the position of the virtual image screen and the position of the virtual image displayed in the virtual image screen can be accurately and quantitatively adjusted according to the current human eye position of the driver.
S2013, the virtual image screen height range may be divided into a plurality of virtual image screen height gears.
When dividing the virtual image screen height range into gears, the number of height gears can be determined from the number of coordinate intervals into which the eye position coordinate range was divided; the two numbers can be the same.
And S2014, performing one-to-one matching on each human eye position coordinate interval and each virtual image screen height gear in sequence to obtain a first corresponding relation.
In the first corresponding relation, the human eye position coordinate interval and the virtual image screen height gear are in one-to-one correspondence. When the first corresponding relation is determined, one-to-one matching can be sequentially carried out on each human eye position coordinate interval and each virtual image screen height gear.
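Steps S2012 to S2014 can be sketched as follows. Equal-width intervals and the symmetric gear numbering are illustrative assumptions: the patent allows unequal intervals, and Table 1 numbers its gears slightly differently (with a dedicated gear 0 for the exact centre):

```python
def build_first_correspondence(y_min: float, y_max: float, n_gears: int):
    """Split the eye-position ordinate range into n_gears equal intervals and
    pair each interval with a screen-height gear, one-to-one (S2012 to S2014).

    Returns a list of ((interval_low, interval_high), gear) pairs."""
    step = (y_max - y_min) / n_gears
    correspondence = []
    for i in range(n_gears):
        interval = (y_min + i * step, y_min + (i + 1) * step)
        gear = i - n_gears // 2  # centre the gear numbering around 0
        correspondence.append((interval, gear))
    return correspondence

for interval, gear in build_first_correspondence(-15.0, 15.0, 6):
    print(interval, gear)
```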
In the first corresponding relation, when the current position of the eyes of the driver is in the coordinate interval of the position of the eyes in the first corresponding relation, the corresponding virtual image screen height gear can ensure that the displayed image is complete in the visual angle of the driver.
According to the obtained first correspondence, a preset correspondence table between eye positions and virtual image screen height gears can be generated, as shown in Table 1:

Eye position coordinate interval (mm)    Virtual image screen height gear
0                                        0
0~5                                      1
5~10                                     2
10~15                                    3
-5~0                                     -1
-10~-5                                   -2
-15~-10                                  -3

Table 1. Correspondence between preset eye positions and virtual image screen height gears
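A lookup routine over Table 1 might look like the following; how boundary values are assigned to neighbouring intervals is an assumption, since the table does not say:

```python
def screen_height_gear(eye_y_mm: float) -> int:
    """Return the virtual image screen height gear for an eye ordinate (mm),
    following Table 1. Each interval is taken as half-open (lo, hi]; this
    boundary convention is an assumption not fixed by the table."""
    if eye_y_mm == 0:
        return 0
    table = [((0, 5), 1), ((5, 10), 2), ((10, 15), 3),
             ((-5, 0), -1), ((-10, -5), -2), ((-15, -10), -3)]
    for (lo, hi), gear in table:
        if lo < eye_y_mm <= hi:
            return gear
    raise ValueError("eye position outside the eye box")

print(screen_height_gear(12.0))  # -> 3
```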
S202, determining a second corresponding relation between each virtual image screen height gear and a plurality of mapping eyepoint coordinates according to the first corresponding relation.
When human eyes observe an object, information such as the object's size and its distance from the eyes is determined; this information can be called the projection matrix of the eyepoint.
Correspondingly, while the AR-HUD displays an image, the virtual camera in the AR-HUD can, by adjusting its direction, size, and position, determine and display object information consistent with what the human eyes observe. The position, direction, and size information of the virtual camera is called the mapping matrix of the mapping eyepoint.
In this embodiment, when the second position of the virtual image displayed in the virtual image screen to be adjusted is determined according to the current eye position of the driver, the adjustment may be performed according to the relationship between the position of the virtual image screen and the position of the mapping eyepoint. The position of the virtual image screen is determined by the first corresponding relation of a plurality of human eye position coordinate intervals and a plurality of virtual image screen height gears.
Therefore, the second corresponding relation between each virtual image screen height gear and a plurality of mapping eyepoint coordinates can be determined according to the first corresponding relation, so that the position of the virtual image displayed in the virtual image screen can be accurately adjusted, the display effect of the image displayed based on the AR-HUD can be improved, and the displayed image and the driving real scene can be accurately fused in the visual angle of the driver.
Fig. 4 is a schematic diagram illustrating the principle of S202 in an adjustment method for a display image according to an embodiment of the present application.
Where 401 is a human eye position, 402 is a front windshield, 403 is a virtual image screen, 404 is a screen positioning mark, 405 is a ground positioning mark, and 406 is a ground.
Fig. 5 is a schematic diagram of an implementation manner of S202 in a method for adjusting a display image according to an embodiment of the present application. As explained in conjunction with fig. 4, determining the second corresponding relationship between each virtual image screen height gear and the plurality of mapped eyepoint coordinates according to the first corresponding relationship may include the following steps S2021 to S2023:
S2021, determining initial eye position coordinates, determining an initial virtual image screen height gear, and determining initial eyepoint coordinates.
The determined initial eye position 401 coordinates may be an artificially specified coordinate, the determined initial virtual image screen 403 height gear may be an artificially specified gear, and the determined initial eyepoint coordinates may be an artificially specified coordinate.
S2022, determining an initial human eye position coordinate interval, a screen positioning identifier of the virtual image screen and a ground positioning identifier, where the ground positioning identifier is the intersection of the ground with the extension of the line connecting any point in the initial human eye position coordinate interval to the screen positioning identifier.
Specifically, the initial eye position 401 coordinate interval corresponding to the initial virtual image screen 403 height gear may be determined according to the first corresponding relationship.
A screen location indicator 404 may be artificially determined in the virtual image screen 403, and the screen location indicator 404 may be located anywhere in the virtual image screen 403.
Based on the principle that, when the virtual image, the real object corresponding to the virtual image and the human eye lie on the same straight line, the displayed image and the driving real scene can be accurately fused in the driver's visual angle, the intersection of the ground 406 with the extension of the line connecting any point in the coordinate interval of the initial eye position 401 to the screen positioning identifier 404 can be determined as the ground positioning identifier 405.
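Computing the ground positioning identifier is a line-ground intersection. The sketch below is a minimal illustration under an assumed coordinate convention (ordinates in metres, z pointing up, a flat ground at z = 0); the function name and the flat-ground assumption are hypothetical and not taken from the embodiment.

```python
def ground_marker_position(eye, marker, ground_z=0.0):
    """Extend the line from the eye through the screen positioning
    marker (404) until it meets the ground plane z = ground_z,
    giving the ground positioning marker (405)."""
    ex, ey, ez = eye
    mx, my, mz = marker
    dz = mz - ez
    if abs(dz) < 1e-9:
        raise ValueError("line of sight is parallel to the ground")
    t = (ground_z - ez) / dz          # line parameter where the ground is hit
    if t <= 0:
        raise ValueError("intersection lies behind the eye")
    return (ex + t * (mx - ex), ey + t * (my - ey), ground_z)
```

For example, an eye at height 1.0 m looking through a marker 2 m ahead at height 0.5 m projects the ground positioning marker 4 m ahead on the ground.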
And S2023, determining mapping eye point coordinates corresponding to the height gears of each virtual image screen by adjusting the height gears of the virtual image screens according to the first corresponding relation, and obtaining a second corresponding relation.
In the driver's visual angle, when the displayed image can be accurately fused with the driving real scene, different drivers, or different sitting positions of the same driver, may correspond to different height gears of the virtual image screen 403 and to different mapping eyepoint coordinates. Therefore, a second corresponding relation exists between the height gears of the virtual image screen 403 and the mapping eyepoint coordinates.
Therefore, when determining the second corresponding relation, the height gears of the virtual image screen 403 are adjusted, and the mapping eyepoint coordinates corresponding to each height gear under the condition that the displayed image and the driving real scene are accurately fused in the driver's visual angle are determined, to obtain the second corresponding relation.
Fig. 6 is a schematic diagram of an implementation manner of S2023 in an adjustment method of a display image according to an embodiment of the present application. Specifically, determining the mapping eyepoint coordinates corresponding to each virtual image screen height gear by adjusting the virtual image screen height gears according to the first corresponding relation, to obtain the second corresponding relation, may include the following steps S231 to S234:
S231, for any virtual image screen height gear, determining the human eye position coordinate target interval corresponding to that height gear according to the first corresponding relation.
Specifically, according to the one-to-one correspondence relationship between the height gear of the virtual image screen 403 and the coordinate target interval of the human eye position 401 in the first correspondence relationship, the coordinate target interval of the human eye position 401 corresponding to any height gear of the virtual image screen 403 may be determined.
S232, determining whether the real object position point corresponding to the screen positioning mark is overlapped with the ground positioning mark when the current eye position of the driver is within the eye position coordinate target interval range.
The real object position point corresponding to the screen positioning mark 404 may be an intersection point between an extension line of a connection line between any point in the target interval of the coordinates of the human eye position 401 and the screen positioning mark 404 and the ground 406. When the current eye position 401 of the driver is within the eye position 401 coordinate target interval, it can be determined whether the intersection point coincides with the ground positioning mark 405.
Based on the principle that, when the virtual image, the corresponding real object and the human eye lie on the same straight line, the displayed image and the driving real scene can be accurately fused in the driver's visual angle, judging whether the real object position point corresponding to the screen positioning identifier 404 coincides with the ground positioning identifier 405 while the driver's current eye position 401 is within the eye position 401 coordinate target interval determines whether, in the driver's visual angle, the screen positioning identifier 404 and the ground positioning identifier 405 coincide. This in turn determines the mapping eyepoint coordinates corresponding to the height gear of the virtual image screen 403 when the displayed image and the driving real scene are accurately fused in the driver's visual angle.
And S233, if the real object position point corresponding to the screen positioning identifier is overlapped with the ground positioning identifier, taking the current mapping eye point coordinate as the mapping eye point coordinate corresponding to the virtual image screen height gear.
The real object position point corresponding to the screen positioning identifier 404 may be an intersection point of an extension line of a connection line between any one point in the target interval of the coordinates of the human eye position 401 and the screen positioning identifier 404 and the ground 406, and if the intersection point is overlapped with the ground positioning identifier 405, the current mapping eye point coordinate is used as the mapping eye point coordinate corresponding to the virtual image screen height 403 gear.
The intersection point coincides with the ground positioning mark 405, which shows that the current mapping eyepoint can accurately fuse the display image and the driving live-action in the visual angle of the driver.
And S234, if the real object position point corresponding to the screen positioning identifier is not overlapped with the ground positioning identifier, adjusting the mapping eye point coordinate, so that the real object position point corresponding to the screen positioning identifier is overlapped with the ground positioning identifier, and taking the overlapped mapping eye point coordinate as the mapping eye point coordinate corresponding to the virtual image screen height gear.
The real object position point corresponding to the screen positioning identifier 404 may be an intersection point between an extension line of a connection line between any point in the target interval of the coordinates of the human eye position 401 and the screen positioning identifier 404 and the ground 406, if the intersection point is not coincident with the ground positioning identifier 405, the mapping eye point coordinates are adjusted until the intersection point is coincident with the ground positioning identifier 405, and the coincident mapping eye point coordinates are used as the mapping eye point coordinates corresponding to the virtual image screen height 403 gear.
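Steps S231 to S234 amount to a per-gear search over mapping eyepoint coordinates. A minimal sketch, assuming the first corresponding relation is a dict from height gear to eye position coordinate interval, and with the coincidence check of S232 and the eyepoint adjustment of S234 abstracted as hypothetical callbacks (`markers_coincide`, `next_eyepoint`):

```python
def build_second_relation(first_relation, initial_eyepoint,
                          markers_coincide, next_eyepoint):
    """For each virtual image screen height gear (S231), adjust the
    mapped eyepoint until the screen marker's real-object point
    coincides with the ground positioning marker (S232/S234), and
    record the coinciding eyepoint for that gear (S233)."""
    second_relation = {}
    for gear, eye_interval in first_relation.items():
        eyepoint = initial_eyepoint
        # S232-S234: iterate the adjustment until the markers coincide
        while not markers_coincide(gear, eye_interval, eyepoint):
            eyepoint = next_eyepoint(eyepoint)
        second_relation[gear] = eyepoint
    return second_relation
```

In practice the coincidence check would be carried out visually or optically for each gear; the callback form only makes the control flow of S231 to S234 explicit.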
And S203, generating a corresponding relation table among preset human eye positions, virtual image screen height gears and mapping eyepoint positions according to the first corresponding relation and the second corresponding relation.
In the first corresponding relation, the human eye position coordinate interval and the virtual image screen height gear are in one-to-one correspondence. In the second correspondence, the virtual image screen height gear corresponds to the mapping eye point coordinates one to one.
According to the first corresponding relation and the second corresponding relation, each human eye position coordinate interval can be matched one-to-one with each mapping eyepoint coordinate in turn, yielding the correspondence among the preset human eye position, the virtual image screen height gear and the mapping eyepoint position, from which the correspondence table is generated, as shown in Table 2:
Human eye position ordinate | Virtual image screen height gear | Mapping eyepoint ordinate
0                           | 0                                | 0
0~5                         | 1                                | 10
Table 2: Correspondence table among preset human eye positions, virtual image screen height gears and mapping eyepoint positions
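Chaining the two correspondences row by row, as S203 describes, can be sketched as follows; the dict-based representation of the two relations (eye position interval to gear, and gear to mapping eyepoint ordinate) is an assumption for illustration only.

```python
def build_lookup_table(first_relation, second_relation):
    """Merge interval -> gear (first relation) with
    gear -> mapped eyepoint ordinate (second relation)
    into rows shaped like Table 2."""
    table = []
    for eye_interval, gear in sorted(first_relation.items()):
        table.append((eye_interval, gear, second_relation[gear]))
    return table
```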
And S204, acquiring the current eye position of the driver.
And S205, determining a first position of a virtual image screen to be adjusted and a second position of a virtual image displayed in the virtual image screen according to the current eye position.
And S206, adjusting the virtual image screen to a first position.
And S207, adjusting the virtual image displayed in the virtual image screen to a second position.
Since S204 to S207 in this embodiment are similar to S101 to S104 in the foregoing embodiment, the two descriptions may be cross-referenced, and the details are not repeated here.
The correspondence table among the preset eye position, the virtual image screen height gear and the mapping eyepoint position generated in S203 represents the correspondence among these three quantities when the displayed image is complete and can be accurately fused with the driving real scene in the driver's visual angle. Therefore, the table can be looked up according to the driver's current eye position to determine the first position of the virtual image screen to be adjusted and the second position of the virtual image displayed in the virtual image screen, such that the displayed image is complete and accurately fused with the driving real scene.
Specifically, when determining the first position of the virtual image screen to be adjusted and the second position of the virtual image displayed in the virtual image screen according to the driver's current eye position, the correspondence table may first be consulted to obtain the virtual image screen height gear and the mapping eyepoint coordinates corresponding to the eye position coordinate interval in which the current eye position falls. The first position of the virtual image screen to be adjusted is then determined from that height gear, and the second position of the virtual image displayed in the virtual image screen is determined from those mapping eyepoint coordinates.
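The table consultation in S205 reduces to a simple interval search. In the sketch below, each table row is assumed to be a tuple (eye position interval, height gear, mapping eyepoint ordinate), which is an illustrative layout rather than the embodiment's actual data structure.

```python
def look_up_positions(table, eye_y):
    """Find the row whose eye position interval contains eye_y and
    return its (height gear, mapping eyepoint ordinate), i.e. the
    data needed to derive the first and second positions."""
    for (lo, hi), gear, eyepoint_y in table:
        if lo <= eye_y <= hi:
            return gear, eyepoint_y
    return None  # eye position outside every calibrated interval
```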
In this embodiment, the divided eye position coordinate intervals and virtual image screen height gears are matched one-to-one to obtain the first corresponding relation. Based on the principle that the displayed image and the driving real scene can be accurately fused in the driver's visual angle when the virtual image, the real object corresponding to the virtual image and the human eye lie on the same straight line, the second corresponding relation is obtained from the first corresponding relation. A correspondence table among the preset human eye position, the virtual image screen height gear and the mapping eyepoint position is then generated from the first and second corresponding relations. By looking up this table according to the driver's current eye position, the first position of the virtual image screen to be adjusted and the second position of the virtual image displayed in it can be determined such that the displayed image is complete and accurately fused with the driving real scene in the driver's visual angle, thereby improving the display effect of the image displayed by the AR-HUD and ensuring that the driver sees the complete displayed image.
Fig. 7 is a schematic diagram of another adjustment method for a display image according to an embodiment of the present application, which can be implemented based on the system shown in fig. 8. Next, a method for adjusting a display image will be described with reference to the system shown in fig. 8, the method including:
S701, acquiring the current human eye position of the driver.
Specifically, to obtain the driver's current eye position, the driver's face may be photographed in real time by an image acquisition module, such as a camera provided with the AR-HUD, to obtain a plurality of images; the photographed images are then recognized by the eye recognition module 802 of the AR-HUD to obtain the current eye position. The obtained eye position can then be sent to the CAN module 804 through the vehicle CAN bus 803, and the CAN module 804 forwards it to the data processing module 806.
S702, determining the relative position of the current human eye position and the display effective area of the AR-HUD.
The data processing module 806 may determine the display effective area of the AR-HUD based on the extent of the eye-box of the AR-HUD stored by the data storage module 805.
Specifically, if the range of the eye box of the AR-HUD is 130 mm × 50 mm, where 130 mm is the lateral extent and 50 mm is the longitudinal extent of the eye box, the longitudinal display effective region of the AR-HUD can be determined to be (-25 mm, 25 mm).
After the display effective area of the AR-HUD is determined, the relative position of the current human eye position to the display effective area of the AR-HUD is determined.
The relative position of the current eye position with respect to the display effective area of the AR-HUD includes the eye position being within the display effective area, above it, or below it.
Specifically, when the eye position ordinate is 15 mm, the eye position is within the display effective area; when it is 35 mm, the eye position is above the display effective area; and when it is -43 mm, the eye position is below the display effective area.
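The classification in S702 reduces to comparing the eye ordinate with half the eye-box height, measured from the centre of the display effective area. The function below is a sketch under that assumption and reproduces the worked figures above (15 mm, 35 mm, -43 mm for a 50 mm eye box).

```python
def classify_eye_position(eye_y, eyebox_height_mm=50.0):
    """Classify the eye ordinate (mm) relative to the vertical
    display effective area, e.g. (-25 mm, 25 mm) for a 50 mm
    eye box centred on zero."""
    half = eyebox_height_mm / 2.0
    if eye_y > half:
        return "above"
    if eye_y < -half:
        return "below"
    return "inside"
```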
And S703, if the position of the human eye is above the display effective area, adjusting the virtual image screen to the highest position. And if the position of the human eyes is positioned below the display effective area, adjusting the virtual image screen to the lowest position.
If the eye position is above the display effective area, the driver cannot see the complete displayed image no matter how the position of the virtual image screen is adjusted. Accordingly, the motor module 808 adjusts the virtual image screen to the highest position, which enables the driver to see the largest possible range of the displayed image.
If the eye position is below the display effective area, the driver likewise cannot see the complete displayed image no matter how the position of the virtual image screen is adjusted. Accordingly, the motor module 808 adjusts the virtual image screen to the lowest position, which enables the driver to see the largest possible range of the displayed image.
S704, acquiring a corresponding relation table among preset human eye positions, virtual image screen height gears and mapping eyepoint positions.
If the position of the human eye is within the display effective region, it indicates that the driver can see the entire display image if the position of the virtual image screen and the position of the virtual image are properly adjusted. At this time, the data processing module 806 may obtain a correspondence table among preset human eye positions, virtual image screen height gears, and mapped eyepoint positions from the mapping adjustment module 807.
S705, according to the corresponding relation table, determining a virtual image screen height target gear and a mapping eye point target position corresponding to the current eye position information.
The corresponding relation table among the preset eye position, the virtual image screen height gear and the mapping eyepoint position can represent the corresponding relation among the eye position, the virtual image screen height gear and the mapping eyepoint position when the display image is complete and the display image and the driving real scene can be accurately integrated in the visual angle of the driver.
According to the corresponding relationship table, the data processing module 806 may determine a virtual image screen height target gear and a mapping eye point target position corresponding to the current eye position information, so that the display image is complete and the display image and the driving real scene may be accurately fused in the viewing angle of the driver.
And S706, determining a first position of the virtual image screen according to the target gear of the virtual image screen height.
According to the virtual image screen height target gear, the first position to which the virtual image screen should be adjusted can be determined, such that the displayed image is complete and accurately fused with the driving real scene in the driver's visual angle.
The data processing module 806 can send the first position of the virtual image screen to the motor module 808.
And S707, determining a second position of the virtual image according to the mapping eye point target position.
According to the mapping eyepoint target position, the second position to which the virtual image should be adjusted can be determined, such that the displayed image is complete and accurately fused with the driving real scene in the driver's visual angle.
The data processing module 806 can send the second location of the virtual image to the display module 809.
And S708, adjusting the virtual image screen to a first position.
The motor module 808 adjusts the virtual image screen to a first position.
And S709, adjusting the virtual image displayed in the virtual image screen to a second position.
The display module 809 adjusts the virtual image displayed in the virtual image screen to a second position.
In this embodiment, the relative position of the current eye position and the display effective area of the AR-HUD is determined, the virtual image screen when the eye position is located above the display effective area is adjusted to the highest position, and the virtual image screen when the eye position is located below the display effective area is adjusted to the lowest position, so that the driver can see the display image in the largest range. According to the corresponding relation table between the preset human eye position, the virtual image screen height gear and the mapping eyepoint position, the virtual image screen height target gear and the mapping eyepoint target position corresponding to the current human eye position information are determined, the position of the virtual image screen and the position of the virtual image displayed in the virtual image screen are adjusted, and therefore the display image is complete, and the display image and the driving real scene can be accurately fused in the visual angle of a driver.
For convenience of understanding, the following describes a method for adjusting a display image according to an embodiment of the present application with reference to a specific example.
Referring to fig. 9, when adjusting the display image, a correspondence table between preset eye positions, virtual image screen height gears, and mapped eyepoint positions is first obtained. The corresponding relation table may be established in the manner described in the foregoing method embodiments.
Then, the current eye position of the driver is obtained, and whether the current eye position is in the display effective area of the AR-HUD or not is judged.
And if the current eye position is in the display effective area of the AR-HUD, determining a first position of a virtual image screen to be adjusted and a second position of a virtual image displayed in the virtual image screen according to the current eye position. Then, the virtual image screen is adjusted to a first position, and the virtual image displayed in the virtual image screen is adjusted to a second position.
And if the position of the human eyes is above the display effective area, adjusting the virtual image screen to the highest position, and displaying the image.
And if the eye position is positioned below the display effective area, adjusting the virtual image screen to the lowest position, and displaying the image.
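The overall flow of Fig. 9 can be sketched as a single dispatch function: classify the eye position against the display effective area, then either clamp the virtual image screen to its extreme position or consult the correspondence table. The gear constants and the tuple layout of the table rows are assumptions for illustration.

```python
HIGHEST_POSITION = 10   # hypothetical top gear of the screen motor
LOWEST_POSITION = 0     # hypothetical bottom gear of the screen motor

def adjust_display(eye_y, table, eyebox_height_mm=50.0):
    """Return (screen height gear, mapping eyepoint ordinate); the
    eyepoint is None when the screen is merely clamped high or low."""
    half = eyebox_height_mm / 2.0
    if eye_y > half:                  # eye above the effective area
        return HIGHEST_POSITION, None
    if eye_y < -half:                 # eye below the effective area
        return LOWEST_POSITION, None
    for (lo, hi), gear, eyepoint_y in table:   # eye inside: look up table
        if lo <= eye_y <= hi:
            return gear, eyepoint_y
    return None, None                 # no calibrated interval matched
```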
Referring to fig. 10, which shows a schematic diagram of an adjusting apparatus for displaying an image according to an embodiment of the present application, the apparatus 1000 may include an obtaining module 1001, a determining module 1002, a virtual image screen adjusting module 1003, and a virtual image adjusting module 1004, where:
an obtaining module 1001 is configured to obtain a current eye position of a driver.
The determining module 1002 is configured to determine, according to the current eye position, a first position of a virtual image screen to be adjusted and a second position of a virtual image displayed in the virtual image screen.
The virtual image screen adjusting module 1003 is configured to adjust a virtual image screen to a first position.
A virtual image adjusting module 1004, configured to adjust the virtual image displayed in the virtual image screen to a second position, wherein the adjusted virtual image, the real object corresponding to the virtual image and the human eye lie on the same straight line.
The apparatus for adjusting a display image may further include:
the first corresponding relation establishing module is used for establishing a first corresponding relation between a plurality of human eye position coordinate intervals and a plurality of virtual image screen height gears.
And the second corresponding relation determining module is used for determining a second corresponding relation between each virtual image screen height gear and the plurality of mapping eye point coordinates according to the first corresponding relation.
And the corresponding relation table generating module is used for generating a corresponding relation table among the preset human eye position, the virtual image screen height gear and the mapping eyepoint position according to the first corresponding relation and the second corresponding relation.
In this embodiment of the application, the first correspondence relationship establishing module may be further configured to: determining a coordinate range of human eye positions and a height range of a virtual image screen; dividing the coordinate range of the human eye position into a plurality of coordinate intervals of the human eye position; dividing the virtual image screen height range into a plurality of virtual image screen height gears; and sequentially carrying out one-to-one matching on each human eye position coordinate interval and each virtual image screen height gear to obtain a first corresponding relation.
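The first correspondence establishing module's steps (determine both ranges, divide each, then match one-to-one) can be sketched as follows, under the assumption of equal-width eye position intervals with exactly one interval per height gear.

```python
def build_first_relation(eye_y_min, eye_y_max, num_gears):
    """Divide the eye position ordinate range into num_gears
    equal intervals and match each interval one-to-one with a
    virtual image screen height gear."""
    step = (eye_y_max - eye_y_min) / num_gears
    relation = {}
    for gear in range(num_gears):
        lo = eye_y_min + gear * step
        relation[(lo, lo + step)] = gear
    return relation
```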
In this embodiment of the application, the second correspondence determining module may be further configured to: determine initial eye position coordinates, an initial virtual image screen height gear and initial eyepoint coordinates; determine an initial human eye position coordinate interval, a screen positioning identifier of the virtual image screen and a ground positioning identifier, where the ground positioning identifier is the intersection of the ground with the extension of the line connecting any point in the initial human eye position coordinate interval to the screen positioning identifier; and, according to the first corresponding relation, determine the mapping eyepoint coordinates corresponding to each virtual image screen height gear by adjusting the virtual image screen height gears, to obtain the second corresponding relation.
In this embodiment of the application, the second correspondence determining module may be further configured to: aiming at any virtual image screen height gear, determining a human eye position coordinate target interval corresponding to the virtual image screen height gear according to a first corresponding relation; determining whether a real object position point corresponding to the screen positioning mark is overlapped with the ground positioning mark when the current eye position of the driver is within the eye position coordinate target interval range; if the real object position point corresponding to the screen positioning identifier is overlapped with the ground positioning identifier, taking the current mapping eye point coordinate as the mapping eye point coordinate corresponding to the virtual image screen height gear; and if the real object position point corresponding to the screen positioning identifier is not coincident with the ground positioning identifier, adjusting the mapping eye point coordinate, so that the real object position point corresponding to the screen positioning identifier is coincident with the ground positioning identifier, and taking the coincident mapping eye point coordinate as the mapping eye point coordinate corresponding to the virtual image screen height gear.
In this embodiment of the present application, the determining module may be further configured to: acquiring a corresponding relation table among preset human eye positions, virtual image screen height gears and mapping eyepoint positions; determining a virtual image screen height target gear and a mapping eye point target position corresponding to the current eye position information according to the corresponding relation table; determining a first position of the virtual image screen according to the target gear of the virtual image screen height; and determining a second position of the virtual image according to the mapping eye point target position.
In an embodiment of the present application, the apparatus for adjusting a display image may further include a relative position determining module, where the relative position determining module is specifically configured to: determining a relative position of the current eye position and a display active area of the AR-HUD, the relative position including the eye position being within the display active area and the eye position being above or below the display active area.
The virtual image screen adjusting module 1003 may further be configured to: if the position of the human eyes is above the display effective area, adjusting the virtual image screen to the highest position; and if the position of the human eyes is positioned below the display effective area, adjusting the virtual image screen to the lowest position.
Referring to fig. 11, a schematic diagram of an electronic device provided in an embodiment of the present application is shown. As shown in fig. 11, the electronic device 1100 in the embodiment of the present application includes: a processor 1110, a memory 1120, and computer programs 1121 stored in the memory 1120 and operable on the processor 1110. The processor 1110, when executing the computer program 1121, implements the steps of the display image adjustment method in various embodiments, such as steps S101 to S104 shown in fig. 1.
Illustratively, the computer programs 1121 can be divided into one or more modules/units that are stored in the memory 1120 and executed by the processor 1110 to accomplish the present application. The one or more modules/units can be a series of computer program instruction segments capable of performing specific functions, which can be used to describe the execution process of the computer program 1121 in the electronic device 1100. For example, the computer program 1121 may be divided into an acquisition module, a determination module, a virtual image screen adjustment module, and a virtual image adjustment module, and the specific functions of each module are as follows:
the acquisition module is used for acquiring the current human eye position of the driver.
The determining module is used for determining a first position of a virtual image screen to be adjusted and a second position of a virtual image displayed in the virtual image screen according to the current eye position.
And the virtual image screen adjusting module is used for adjusting the virtual image screen to a first position.
The virtual image adjusting module is used for adjusting the virtual image displayed in the virtual image screen to a second position; the adjusted virtual image, the real object corresponding to the virtual image and the human eye lie on the same straight line.
The electronic device 1100 may be an electronic device for implementing the method for adjusting the display image in the foregoing embodiments, and the electronic device 1100 may be a desktop computer, a cloud server, or other computing devices. The electronic device 1100 may include, but is not limited to, a processor 1110, a memory 1120. Those skilled in the art will appreciate that fig. 11 is merely an example of an electronic device 1100 and does not constitute a limitation of the electronic device 1100 and may include more or fewer components than illustrated, or some components may be combined, or different components, e.g., the electronic device 1100 may also include input-output devices, network access devices, buses, etc.
The processor 1110 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1120 may be an internal storage unit of the electronic device 1100, such as a hard disk or a memory of the electronic device 1100. The memory 1120 may also be an external storage device of the electronic device 1100, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the electronic device 1100. Further, the memory 1120 may also include both an internal storage unit and an external storage device of the electronic device 1100. The memory 1120 is used for storing the computer program 1121 and other programs and data required by the electronic device 1100, and may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments are implemented.
The embodiments of the present application further provide a computer program product, which when run on a computer, causes the computer to execute the steps implementing the above method embodiments.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within its scope.

Claims (10)

1. A method for adjusting a display image, the method comprising:
acquiring a current eye position of a driver;
determining, according to the current eye position, a first position for a virtual image screen to be adjusted and a second position for a virtual image displayed in the virtual image screen;
adjusting the virtual image screen to the first position; and
adjusting the virtual image displayed in the virtual image screen to the second position, wherein after the adjustment the virtual image, a real object corresponding to the virtual image, and the driver's eyes lie on the same straight line.
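Read as geometry rather than claim language, the collinearity condition of claim 1 amounts to placing the virtual image at the intersection of the eye-to-object sight line with the virtual image screen plane. The following is a minimal sketch of that intersection, not the patent's own implementation; the coordinate frame, the planar-screen assumption, and all numeric values are ours:

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def project_onto_screen(eye, obj, screen_point, screen_normal):
    """Intersect the eye->object sight line with the virtual image screen
    plane, so that the eye, the virtual image, and the real object are
    collinear. Returns the 'second position' on the screen plane."""
    direction = _sub(obj, eye)
    denom = _dot(screen_normal, direction)
    if abs(denom) < 1e-9:
        raise ValueError("sight line is parallel to the screen plane")
    t = _dot(screen_normal, _sub(screen_point, eye)) / denom
    return tuple(e + t * d for e, d in zip(eye, direction))

eye = (0.0, 0.0, 1.2)            # driver eye position (m), hypothetical frame
obj = (20.0, 0.0, 0.0)           # real object on the road surface
screen_point = (7.5, 0.0, 1.0)   # a point on the virtual image plane
screen_normal = (1.0, 0.0, 0.0)  # plane normal (screen faces the driver)

image_pos = project_onto_screen(eye, obj, screen_point, screen_normal)
```

Any point returned this way is on the eye-object line by construction, so the adjusted virtual image, the real object, and the eyes are collinear as the claim requires.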
2. The method of claim 1, further comprising, before acquiring the current eye position of the driver:
establishing a first correspondence between a plurality of eye position coordinate intervals and a plurality of virtual image screen height gears;
determining, according to the first correspondence, a second correspondence between each virtual image screen height gear and a plurality of mapped eyepoint coordinates; and
generating, according to the first correspondence and the second correspondence, a preset correspondence table among eye positions, virtual image screen height gears, and mapped eyepoint positions.
3. The method of claim 2, wherein establishing the first correspondence between the plurality of eye position coordinate intervals and the plurality of virtual image screen height gears comprises:
determining a coordinate range of eye positions and a height range of the virtual image screen;
dividing the eye position coordinate range into a plurality of eye position coordinate intervals;
dividing the virtual image screen height range into a plurality of virtual image screen height gears; and
matching each eye position coordinate interval one-to-one, in order, with each virtual image screen height gear to obtain the first correspondence.
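The interval-to-gear matching of claim 3 can be illustrated with a short sketch; the ranges, the interval count, and an even subdivision are invented for illustration and are not specified by the patent:

```python
def build_first_correspondence(eye_min, eye_max, height_min, height_max, n):
    """Divide the eye-position coordinate range into n equal intervals and
    the virtual image screen height range into n gears, then match them
    one-to-one in order (the 'first correspondence' of claim 3)."""
    eye_step = (eye_max - eye_min) / n
    height_step = (height_max - height_min) / (n - 1)
    correspondence = {}
    for i in range(n):
        interval = (eye_min + i * eye_step, eye_min + (i + 1) * eye_step)
        gear_height = height_min + i * height_step  # height of gear i
        correspondence[interval] = gear_height
    return correspondence

# e.g. eye heights 1.10-1.40 m mapped onto 6 screen height gears
table = build_first_correspondence(1.10, 1.40, 0.0, 1.0, 6)
```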
4. The method of claim 2, wherein determining, according to the first correspondence, the second correspondence between each virtual image screen height gear and the plurality of mapped eyepoint coordinates comprises:
determining an initial eye position coordinate, an initial virtual image screen height gear, and an initial eyepoint coordinate;
determining an initial eye position coordinate interval, a screen positioning marker of the virtual image screen, and a ground positioning marker, wherein the ground positioning marker is the intersection with the ground of the extension of the line connecting any point in the initial eye position coordinate interval and the screen positioning marker; and
determining, according to the first correspondence, the mapped eyepoint coordinates corresponding to each virtual image screen height gear by adjusting the virtual image screen height gear, to obtain the second correspondence.
5. The method of claim 4, wherein determining, according to the first correspondence, the mapped eyepoint coordinates corresponding to each virtual image screen height gear by adjusting the virtual image screen height gear, to obtain the second correspondence, comprises:
for any virtual image screen height gear, determining, according to the first correspondence, a target eye position coordinate interval corresponding to that gear;
determining, when the driver's current eye position is within the target eye position coordinate interval, whether the real-object position point corresponding to the screen positioning marker coincides with the ground positioning marker;
if they coincide, taking the current mapped eyepoint coordinate as the mapped eyepoint coordinate corresponding to that virtual image screen height gear; and
if they do not coincide, adjusting the mapped eyepoint coordinate until the real-object position point corresponding to the screen positioning marker coincides with the ground positioning marker, and taking the mapped eyepoint coordinate at coincidence as the mapped eyepoint coordinate corresponding to that virtual image screen height gear.
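The coincidence check of claims 4 and 5 is essentially a per-gear calibration loop: for each screen height gear, nudge the mapped eyepoint until the on-screen marker visually lands on the ground marker. The sketch below is schematic; the one-dimensional error model, the step size, and the tolerance are assumptions of ours, not parameters from the patent:

```python
def calibrate_eyepoint(initial_eyepoint, projection_error,
                       step=0.001, tol=1e-4, max_iter=10000):
    """For one virtual image screen height gear, adjust the mapped eyepoint
    coordinate until the real-object point of the screen positioning marker
    coincides with the ground positioning marker.
    `projection_error(eyepoint)` is assumed to return the signed offset
    between the two markers for a given mapped eyepoint."""
    eyepoint = initial_eyepoint
    for _ in range(max_iter):
        err = projection_error(eyepoint)
        if abs(err) <= tol:                       # markers coincide: done
            return eyepoint
        eyepoint -= step if err > 0 else -step    # nudge toward coincidence
    raise RuntimeError("calibration did not converge")

# toy error model: the markers coincide when the eyepoint equals 1.234
result = calibrate_eyepoint(1.2, lambda e: e - 1.234)
```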
6. The method of any one of claims 1 to 5, wherein determining, according to the current eye position, the first position of the virtual image screen to be adjusted and the second position of the virtual image displayed in the virtual image screen comprises:
acquiring a preset correspondence table among eye positions, virtual image screen height gears, and mapped eyepoint positions;
determining, according to the correspondence table, a target virtual image screen height gear and a target mapped eyepoint position corresponding to the current eye position;
determining the first position of the virtual image screen according to the target virtual image screen height gear; and
determining the second position of the virtual image according to the target mapped eyepoint position.
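At runtime, claim 6 reduces to a table lookup: find the interval containing the current eye position and read off the target gear and mapped eyepoint. A sketch, with the table contents invented for illustration:

```python
def lookup(table, eye_position):
    """Find the target virtual image screen height gear and the target
    mapped eyepoint position for the current eye position (claim 6).
    `table` maps (low, high) eye-position intervals to (gear, eyepoint)."""
    for (low, high), (gear, eyepoint) in table.items():
        if low <= eye_position < high:
            return gear, eyepoint
    raise KeyError("eye position outside the calibrated range")

# hypothetical preset correspondence table
table = {
    (1.10, 1.20): (0, 1.12),
    (1.20, 1.30): (1, 1.23),
    (1.30, 1.40): (2, 1.34),
}
gear, eyepoint = lookup(table, 1.25)
```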
7. The method of claim 6, further comprising, after acquiring the current eye position of the driver:
determining a relative position of the current eye position with respect to a display active area of the AR-HUD, the relative position being one of: within the display active area, above the display active area, or below the display active area;
if the eye position is above the display active area, adjusting the virtual image screen to its highest position; and
if the eye position is below the display active area, adjusting the virtual image screen to its lowest position.
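The out-of-range handling of claim 7 is a clamp to the extreme screen positions, with the normal table lookup as the in-range fallback. A sketch (the boundaries, positions, and `lookup_fn` stand-in are hypothetical):

```python
def clamp_screen_position(eye_y, area_low, area_high,
                          pos_lowest, pos_highest, lookup_fn):
    """If the eye position falls outside the AR-HUD display active area,
    pin the virtual image screen to its highest/lowest position (claim 7);
    otherwise fall back to the normal table-based adjustment."""
    if eye_y > area_high:
        return pos_highest   # eyes above the active area
    if eye_y < area_low:
        return pos_lowest    # eyes below the active area
    return lookup_fn(eye_y)  # within the active area: normal adjustment

# stand-in lookup: within range, always returns the mid position 0.5
pos_above = clamp_screen_position(1.55, 1.10, 1.40, 0.0, 1.0, lambda y: 0.5)
pos_below = clamp_screen_position(1.00, 1.10, 1.40, 0.0, 1.0, lambda y: 0.5)
pos_inside = clamp_screen_position(1.25, 1.10, 1.40, 0.0, 1.0, lambda y: 0.5)
```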
8. An apparatus for adjusting a display image, the apparatus comprising:
an acquisition module configured to acquire a current eye position of a driver;
a determining module configured to determine, according to the current eye position, a first position for a virtual image screen to be adjusted and a second position for a virtual image displayed in the virtual image screen;
a virtual image screen adjusting module configured to adjust the virtual image screen to the first position; and
a virtual image adjusting module configured to adjust the virtual image displayed in the virtual image screen to the second position, wherein after the adjustment the virtual image, a real object corresponding to the virtual image, and the driver's eyes lie on the same straight line.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202210764474.8A 2022-06-30 2022-06-30 Display image adjusting method and device, electronic equipment and storage medium Active CN115202476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210764474.8A CN115202476B (en) 2022-06-30 2022-06-30 Display image adjusting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210764474.8A CN115202476B (en) 2022-06-30 2022-06-30 Display image adjusting method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115202476A true CN115202476A (en) 2022-10-18
CN115202476B CN115202476B (en) 2023-04-11

Family

ID=83577496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210764474.8A Active CN115202476B (en) 2022-06-30 2022-06-30 Display image adjusting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115202476B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024109886A1 (en) * 2022-11-25 2024-05-30 北京罗克维尔斯科技有限公司 Information display method and apparatus, device, storage medium, and vehicle

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016113873A1 (en) * 2015-01-15 2016-07-21 パイオニア株式会社 Display device, control method, program, and storage medium
CN108473055A (en) * 2016-02-05 2018-08-31 麦克赛尔株式会社 head-up display device
CN109643016A (en) * 2016-09-01 2019-04-16 三菱电机株式会社 Display device and method of adjustment
EP3508903A1 (en) * 2016-09-01 2019-07-10 Mitsubishi Electric Corporation Display device and adjustment method
CN110573369A (en) * 2017-04-19 2019-12-13 麦克赛尔株式会社 Head-up display device and display control method thereof
CN110546026A (en) * 2017-05-01 2019-12-06 三菱电机株式会社 Adjusting device, display system and adjusting method
WO2021132555A1 (en) * 2019-12-27 2021-07-01 日本精機株式会社 Display control device, head-up display device, and method
CN111267616A (en) * 2020-02-28 2020-06-12 华域视觉科技(上海)有限公司 Vehicle-mounted head-up display module and method and vehicle
CN111476104A (en) * 2020-03-17 2020-07-31 重庆邮电大学 AR-HUD image distortion correction method, device and system under dynamic eye position
WO2021197189A1 (en) * 2020-03-31 2021-10-07 深圳光峰科技股份有限公司 Augmented reality-based information display method, system and apparatus, and projection device
CN111443490A (en) * 2020-04-15 2020-07-24 诸暨市华鲟电子科技有限公司 Virtual image display area adjusting method of AR HUD
CN112130325A (en) * 2020-09-25 2020-12-25 东风汽车有限公司 Parallax correction system and method for vehicle-mounted head-up display, storage medium and electronic device
WO2022111067A1 (en) * 2020-11-27 2022-06-02 奇瑞汽车股份有限公司 Head-up display parameter adjusting method and apparatus, head-up display, and vehicle
CN113240592A (en) * 2021-04-14 2021-08-10 重庆利龙科技产业(集团)有限公司 Distortion correction method for calculating virtual image plane based on AR-HUD dynamic eye position
CN114200675A (en) * 2021-12-09 2022-03-18 奇瑞汽车股份有限公司 Display method and device, head-up display system and vehicle

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHE AN: "A Real-Time Three-Dimensional Tracking and Registration Method in the AR-HUD System" *
LI Zhuo et al.: "Research on the design of an AR-HUD-based automotive driving assistance system" *
CHEN Luling: "Research on the design of an optical system for a vehicle-mounted head-up display" *


Also Published As

Publication number Publication date
CN115202476B (en) 2023-04-11

Similar Documents

Publication Publication Date Title
CN109690553A (en) The system and method for executing eye gaze tracking
US20210042955A1 (en) Distance estimation apparatus and operating method thereof
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN109889807A (en) Vehicle-mounted projection adjusting method, device, equipment and storage medium
CN109849788B (en) Information providing method, device and system
DE102020212226A1 (en) Fusion of automotive sensors
US10672269B2 (en) Display control assembly and control method therefor, head-up display system, and vehicle
CN115202476B (en) Display image adjusting method and device, electronic equipment and storage medium
CN111435269A (en) Display adjusting method, system, medium and terminal of vehicle head-up display device
WO2018222122A1 (en) Methods for perspective correction, computer program products and systems
US11935262B2 (en) Method and device for determining a probability with which an object will be located in a field of view of a driver of a vehicle
CN110949272A (en) Vehicle-mounted display equipment adjusting method and device, vehicle, medium and equipment
DE102014207398A1 (en) Object association for contact-analogue display on an HMD
CN114782911A (en) Image processing method, device, equipment, medium, chip and vehicle
CN111263133B (en) Information processing method and system
CN114290998B (en) Skylight display control device, method and equipment
JP2020135866A (en) Object detection method, detection device and electronic apparatus
CN116486351A (en) Driving early warning method, device, equipment and storage medium
CN113066158A (en) Vehicle-mounted all-round looking method and device
CN110827337A (en) Method and device for determining posture of vehicle-mounted camera and electronic equipment
CN113727094A (en) Camera in-loop test equipment and system
CN111241946A (en) Method and system for increasing FOV (field of view) based on single DLP (digital light processing) optical machine
Yoon et al. Augmented reality information registration for head-up display
WO2019127224A1 (en) Focusing method and apparatus, and head-up display device
CN116597425B (en) Method and device for determining sample tag data of driver and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant