CN118163716A - Imaging method, imaging device, vehicle and storage medium


Info

Publication number
CN118163716A
Authority
CN
China
Prior art keywords
vehicle
image
shielding area
human eye
distance
Prior art date
Legal status
Pending
Application number
CN202410148311.6A
Other languages
Chinese (zh)
Inventor
史时旭
娄小宝
Current Assignee
Zhejiang Geely Holding Group Co Ltd
Zhejiang Zeekr Intelligent Technology Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Zhejiang Zeekr Intelligent Technology Co Ltd
Priority date
Application filed by Zhejiang Geely Holding Group Co Ltd, Zhejiang Zeekr Intelligent Technology Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN202410148311.6A
Publication of CN118163716A


Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The application discloses an imaging method, an imaging device, a vehicle and a storage medium. The imaging method of the embodiment of the application is used for a vehicle and comprises the following steps: tracking the human eye position of an occupant in the vehicle; when the eyes face a shielding area, acquiring a vehicle exterior environment image corresponding to the shielding area, the shielding area comprising a part of the B-pillar and a part of the door frame; and controlling a display device in the shielding area to display the vehicle exterior environment image. According to the imaging method, by tracking the eye position of the occupant in the vehicle, it can be determined whether the occupant is looking at the shielding area. When the occupant looks at the shielding area, the exterior environment image blocked by the shielding area is acquired and displayed on the display device arranged in the shielding area, so that the occupant can observe the area blocked by the B-pillar and the door frame, improving the riding experience.

Description

Imaging method, imaging device, vehicle and storage medium
Technical Field
The present invention relates to the field of vehicle technologies, and in particular, to an imaging method, an imaging apparatus, a vehicle, and a storage medium.
Background
At present, the number of automobiles in China continues to rise steadily, and automobiles have gradually entered everyday life, becoming an indispensable means of transportation. Currently, in order to improve the collision performance and safety of a vehicle, the vehicle is provided with a B-pillar that supports the roof and the front and rear doors and absorbs side impacts to protect the occupants.
However, the B-pillar is located between the front door and the rear door of the vehicle and blocks the occupant's line of sight to the outside, so that the occupant cannot observe the entire environment outside the vehicle, which affects the riding experience.
Disclosure of Invention
The invention provides an imaging method, an imaging device, a vehicle and a storage medium.
An embodiment of the present invention provides an imaging method for a vehicle, including:
Tracking a human eye position of an occupant in the vehicle;
When the human eyes face a shielding area, acquiring a vehicle exterior environment image corresponding to the shielding area, wherein the shielding area comprises a part of the B-pillar and a part of the door frame;
and controlling a display device of the shielding area to display the image of the outside environment of the vehicle.
In some embodiments, the tracking the position of the human eye of the occupant in the vehicle comprises:
Acquiring at least two pieces of in-vehicle image information captured from different angles;
And superimposing the overlapping areas of the pieces of in-vehicle image information to obtain an in-vehicle image containing human eye image information, and determining the human eye position from the in-vehicle image.
In some embodiments, when the human eyes face the shielding area, acquiring the image of the vehicle exterior environment corresponding to the shielding area includes:
Acquiring the distance between the human eye and the shielding area under the condition that the human eye faces the shielding area;
and acquiring an external environment image of the vehicle corresponding to the shielding area based on the distance.
In some embodiments, the acquiring an image of an exterior environment of the vehicle corresponding to the occlusion region based on the distance includes:
acquiring a first vehicle exterior environment image corresponding to the shielding area under the condition that the distance is smaller than a preset distance;
and under the condition that the distance is larger than or equal to a preset distance, acquiring a second vehicle exterior environment image corresponding to the shielding area, wherein the second vehicle exterior environment image is different from the first vehicle exterior environment image.
In certain embodiments, the imaging method comprises:
Stitching the vehicle exterior environment image with the scene seen through the vehicle window to form a continuous view.
In certain embodiments, the imaging method further comprises:
and under the condition that the human eyes are not facing the shielding area, stopping acquiring the external environment image corresponding to the shielding area.
An image forming apparatus according to an embodiment of the present invention includes:
the tracking module is used for tracking the human eye position of the passenger in the vehicle;
The acquisition module is used for acquiring a vehicle exterior environment image corresponding to a shielding area when the human eyes face the shielding area, the shielding area comprising a part of the B-pillar and a part of the door frame;
and the control module is used for controlling the display device of the shielding area to display the image outside the vehicle.
The vehicle of an embodiment of the present invention comprises a first camera assembly, a second camera assembly, a display device and a controller. The controller is configured to track the human eye position of an occupant in the vehicle through the first camera assembly; to acquire, through the second camera assembly, a vehicle exterior environment image corresponding to a shielding area when the eyes face the shielding area, the shielding area comprising a part of the B-pillar and a part of the door frame; and to control the display device of the shielding area to display the vehicle exterior environment image.
In some embodiments, the controller is configured to obtain at least two in-vehicle image information captured based on different angles; and the overlapping areas of the image information in the vehicles are overlapped to obtain an in-vehicle image with human eye image information, and the human eye position is determined according to the in-vehicle image.
In some embodiments, the controller is configured to acquire a distance between the human eye and the occlusion region when the human eye is directed toward the occlusion region, and to acquire an exterior vehicle image corresponding to the occlusion region based on the distance.
In some embodiments, the controller is configured to obtain a first vehicle exterior environment image corresponding to the occlusion region if the distance is less than a preset distance, and to obtain a second vehicle exterior environment image corresponding to the occlusion region if the distance is greater than or equal to the preset distance, the second vehicle exterior environment image being different from the first vehicle exterior environment image.
In some embodiments, the controller is configured to stitch the vehicle exterior environment image with the scene seen through the vehicle window to form a continuous view.
In some embodiments, the controller is configured to stop acquiring the image of the exterior environment corresponding to the occlusion region when the human eye is not facing the occlusion region.
A non-transitory computer readable storage medium storing a computer program of an embodiment of the present invention, which when executed by one or more processors, implements the imaging method of any of the above embodiments.
According to the imaging method, by tracking the eye position of the occupant in the vehicle, it can be determined whether the occupant is looking at the shielding area. When the occupant looks at the shielding area, the exterior image blocked by the shielding area is acquired and displayed on the display device arranged in the shielding area, so that the occupant can fully observe the surrounding environment before getting off, line-of-sight blind spots are avoided, the occupant can observe the area blocked by the B-pillar and the door frame, and the riding experience is improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an imaging method according to an embodiment of the present invention;
fig. 2 is a schematic view of the structure of the outside of the vehicle according to the embodiment of the invention;
fig. 3 is a schematic view of the structure of the vehicle interior according to the embodiment of the invention;
fig. 4 is a block diagram of an image forming apparatus according to an embodiment of the present invention;
FIG. 5 is a flow chart of an imaging method according to an embodiment of the present invention;
FIG. 6 is a flow chart of an imaging method according to an embodiment of the present invention;
FIG. 7 is a flow chart of an imaging method according to an embodiment of the present invention;
FIG. 8 is a flow chart of an imaging method according to an embodiment of the present invention;
FIG. 9 is a flow chart of an imaging method according to an embodiment of the present invention;
FIG. 10 is a schematic view of a display device and window according to an embodiment of the present invention;
fig. 11 is a flow chart of an imaging method according to an embodiment of the present invention.
Description of main reference numerals:
The vehicle 100, the first camera assembly 10, the second camera assembly 20, the display device 30, the controller 40, the shielding area 50, the B-pillar 60, the door frame 70, the imaging device 110, the tracking module 101, the acquisition module 102, and the control module 103.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically connected, electrically connected or can be communicated with each other; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly under and obliquely below the second feature, or simply means that the first feature is less level than the second feature.
The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. In order to simplify the present disclosure, components and arrangements of specific examples are described below. They are, of course, merely examples and are not intended to limit the invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, which are for the purpose of brevity and clarity, and which do not themselves indicate the relationship between the various embodiments and/or arrangements discussed. In addition, the present invention provides examples of various specific processes and materials, but one of ordinary skill in the art will recognize the application of other processes and/or the use of other materials.
Referring to fig. 1, an embodiment of the present invention provides an imaging method for a vehicle 100, the imaging method including:
S10: Tracking a human eye position of an occupant in the vehicle 100;
S20: When the human eyes face the shielding area 50, acquiring a vehicle exterior environment image corresponding to the shielding area 50, wherein the shielding area 50 comprises a part of the B-pillar 60 and a part of the door frame 70;
S30: Controlling the display device 30 of the shielding area 50 to display the vehicle exterior environment image.
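The flow of steps S10 to S30 can be sketched as a single decision step; this is an illustrative Python sketch only, and all names (`imaging_step`, `capture_exterior`, `display_buffer`) are hypothetical rather than taken from the patent.

```python
def imaging_step(eye_gaze_target, shielding_region, capture_exterior, display_buffer):
    """One iteration of the imaging method. S10 tracking is assumed to
    have already produced eye_gaze_target; when it matches the shielding
    region (S20 condition), the exterior image is captured (S20) and
    pushed to the display (S30). Returns True when the display is driven."""
    if eye_gaze_target == shielding_region:
        display_buffer.append(capture_exterior())
        return True
    return False
```

The same loop would run continuously in a real controller, re-evaluating the tracked gaze on every frame.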
Referring to fig. 2 and 3, an embodiment of the present invention provides a vehicle 100. The vehicle 100 includes a first camera assembly 10, a second camera assembly 20, a display device 30 and a controller 40. The controller 40 is configured to track the eye position of an occupant in the vehicle 100 through the first camera assembly 10, and to acquire, through the second camera assembly 20, a vehicle exterior environment image corresponding to an occlusion region 50 when the eyes face the occlusion region 50, the occlusion region 50 including a portion of the B-pillar 60 and a portion of the door frame 70. The controller 40 is further configured to control the display device 30 of the occlusion region 50 to display the vehicle exterior environment image.
Referring to fig. 4, an embodiment of the present invention provides an imaging device 110. The imaging device 110 includes a tracking module 101, an acquisition module 102 and a control module 103. The tracking module 101 is configured to track the eye position of an occupant in the vehicle 100; the acquisition module 102 is configured to acquire a vehicle exterior environment image corresponding to an occlusion region 50 when the eyes face the occlusion region 50, the occlusion region 50 including a portion of the B-pillar 60 and a portion of the door frame 70; and the control module 103 is configured to control the display device 30 of the occlusion region 50 to display the vehicle exterior environment image.
In the imaging method of the application, by tracking the eye position of the occupant in the vehicle 100, whether the occupant is looking at the shielding region 50 can be determined. When the occupant looks at the shielding region 50, the vehicle exterior image blocked by the shielding region 50 is acquired and displayed by the display device 30 arranged in the shielding region 50, so that the occupant can fully observe the surrounding environment before getting off, line-of-sight blind spots are avoided, the occupant can observe the region blocked by the B-pillar 60 and the door frame 70, and the riding experience is improved.
Specifically, the vehicle 100 may be a small vehicle, a medium vehicle, a large vehicle, etc., fig. 2 is an external schematic view of the vehicle 100, and fig. 3 is an internal schematic view of the vehicle 100. The first camera assembly 10 is mounted inside the vehicle 100, for example, the first camera assembly 10 may be mounted on a front door, a rear door, or a B-pillar of the vehicle 100. The first camera assembly 10 may acquire image information of the interior of the vehicle 100. In an embodiment of the present application, the first camera assembly 10 is mainly used to acquire image information of the rear seat of the vehicle 100 and track the position of the eyes of the occupant in the vehicle 100 by analyzing the image information of the rear seat.
When the first camera assembly 10 tracks the position of the occupant's eyes, it may be determined whether the eyes are facing the blocking area 50. The blocking area 50 includes a portion of the B-pillar 60 and a portion of the door frame 70; for example, when an occupant in the rear seat of the vehicle 100 looks out of the window, the door frame 70 and the B-pillar 60 between the front door window and the rear door window form the blocking area 50, which obstructs the occupant's view of the corresponding area outside the vehicle.
Further, the second camera assembly 20 may be mounted outside the vehicle 100, for example on the front door frame 70, the rear door frame 70, or the B-pillar 60 of the vehicle 100. When it is determined that the eyes face the shielding region 50, the controller 40 may turn on the second camera assembly 20, so that the second camera assembly 20 acquires the vehicle exterior environment image corresponding to the shielding region 50. The display device 30 is mounted in the shielding area 50 inside the vehicle 100, and the second camera assembly 20 is electrically connected with the display device 30. When the second camera assembly 20 acquires the vehicle exterior environment image, it sends the image to the display device 30, and the controller 40 controls the display device 30 to display it, so that the user can observe the area blocked by the shielding area 50 through the display device 30.
It should be noted that the embodiment of the present application is illustrated with only one side of the vehicle 100, for example, only the front passenger door, the corresponding rear passenger door, and the B-pillar therebetween; it is understood that the driver side and the passenger side are symmetrically disposed.
In some embodiments, the number of second camera assemblies 20 may be two, respectively mounted on the front door and the rear door outside the vehicle 100, with the two second camera assemblies 20 jointly acquiring the vehicle exterior environment image. Compared with a single second camera assembly 20, two second camera assemblies 20 can acquire two vehicle exterior environment images, and the processor processes the two images so that the vehicle exterior environment image displayed by the display device 30 combines with the scene through the window into one continuous scene, improving the riding experience of the occupant. The processor may be a processing device of the second camera assembly 20, or may be a processor of the vehicle 100 used for processing the vehicle exterior environment image.
Referring to fig. 5, in some embodiments, tracking the human eye position of an occupant in the vehicle 100 (step S10) includes:
S11: Acquiring at least two pieces of in-vehicle image information captured from different angles;
S12: Superimposing the overlapping areas of the pieces of in-vehicle image information to obtain an in-vehicle image containing human eye image information, and determining the human eye position from the in-vehicle image.
In some embodiments, the controller 40 is configured to obtain at least two in-vehicle image information captured based on different angles, and to superimpose overlapping areas of the in-vehicle image information to obtain an in-vehicle image having human eye image information, and determine the human eye position according to the in-vehicle image.
In this way, by acquiring image information in the vehicle 100 and superimposing the overlapping areas of the pieces of in-vehicle image information, image stitching of the in-vehicle image information is realized and an in-vehicle image containing human eye image information is obtained. Because the eye image information contained in the in-vehicle image is more complete, the probability of inaccurate eye tracking caused by missing eye image information is reduced when eye tracking is performed, so the accuracy of eye tracking can be improved. Specifically, the number of first camera assemblies 10 may be two, respectively mounted on the front door and the rear door inside the vehicle 100 and arranged toward the rear seat. Compared with a single first camera assembly 10, two first camera assemblies 10 can acquire two pieces of in-vehicle image information, and the processor superimposes the overlapping areas of the pieces of in-vehicle image information to obtain an in-vehicle image containing eye image information, so that the position and gaze point of the eyes in the in-vehicle image can be determined and the eye position can be accurately captured, realizing eye tracking. In addition, the second camera assembly 20 is configured according to the eye position, so that the second camera assembly 20 can acquire different vehicle exterior environment images according to the eye position.
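The superimposition of overlapping areas described above can be illustrated with a deliberately simplified sketch, assuming the two in-vehicle images are already row-aligned and share a known number of overlapping columns; a real system would register and blend the images (for example with OpenCV), and the function name here is hypothetical.

```python
def merge_overlapping(rows_a, rows_b, overlap_cols):
    """Merge two row-aligned image strips that share overlap_cols
    columns: keep all of strip A, then append the non-overlapping
    tail of strip B, yielding one wider in-vehicle image in which
    the occupant's eyes are less likely to be cut off at an edge."""
    return [a + b[overlap_cols:] for a, b in zip(rows_a, rows_b)]
```

Each row is a list of pixel values; the eye-detection step would then run on the merged image rather than on either partial view.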
Of course, in other embodiments, the eye position may be tracked by an infrared device. For example, the infrared device emits infrared light toward the occupant, and the eye position can be calculated from the infrared light reflected by the eyes.
Referring to fig. 6, in some embodiments, acquiring the vehicle exterior environment image corresponding to the shielding region 50 when the human eyes face the shielding region 50 (step S20) includes:
S21: When the human eyes face the shielding area 50, acquiring the distance between the human eyes and the shielding area 50;
S22: Acquiring a vehicle exterior environment image corresponding to the shielding area 50 based on the distance.
In some embodiments, the controller 40 is configured to acquire a distance of a human eye from the occlusion region 50 in a case where the human eye is directed toward the occlusion region 50, and to acquire an exterior vehicle image corresponding to the occlusion region 50 based on the distance.
In some embodiments, the second camera assembly 20 is configured to acquire a distance between a human eye and the occlusion region 50 when the human eye is facing the occlusion region 50, and to acquire an exterior vehicle image corresponding to the occlusion region 50 based on the distance.
In this way, the distance between the eyes and the shielding area 50 is calculated through the first camera assembly 10, so that the vehicle exterior environment corresponding to the occupant's eyes can be determined; the second camera assembly 20 can then acquire the vehicle exterior environment image blocked by the shielding area 50 according to the distance and transmit it to the display device 30, so that the display device 30 displays the vehicle exterior environment image corresponding to the eye position, improving the viewing experience of the user.
Specifically, the first camera assembly 10 may calculate the distance between the occupant's eyes and the shielding region 50 from the acquired in-vehicle image information, and the eye position may be calculated from that distance. The second camera assembly 20 may then acquire the outside environment image blocked by the shielding region 50 corresponding to the eye position and transmit it to the display device 30, which displays it, so that the occupant can view the scenery outside the vehicle that would otherwise be blocked by the shielding region 50.
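The role of the eye-to-pillar distance can be illustrated with a similar-triangles estimate of how wide a slice of the exterior scene the pillar hides; this geometric sketch is an illustrative assumption and does not appear in the patent.

```python
def occluded_width_at_depth(pillar_width_m, eye_to_pillar_m, eye_to_scene_m):
    """Estimate the width of the exterior slice hidden by a pillar of
    width pillar_width_m, as seen by an eye eye_to_pillar_m away,
    projected onto a scene plane eye_to_scene_m from the eye.
    By similar triangles, the closer the eye is to the pillar,
    the wider the hidden slice the display must cover."""
    return pillar_width_m * eye_to_scene_m / eye_to_pillar_m
```

For a 0.1 m pillar viewed from 0.5 m, the hidden slice at 5 m depth is 1.0 m wide, which is why the exterior image must change with the tracked eye distance.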
Referring to fig. 7 and 8, in some embodiments, acquiring a vehicle exterior environment image corresponding to the occlusion region 50 based on the distance (step S22) includes:
S221: Acquiring a first vehicle exterior environment image corresponding to the occlusion region 50 when the distance is less than a preset distance;
S222: Acquiring a second vehicle exterior environment image corresponding to the occlusion region 50 when the distance is greater than or equal to the preset distance, the second vehicle exterior environment image being different from the first vehicle exterior environment image.
In some embodiments, the controller 40 is configured to acquire a first vehicle exterior environment image corresponding to the occlusion region 50 if the distance is less than a preset distance, and to acquire a second vehicle exterior environment image corresponding to the occlusion region 50 if the distance is greater than or equal to the preset distance, the second vehicle exterior environment image being different from the first vehicle exterior environment image.
In some embodiments, the second camera assembly 20 is configured to acquire a first vehicle exterior environment image corresponding to the occlusion region 50 if the distance is less than a preset distance, and to acquire a second vehicle exterior environment image corresponding to the occlusion region 50 if the distance is greater than or equal to the preset distance, the second vehicle exterior environment image being different from the first vehicle exterior environment image.
In this way, the second camera assembly 20 may acquire the first external environment image or the second external environment image according to the distance between the human eye and the shielding region 50, when the distance is smaller than the preset distance, the second camera assembly 20 acquires the first external environment image, and when the distance is greater than or equal to the preset distance, the second camera assembly 20 acquires the second external environment image, so that the passenger can see the external environment of the vehicle, which is shielded by the shielding region 50, at different positions in the vehicle 100.
Specifically, the rear seats of the vehicle 100 may generally be divided into a driver-side rear seat and a front-passenger-side rear seat, and the preset distance may be the distance from the position between the two rear seats to the shielding region 50. For example, taking the shielding region 50 on the front-passenger side as an example: when the distance is smaller than the preset distance, it may be determined that the occupant is located in the front-passenger-side rear seat, and the second camera assembly 20 may acquire the first vehicle exterior environment image corresponding to the shielding region 50 for that seat; when the distance is greater than or equal to the preset distance, it may be determined that the occupant is located in the driver-side rear seat, and the second camera assembly 20 may acquire the second vehicle exterior environment image corresponding to the shielding region 50 for that seat.
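Steps S221 and S222 reduce to a threshold comparison on the tracked distance; the sketch below is illustrative, and the 0.9 m default is an assumed placeholder rather than a value given in the patent.

```python
def select_exterior_view(eye_to_region_distance_m, preset_distance_m=0.9):
    """Choose which exterior view to request from the tracked
    eye-to-shielding-region distance: below the preset distance the
    occupant is taken to sit in the near (same-side) rear seat and
    gets the first view; at or beyond it, the far rear seat and the
    second view (S221/S222)."""
    if eye_to_region_distance_m < preset_distance_m:
        return "first_exterior_view"
    return "second_exterior_view"
```

Note that the boundary case (distance exactly equal to the preset distance) falls to the second view, matching the "greater than or equal to" wording of S222.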
It will be appreciated that the viewing angles from the driver-side and front-passenger-side rear seats toward the occlusion region 50 are different, so the vehicle exterior environment image corresponding to the occlusion region 50 differs, i.e., the second vehicle exterior environment image is different from the first vehicle exterior environment image.
Referring to fig. 9 and 10, in some embodiments, the imaging method further comprises:
S40: Stitching the vehicle exterior environment image with the scene seen through the vehicle window to form a continuous view.
In some embodiments, the controller 40 is configured to stitch the vehicle exterior environment image with the scene seen through the vehicle window to form a continuous view.
In this way, the controller 40 joins the vehicle exterior environment image displayed by the display device 30 with the scene seen through the vehicle window, so that the display device 30 and the window together display the complete scenery outside the vehicle 100; that is, the occupant can observe the complete exterior area through the display device 30 and the window, improving the riding experience.
Specifically, there may be a plurality of display devices 30, respectively mounted on the shielding regions 50 of the door frames 70 and the B-pillar. For example, there may be three display devices 30: one on the door frame 70 of the front door, one on the door frame 70 of the rear door, and one on the B-pillar, arranged side by side. The vehicle exterior environment image acquired by the second camera assembly 20 is displayed across the three display devices 30, and the images they display are stitched with the scenes seen through the front window and the rear window respectively, so that the occupant observes a continuous exterior area.
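The side-by-side panel arrangement can be sketched as cropping one wide exterior frame into aligned strips, one per display device. The function and the boundary fractions below are illustrative assumptions, not from the patent text:

```python
def split_into_panels(frame_width_px, panel_edges):
    """Split a wide exterior frame into side-by-side panel crops.

    panel_edges is a sorted list of fractions in [0, 1] marking the
    boundaries between panels, e.g. front-door frame | B-pillar |
    rear-door frame. Adjacent crops share their boundary column, so the
    displayed strips dock seamlessly with each other and with the
    window views on either end.
    """
    cols = [round(edge * frame_width_px) for edge in panel_edges]
    # Pair consecutive boundary columns into (start, end) pixel ranges.
    return list(zip(cols[:-1], cols[1:]))
```

For a 1200-pixel-wide frame with assumed edges `[0.0, 0.3, 0.7, 1.0]`, this yields three contiguous crops with no gaps or overlaps, which is what makes the docked image read as one continuous scene.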
Referring to fig. 11, in some embodiments, the imaging method further comprises:
S50: when the human eyes are not facing the shielding region 50, the acquisition of the image of the outside environment of the vehicle corresponding to the shielding region 50 is stopped.
In some embodiments, the controller 40 is configured to stop acquiring the image of the outside environment of the vehicle corresponding to the occlusion region 50 when the human eye is not facing the occlusion region 50.
In this manner, when the first camera assembly 10 does not detect the human eyes, it can be determined that the eyes are not facing the shielding region 50, that is, that there is no occupant in the vehicle or that the occupant does not wish to observe the scenery outside the vehicle 100. The controller 40 can then turn off the second camera assembly 20 and the display device 30, so that the second camera assembly 20 stops acquiring the vehicle exterior environment image and the display device 30 stops displaying it, reducing the power consumption of the vehicle 100.
Specifically, the first camera assembly 10 may continuously capture images of the rear seats of the vehicle 100 to acquire in-vehicle image information in real time, and the processor may process this information to determine whether the occupant's eyes are facing the shielding region 50. When no eyes are detected, it can be determined that no occupant is looking at the scenery, or that the rear seats are unoccupied; the first camera assembly 10 then sends a closing signal to the controller 40, and the controller 40 turns off the second camera assembly 20 and the display device 30 according to the closing signal.
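The power-saving behaviour described above amounts to a small piece of state logic. This is a hypothetical sketch; the patent specifies no API, so the class and method names are assumptions:

```python
class ImagingController:
    """Sketch of controller 40's on/off behaviour for the exterior display."""

    def __init__(self):
        self.capturing = False  # joint state of camera assembly 20 and display 30

    def on_gaze_update(self, eyes_detected, facing_shielded_region):
        """Enable capture only while detected eyes face the shielded region."""
        if eyes_detected and facing_shielded_region:
            self.capturing = True   # acquire and display the exterior image
        else:
            # Equivalent to the closing signal: stop acquisition and
            # display to reduce power consumption.
            self.capturing = False
        return self.capturing
```

The key design point is that either condition failing (no eyes detected, or eyes detected but looking elsewhere) shuts the pipeline down, matching both step S50 and the no-occupant case.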
An embodiment of the present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the imaging method of any of the above embodiments.
In particular, the processor may perform any of the steps of the imaging method.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the embodiments of the present application also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processing module, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
The processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It is to be understood that portions of embodiments of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
Furthermore, functional units in various embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, reference to the terms "one embodiment," "certain embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (14)

1. An imaging method for a vehicle, the imaging method comprising:
Tracking a human eye position of an occupant in the vehicle;
Under the condition that the human eyes face a shielding area, acquiring a vehicle exterior environment image corresponding to the shielding area, wherein the shielding area comprises a part of a B-pillar and a part of a door frame;
and controlling a display device of the shielding area to display the image of the outside environment of the vehicle.
2. The imaging method of claim 1, wherein said tracking the position of the human eye of an occupant in the vehicle comprises:
Acquiring at least two pieces of in-vehicle image information captured from different angles;
and superimposing the overlapping areas of the pieces of in-vehicle image information to obtain an in-vehicle image containing human eye image information, and determining the human eye position according to the in-vehicle image.
3. The imaging method according to claim 1, wherein, in the case where the human eye faces an occlusion region, acquiring an image of an exterior environment of the vehicle corresponding to the occlusion region includes:
Acquiring the distance between the human eye and the shielding area under the condition that the human eye faces the shielding area;
and acquiring an external environment image of the vehicle corresponding to the shielding area based on the distance.
4. The imaging method according to claim 3, wherein the acquiring an outside environment image of the vehicle corresponding to the occlusion region based on the distance includes:
acquiring a first vehicle exterior environment image corresponding to the shielding area under the condition that the distance is smaller than a preset distance;
and under the condition that the distance is larger than or equal to a preset distance, acquiring a second vehicle exterior environment image corresponding to the shielding area, wherein the second vehicle exterior environment image is different from the first vehicle exterior environment image.
5. The imaging method according to claim 1, characterized in that the imaging method comprises:
And stitching the vehicle exterior environment image with the scene seen through the vehicle window to form a complete element.
6. The imaging method of claim 1, wherein the imaging method further comprises:
and under the condition that the human eyes are not facing the shielding area, stopping acquiring the external environment image corresponding to the shielding area.
7. An image forming apparatus, characterized in that the image forming apparatus comprises:
the tracking module is used for tracking the human eye position of the passenger in the vehicle;
the acquisition module is used for acquiring a vehicle exterior environment image corresponding to a shielding area under the condition that the human eyes face the shielding area, wherein the shielding area comprises a part of a B-pillar and a part of a door frame;
and the control module is used for controlling the display device of the shielding area to display the image outside the vehicle.
8. A vehicle, characterized by comprising a first camera assembly, a second camera assembly, a display device and a controller, wherein the controller is configured to track the human eye position of an occupant in the vehicle through the first camera assembly, to acquire, through the second camera assembly, a vehicle exterior environment image corresponding to a shielding area under the condition that the human eyes face the shielding area, and to control the display device of the shielding area to display the vehicle exterior environment image.
9. The vehicle of claim 8, wherein the controller is configured to acquire at least two pieces of in-vehicle image information captured from different angles, to superimpose the overlapping areas of the pieces of in-vehicle image information to obtain an in-vehicle image containing human eye image information, and to determine the human eye position according to the in-vehicle image.
10. The vehicle according to claim 8, wherein the controller is configured to acquire a distance of the human eye from the shielding region with the human eye facing the shielding region, and to acquire an outside-vehicle environment image corresponding to the shielding region based on the distance.
11. The vehicle of claim 10, wherein the controller is configured to obtain a first vehicle exterior environment image corresponding to the occlusion region if the distance is less than a preset distance, and to obtain a second vehicle exterior environment image corresponding to the occlusion region if the distance is greater than or equal to a preset distance, the second vehicle exterior environment image being different from the first vehicle exterior environment image.
12. The vehicle of claim 8, wherein the controller is configured to stitch the vehicle exterior environment image with the scene seen through the vehicle window to form a complete element.
13. The vehicle of claim 8, wherein the controller is configured to stop acquiring an image of an exterior environment of the vehicle corresponding to the occlusion region if the human eye is not facing the occlusion region.
14. A non-transitory computer-readable storage medium storing a computer program, characterized in that the imaging method of any of claims 1-6 is implemented when the computer program is executed by one or more processors.
CN202410148311.6A 2024-02-01 2024-02-01 Imaging method, imaging device, vehicle and storage medium Pending CN118163716A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410148311.6A CN118163716A (en) 2024-02-01 2024-02-01 Imaging method, imaging device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN118163716A true CN118163716A (en) 2024-06-11

Family

ID=91351809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410148311.6A Pending CN118163716A (en) 2024-02-01 2024-02-01 Imaging method, imaging device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN118163716A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination