CN113507569A - Control method and device of vehicle-mounted camera, equipment and medium

Control method and device of vehicle-mounted camera, equipment and medium

Info

Publication number
CN113507569A
CN113507569A (application number CN202110737681.XA)
Authority
CN
China
Prior art keywords
image
quality parameter
region
image quality
vehicle
Prior art date
Legal status
Pending
Application number
CN202110737681.XA
Other languages
Chinese (zh)
Inventor
陈舒
陈栋梁
吴阳平
许亮
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202110737681.XA
Publication of CN113507569A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N 25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50: Control of the SSIS exposure
    • H04N 25/53: Control of the integration time
    • H04N 25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure relates to a control method and apparatus for a vehicle-mounted camera, a device, and a medium. The method comprises the following steps: acquiring a first image acquired by a vehicle-mounted camera; determining a first image quality parameter value of a region of interest in the first image; determining a second image quality parameter value of the first image based on the first image quality parameter value; and, in response to the second image quality parameter value not satisfying a preset condition, generating a control parameter of the vehicle-mounted camera according to the second image quality parameter value.

Description

Control method and device of vehicle-mounted camera, equipment and medium
Technical Field
The disclosure relates to the technical field of intelligent vehicle cabins, and in particular to a control method and apparatus for a vehicle-mounted camera, a device, and a medium.
Background
In an intelligent vehicle cabin there are a number of computer-vision-based functions, and most of them place certain requirements on image quality. In other words, the imaging quality of the vehicle-mounted camera affects the various computer-vision-based functions in the intelligent vehicle cabin. Improving the imaging quality of the vehicle-mounted camera is therefore of considerable importance.
Disclosure of Invention
The present disclosure provides a technical solution for controlling a vehicle-mounted camera.
According to an aspect of the present disclosure, there is provided a control method of an in-vehicle camera, including:
acquiring a first image acquired by a vehicle-mounted camera;
determining a first image quality parameter value for a region of interest in the first image;
determining a second image quality parameter value for the first image based on the first image quality parameter value;
and in response to the second image quality parameter value not satisfying a preset condition, generating a control parameter of the vehicle-mounted camera according to the second image quality parameter value.
In one possible implementation, the method further includes:
and determining the region of interest in the first image according to the type of the vehicle-mounted camera.
In a possible implementation manner, the determining a region of interest in the first image according to the type of the vehicle-mounted camera includes:
determining a region of interest in the first image based on a driver seat region in the first image in response to the vehicle-mounted camera being a Driver Monitoring System (DMS) camera;
and/or,
determining a region of interest in the first image based on a plurality of seating regions in the first image in response to the in-vehicle camera being an Occupant Monitoring System (OMS) camera.
In a possible implementation manner, the determining a region of interest in the first image according to the type of the vehicle-mounted camera includes:
determining a candidate area in the first image according to the type of the vehicle-mounted camera;
and determining the region of interest in the first image based on the region where the preset target object in the candidate region is located.
In one possible implementation, the method further includes:
and determining the region of interest in the first image based on the region of the preset target object in the first image.
In one possible implementation, the method further includes:
and determining the region of interest in the first image according to the region of interest of the previous frame image of the first image in the image sequence acquired by the vehicle-mounted camera.
In one possible implementation, the determining a second image quality parameter value of the first image based on the first image quality parameter value includes:
acquiring first weight information corresponding to the region of interest and second weight information corresponding to a region of no interest in the first image, wherein the region of no interest represents a region outside the region of interest in the first image;
determining a second image quality parameter value of the first image according to the first image quality parameter value, the first weight information, a third image quality parameter value of the non-interesting region and the second weight information.
In a possible implementation manner, the non-interest region includes a plurality of sub-regions, and the second weight information includes weight values corresponding to the plurality of sub-regions one to one; the weight value corresponding to any one of the plurality of sub-regions is inversely related to the distance between the sub-region and the region of interest.
In one possible implementation, the control parameters include hardware control parameters, and the method further includes:
and writing the hardware control parameters into a register of the vehicle-mounted camera.
In a possible implementation manner, the generating, in response to that the second image quality parameter value does not satisfy a preset condition, a control parameter of the vehicle-mounted camera according to the second image quality parameter value includes:
obtaining a target value of the image quality parameter of the first image in response to the second image quality parameter value not meeting a preset condition;
determining an imaging parameter of the vehicle-mounted camera according to a difference value between the second image quality parameter value and the target value;
and converting the imaging parameters into the hardware control parameters.
In a possible implementation manner, the obtaining a target value of the image quality parameter of the first image in response to the second image quality parameter value not satisfying a preset condition includes:
responding to the second image quality parameter value not meeting the preset condition, and acquiring the target value range of the image quality parameter of the first image;
and determining the target value of the image quality parameter of the first image according to the second image quality parameter value and the target value range.
In a possible implementation manner, the determining an imaging parameter of the vehicle-mounted camera according to a difference between the second image quality parameter value and the target value includes:
inputting the difference value between the second image quality parameter value and the target value into a preset proportional-integral-derivative controller, and predicting the imaging parameter of the vehicle-mounted camera through the preset proportional-integral-derivative controller.
In one possible implementation, the image quality parameter includes exposure, and the imaging parameter includes at least one of: exposure time, analog gain, digital gain, aperture size.
In one possible implementation, after the determining the second image quality parameter value of the first image, the method further includes:
and in response to the second image quality parameter value not meeting the preset condition, optimizing the first image to obtain an optimized first image.
In one possible implementation, the method further includes:
adding 1 to the camera adjusting frame number in response to the second image quality parameter value not meeting the preset condition, wherein the initial value of the camera adjusting frame number is 0;
and under the condition that the number of the camera adjusting frames reaches a preset threshold value, optimizing the first image to obtain an optimized first image.
In a possible implementation manner, optimizing the first image to obtain an optimized first image includes:
and optimizing the region of interest to obtain an optimized first image.
In one possible implementation, the optimization process includes at least one of: Bayer domain denoising, YUV domain denoising, color noise removal, dead pixel removal, super-resolution reconstruction, gamma correction, color correction and brightness adjustment.
According to an aspect of the present disclosure, there is provided a control device of an in-vehicle camera, including:
the acquisition module is used for acquiring a first image acquired by the vehicle-mounted camera;
a first determination module for determining a first image quality parameter value for a region of interest in the first image;
a second determining module for determining a second image quality parameter value for the first image based on the first image quality parameter value;
and the generating module is used for responding to the condition that the second image quality parameter value does not meet the preset condition, and generating the control parameter of the vehicle-mounted camera according to the second image quality parameter value.
In one possible implementation, the apparatus further includes:
and the third determining module is used for determining the region of interest in the first image according to the type of the vehicle-mounted camera.
In one possible implementation manner, the third determining module is configured to:
determining a region of interest in the first image based on a driver seat region in the first image in response to the vehicle-mounted camera being a Driver Monitoring System (DMS) camera;
and/or,
determining a region of interest in the first image based on a plurality of seating regions in the first image in response to the in-vehicle camera being an Occupant Monitoring System (OMS) camera.
In one possible implementation manner, the third determining module is configured to:
determining a candidate area in the first image according to the type of the vehicle-mounted camera;
and determining the region of interest in the first image based on the region where the preset target object in the candidate region is located.
In one possible implementation, the apparatus further includes:
and the fourth determining module is used for determining the region of interest in the first image based on the region of the preset target object in the first image.
In one possible implementation, the apparatus further includes:
and the fifth determining module is used for determining the region of interest in the first image according to the region of interest of the previous frame of image of the first image in the image sequence acquired by the vehicle-mounted camera.
In one possible implementation, the determining a second image quality parameter value of the first image based on the first image quality parameter value includes:
acquiring first weight information corresponding to the region of interest and second weight information corresponding to a region of no interest in the first image, wherein the region of no interest represents a region outside the region of interest in the first image;
determining a second image quality parameter value of the first image according to the first image quality parameter value, the first weight information, a third image quality parameter value of the non-interesting region and the second weight information.
In a possible implementation manner, the non-interest region includes a plurality of sub-regions, and the second weight information includes weight values corresponding to the plurality of sub-regions one to one; the weight value corresponding to any one of the plurality of sub-regions is inversely related to the distance between the sub-region and the region of interest.
In one possible implementation, the control parameter includes a hardware control parameter, and the apparatus further includes:
and the writing module is used for writing the hardware control parameters into a register of the vehicle-mounted camera.
In one possible implementation, the generating module is configured to:
obtaining a target value of the image quality parameter of the first image in response to the second image quality parameter value not meeting a preset condition;
determining an imaging parameter of the vehicle-mounted camera according to a difference value between the second image quality parameter value and the target value;
and converting the imaging parameters into the hardware control parameters.
In one possible implementation, the generating module is configured to:
responding to the second image quality parameter value not meeting the preset condition, and acquiring the target value range of the image quality parameter of the first image;
and determining the target value of the image quality parameter of the first image according to the second image quality parameter value and the target value range.
In one possible implementation, the generating module is configured to:
inputting the difference value between the second image quality parameter value and the target value into a preset proportional-integral-derivative controller, and predicting the imaging parameter of the vehicle-mounted camera through the preset proportional-integral-derivative controller.
In one possible implementation, the image quality parameter includes exposure, and the imaging parameter includes at least one of: exposure time, analog gain, digital gain, aperture size.
In one possible implementation, the apparatus further includes:
and the first optimization module is used for responding to the condition that the second image quality parameter value does not meet the preset condition, and optimizing the first image to obtain an optimized first image.
In a possible implementation manner, the apparatus further includes a second optimization module, and the second optimization module is configured to:
adding 1 to the camera adjusting frame number in response to the second image quality parameter value not meeting the preset condition, wherein the initial value of the camera adjusting frame number is 0;
and under the condition that the number of the camera adjusting frames reaches a preset threshold value, optimizing the first image to obtain an optimized first image.
In a possible implementation manner, the first optimization module and/or the second optimization module is configured to:
and optimizing the region of interest to obtain an optimized first image.
In one possible implementation, the optimization process includes at least one of: Bayer domain denoising, YUV domain denoising, color noise removal, dead pixel removal, super-resolution reconstruction, gamma correction, color correction and brightness adjustment.
According to an aspect of the present disclosure, there is provided an electronic device including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the disclosure, a first image acquired by a vehicle-mounted camera is obtained, a first image quality parameter value of a region of interest in the first image is determined, a second image quality parameter value of the first image is determined based on the first image quality parameter value, and, in response to the second image quality parameter value not satisfying a preset condition, a control parameter of the vehicle-mounted camera is generated according to the second image quality parameter value. The parameters of the vehicle-mounted camera are thereby adjusted based on the image quality parameter value of the region of interest in the image it acquires, so that the imaging quality of the vehicle-mounted camera can be improved specifically for the region of interest. This effectively improves the computer-vision-based functions of the intelligent vehicle cabin and reduces the possibility that those functions fail due to poor imaging quality of the vehicle-mounted camera. In addition, the imaging quality of the vehicle-mounted camera is improved in software, avoiding an increase in the hardware cost of the intelligent vehicle cabin.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a control method of an in-vehicle camera provided in an embodiment of the present disclosure.
Fig. 2 shows a block diagram of a control device of an in-vehicle camera provided in an embodiment of the present disclosure.
Fig. 3 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure.
Fig. 4 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, a vehicle-mounted camera in an intelligent vehicle cabin that is not equipped with a hardware ISP (Image Signal Processor) has no means of improving its imaging quality, and when the imaging quality deteriorates due to poor illumination conditions, high temperature, and the like, the related functions of the intelligent vehicle cabin are at risk of failure. To improve the imaging quality of the vehicle-mounted camera, the related art requires a hardware ISP, which increases hardware cost; moreover, once the hardware is finalized, the functions of the hardware ISP are difficult to adjust flexibly for different application scenarios, and the room for subsequent optimization is limited.
In the embodiments of the disclosure, a first image acquired by a vehicle-mounted camera is obtained, a first image quality parameter value of a region of interest in the first image is determined, a second image quality parameter value of the first image is determined based on the first image quality parameter value, and, in response to the second image quality parameter value not satisfying a preset condition, a control parameter of the vehicle-mounted camera is generated according to the second image quality parameter value. The parameters of the vehicle-mounted camera are thereby adjusted based on the image quality parameter value of the region of interest in the image it acquires, so that the imaging quality of the vehicle-mounted camera can be improved specifically for the region of interest. This effectively improves the computer-vision-based functions of the intelligent vehicle cabin and reduces the possibility that those functions fail due to poor imaging quality of the vehicle-mounted camera. In addition, the imaging quality of the vehicle-mounted camera is improved in software, avoiding an increase in the hardware cost of the intelligent vehicle cabin.
A control method of a vehicle-mounted camera according to an embodiment of the present disclosure is described in detail below with reference to the accompanying drawings. Fig. 1 shows a flowchart of a control method of an in-vehicle camera provided in an embodiment of the present disclosure. In a possible implementation manner, the control method of the vehicle-mounted camera may be executed by a terminal device, a server, or another processing device. The terminal device may be a vehicle-mounted device, User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, or a wearable device. The vehicle-mounted device may be a vehicle, a domain controller, or a processor in a vehicle cabin, and may also be a device host for performing data processing operations on images and the like in a DMS (Driver Monitoring System) or an OMS (Occupant Monitoring System). In some possible implementation manners, the control method of the vehicle-mounted camera may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in fig. 1, the control method of the in-vehicle camera includes steps S11 to S14.
In step S11, a first image captured by the in-vehicle camera is acquired.
In step S12, a first image quality parameter value for a region of interest in the first image is determined.
In step S13, a second image quality parameter value for the first image is determined based on the first image quality parameter value.
In step S14, in response to that the second image quality parameter value does not satisfy a preset condition, a control parameter of the in-vehicle camera is generated according to the second image quality parameter value.
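Taken together, steps S11 to S14 amount to a per-frame control loop. The following Python sketch is one possible reading of that loop under simplifying assumptions: mean brightness stands in for the image quality parameter, the non-ROI value is approximated by the whole-frame mean, and the ROI box, weighting factor, and target range are invented values rather than anything specified by the disclosure.

```python
import numpy as np

def mean_brightness(image: np.ndarray, box: tuple) -> float:
    """Mean grey level inside a rectangular region given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return float(image[y0:y1, x0:x1].mean())

def control_step(image: np.ndarray, roi: tuple,
                 alpha_roi: float = 0.8, target_range: tuple = (100.0, 140.0)):
    """One pass through steps S11-S14, using mean brightness as the quality parameter."""
    p1 = mean_brightness(image, roi)                 # S12: first image quality parameter value (ROI)
    p3 = float(image.mean())                         # quality value of the rest of the frame (simplified)
    p2 = alpha_roi * p1 + (1.0 - alpha_roi) * p3     # S13: second image quality parameter value
    lo, hi = target_range
    if lo <= p2 <= hi:                               # S14: preset condition is satisfied, nothing to do
        return None
    target = min(max(p2, lo), hi)                    # pull the value towards the target range
    return {"quality_value": p2, "error": target - p2}  # basis for generating control parameters

# Usage with a synthetic, deliberately under-exposed frame (S11 would normally grab it from the camera):
frame = np.random.randint(0, 50, (720, 1280), dtype=np.uint8)
print(control_step(frame, roi=(400, 100, 880, 620)))
```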
Embodiments of the present disclosure may be applied to any type of vehicle, such as passenger cars, taxis, ride-hailing cars, shared cars, buses, and the like. In the embodiments of the present disclosure, the vehicle-mounted camera may be any camera provided in a vehicle. The number of vehicle-mounted cameras may be one, or two or more. The vehicle-mounted camera may be mounted inside the vehicle cabin and/or outside the vehicle cabin. The vehicle-mounted camera may be a vehicle-mounted camera equipped with a hardware ISP (Image Signal Processor) or a vehicle-mounted camera not equipped with a hardware ISP. The first image may represent any one of the images captured by the vehicle-mounted camera. The image acquired by the vehicle-mounted camera in the embodiments of the present disclosure can be used in application scenarios such as face recognition, driver monitoring, passenger monitoring, and the like, which are not limited herein. For example, driver monitoring may include functions such as distraction detection, fatigue detection and dangerous action recognition, and passenger monitoring may include functions such as age detection and emotion detection.
In the disclosed embodiment, the region of interest in the first image may represent an area of significant interest for the intelligent cabin functionality to which the first image is applied. Wherein the number of the regions of interest in the first image may be one or more than two.
In one possible implementation, the method further includes: and determining the region of interest in the first image according to the type of the vehicle-mounted camera. In this implementation, the type of onboard camera may be a DMS camera, an OMS camera, a general camera, or the like. The images collected by the vehicle-mounted cameras of different types can be used for different intelligent vehicle cabin functions, and of course, the images collected by the same vehicle-mounted camera or the same type of vehicle-mounted camera can also be used for multiple intelligent vehicle cabin functions. In this implementation manner, the region of interest in the first image may be determined according to the type of the vehicle-mounted camera acquiring the first image and the correspondence between the type of the vehicle-mounted camera and the image coordinate range of the region of interest. The region of interest in the first image is determined according to the type of the vehicle-mounted camera acquiring the first image, so that the region of interest in the first image can be determined quickly and accurately.
As an example of this implementation, the determining, according to the type of the in-vehicle camera, the region of interest in the first image includes: determining a region of interest in the first image based on a driver seat region in the first image in response to the vehicle-mounted camera being a Driver Monitoring System (DMS) camera; and/or, in response to the in-vehicle camera being an Occupant Monitoring System (OMS) camera, determining a region of interest in the first image based on a plurality of seating regions in the first image.
In one example, in a case that the vehicle-mounted camera acquiring the first image is a DMS camera, the region of interest in the first image may be determined according to a first preset image coordinate range corresponding to the DMS camera. For example, a region in the first image surrounded by a first preset image coordinate range may be determined as a region of interest in the first image; for another example, a partial region in a region surrounded by the first preset image coordinate range in the first image may be determined as the region of interest in the first image. The first preset image coordinate range may represent an image coordinate range corresponding to a preset main driver seat area. In the case where the onboard camera acquiring the first image is the DMS camera, the number of regions of interest in the first image may be one.
In another example, in a case that the vehicle-mounted camera acquiring the first image is an OMS camera, the region of interest in the first image may be determined according to a plurality of second preset image coordinate ranges corresponding to the OMS camera. For example, regions surrounded by a plurality of second preset image coordinate ranges in the first image may be respectively determined as regions of interest in the first image; for another example, a partial region of the region in the first image surrounded by the plurality of second preset image coordinate ranges may be determined as the region of interest in the first image. The second preset image coordinate ranges may respectively represent image coordinate ranges corresponding to a plurality of seat regions that are set in advance, for example, the plurality of seat regions may include a front passenger seat region, a rear left seat region, a rear middle seat region, and a rear right seat region. In the case where the vehicle-mounted camera acquiring the first image is an OMS camera, the number of regions of interest in the first image may be plural.
In this example, by responding to the vehicle-mounted camera being a driver monitoring system DMS camera, and determining the region of interest in the first image based on the driver seat area in the first image, in the case that the vehicle-mounted camera acquiring the first image is the DMS camera, the parameters of the DMS camera can be adjusted based on the image quality parameter values of the driver seat area in the first image, the imaging quality of the DMS camera can be improved for the driver seat area in the first image, so that the driver monitoring function can be effectively improved, and the possibility of failure of the driver monitoring function due to poor imaging quality of the DMS camera can be reduced. By responding to the fact that the vehicle-mounted camera is an OMS camera, and determining the region of interest in the first image based on the plurality of seat areas in the first image, when the vehicle-mounted camera for acquiring the first image is the OMS camera, the parameters of the OMS camera can be adjusted based on the image quality parameter values of the plurality of seat areas in the first image, the imaging quality of the OMS camera can be improved aiming at the plurality of seat areas in the first image, the passenger monitoring function can be effectively improved, and the possibility of passenger monitoring function failure caused by poor imaging quality of the OMS camera can be reduced.
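As a concrete illustration of the type-based lookup just described, the sketch below hard-codes one ROI box for a DMS camera and several seat boxes for an OMS camera. All coordinate ranges are invented placeholders for the "first preset image coordinate range" and "second preset image coordinate ranges"; they are not values from the disclosure.

```python
# Illustrative only: coordinates are assumptions, not part of the disclosure.
DMS_ROI_BOXES = [(380, 60, 900, 660)]          # single driver-seat region
OMS_ROI_BOXES = [
    (40, 80, 380, 520),                        # front passenger seat region
    (420, 300, 700, 640),                      # rear left seat region
    (720, 300, 1000, 640),                     # rear middle seat region
    (1020, 300, 1260, 640),                    # rear right seat region
]

def regions_of_interest(camera_type: str):
    """Map a camera type to its preset region-of-interest coordinate ranges."""
    if camera_type == "DMS":
        return DMS_ROI_BOXES
    if camera_type == "OMS":
        return OMS_ROI_BOXES
    raise ValueError(f"unsupported camera type: {camera_type!r}")

print(regions_of_interest("DMS"))
```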
As an example of this implementation, the determining, according to the type of the in-vehicle camera, the region of interest in the first image includes: determining a candidate area in the first image according to the type of the vehicle-mounted camera; and determining the region of interest in the first image based on the region where the preset target object in the candidate region is located. In one example, the preset target object may include one or more objects specified. For example, the specified object may be a specified user, such as user A. In another example, the preset target object may include a specified one or more types of objects. For example, the preset target object may include at least one of a human face, eyes, mouth, nose, facial muscles, and the like.
In one example, in a case where the on-vehicle camera that acquires the first image is a DMS camera, the driver seat region in the first image may be taken as a candidate region in the first image, and the region of interest in the first image may be determined based on a region in which a preset target object is located in the candidate region. For example, if the preset target object is a human face, the region where the human face is located in the driver seat region in the first image may be determined as the region of interest in the first image.
In another example, in a case where the in-vehicle camera acquiring the first image is an OMS camera, a plurality of seat regions in the first image may be respectively taken as candidate regions in the first image, and the region of interest in the first image may be determined based on a region in which a preset target object is located among the plurality of candidate regions. For example, if the preset target object is an eye, the area where the eye is located in the plurality of seat areas in the first image may be respectively determined as the region of interest in the first image.
In this example, the candidate region in the first image is determined according to the type of the vehicle-mounted camera, and the region of interest in the first image is determined based on the region where the preset target object is located in the candidate region, so that the region of interest can be further determined based on the region where the preset target object is located on the basis of the candidate region determined based on the type of the vehicle-mounted camera, the range of the determined region of interest can be narrowed, the accuracy of subsequent intelligent vehicle cabin functions can be improved, the calculation amount of the subsequent intelligent vehicle cabin functions can be reduced, and the running speed of the intelligent vehicle cabin functions can be improved.
In another possible implementation manner, the method further includes: and determining the region of interest in the first image based on the region of the preset target object in the first image. For example, in an application scenario where it is necessary to determine whether to wear a mask, the preset target object may include a mouth and a nose, that is, a region in the first image where the mouth and the nose are located may be determined as a region of interest in the first image. For another example, in an application scenario where it is required to determine whether sunglasses are worn, the preset target object may include an eye, that is, a region where the eye is located in the first image may be determined as the region of interest in the first image. For another example, in an application scenario of determining a dangerous action, the preset target object may include at least one of a cup, a cigarette, a phone, and the like, that is, a region in which at least one of the cup, the cigarette, the phone, and the like in the first image is located may be determined as the region of interest in the first image. In the implementation mode, the region of interest in the first image is determined based on the region where the preset target object in the first image is located, so that the region of interest in the first image can be flexibly and accurately determined based on different application scenes, the range of the determined region of interest can be reduced, the accuracy of subsequent intelligent vehicle cabin functions can be improved, the calculated amount of the subsequent intelligent vehicle cabin functions can be reduced, and the running speed of the intelligent vehicle cabin functions can be improved.
In another possible implementation manner, the method further includes: determining the region of interest in the first image according to the region of interest of the previous frame image of the first image in the image sequence acquired by the vehicle-mounted camera. Generally, the positions of the regions of interest in adjacent frames acquired by the vehicle-mounted camera are relatively close; therefore, in this implementation, when the region of interest has already been determined in the previous frame image of the first image, the region of interest in that previous frame image can be determined as the region of interest in the first image, so that the region of interest in the first image can be determined quickly.
In the disclosed embodiments, the image quality parameter may represent any parameter that affects image quality. For example, the image quality parameters may include any of the parameters of the image that may have an effect on subsequent intelligent cabin functions. The number of image quality parameters may be one or more than two. For example, the image quality parameter may include at least one of exposure, sharpness, saturation, white balance, noise, and the like.
In an embodiment of the disclosure, the first image quality parameter value may represent a value of an image quality parameter of a region of interest in the first image, the second image quality parameter value may represent a value of an image quality parameter of the first image, the second image quality parameter value being determined based on at least the first image quality parameter value.
In one possible implementation, the determining a second image quality parameter value of the first image based on the first image quality parameter value includes: acquiring first weight information corresponding to the region of interest and second weight information corresponding to a region of no interest in the first image, wherein the region of no interest represents a region outside the region of interest in the first image; determining a second image quality parameter value of the first image according to the first image quality parameter value, the first weight information, a third image quality parameter value of the non-interesting region and the second weight information. The third image quality parameter value may represent a value of an image quality parameter of a region of no interest of the first image, the first weight information may represent weight information corresponding to the region of interest in the first image, and the second weight information may represent weight information corresponding to the region of no interest in the first image.
In this implementation, the first weight information may include one or more weight values. As an example of this implementation, the first weight information may include only one weight value, i.e., the region of interest corresponds to only one weight value, and the weight values corresponding to different sub-regions in the region of interest are the same. As another example of this implementation, the first weight information may include a plurality of weight values, i.e., different sub-regions in the region of interest may correspond to different weight values, and accordingly, image quality parameter values of the different sub-regions in the region of interest may be determined separately. For example, the region of interest is a driver seat region, and a weight value corresponding to a region where eyes are located in the region of interest is greater than a weight value corresponding to a region where eyes are not located.
In this implementation, the second weight information may include one or more weight values. As an example of this implementation, the second weight information may include only one weight value, that is, the non-interest region corresponds to only one weight value, and the weight values corresponding to different sub-regions in the non-interest region are the same. As another example of this implementation, the second weight information may include a plurality of weight values, i.e., different sub-regions in the non-region of interest may correspond to different weight values, and accordingly, image quality parameter values of the different sub-regions in the non-region of interest may be respectively determined.
In this implementation, the weight value corresponding to the region of interest in the first image is greater than the weight value corresponding to the region of non-interest. In the case where the first weight information includes only one weight value and the second weight information includes only one weight value, the only weight value in the first weight information is greater than the only weight value in the second weight information; in a case where the first weight information includes only one weight value and the second weight information includes a plurality of weight values, a unique weight value in the first weight information is larger than a largest weight value in the second weight information; in a case where the first weight information includes a plurality of weight values and the second weight information includes only one weight value, a smallest weight value in the first weight information is larger than a unique weight value in the second weight information; in a case where the first weight information includes a plurality of weight values and the second weight information includes a plurality of weight values, a smallest weight value in the first weight information is larger than a largest weight value in the second weight information.
In this implementation manner, by acquiring first weight information corresponding to the region of interest and second weight information corresponding to a region of no interest in the first image, and determining a second image quality parameter value of the first image according to the first image quality parameter value, the first weight information, a third image quality parameter value of the region of no interest, and the second weight information, the image quality parameter of the region of interest can be focused on the premise of comprehensively considering the image quality parameter values of the entire first image, thereby contributing to overall improvement of imaging quality of the vehicle-mounted camera.
In one example, Formula 1 may be used to determine the second image quality parameter value P of the first image:

P = α₁p₁ + α₂p₂    (Formula 1)

where p₁ denotes the first image quality parameter value of the region of interest in the first image, α₁ denotes the first weight information corresponding to the region of interest in the first image, p₂ denotes the third image quality parameter value of the region of non-interest in the first image, and α₂ denotes the second weight information corresponding to the region of non-interest in the first image.
As an example of this implementation, the region of non-interest includes a plurality of sub-regions, and the second weight information includes weight values corresponding to the plurality of sub-regions one to one; the weight value corresponding to any one of the plurality of sub-regions is inversely related to the distance between the sub-region and the region of interest. In this example, for any one of the sub-regions of the region of non-interest, the larger the distance between the sub-region and the region of interest is, the smaller the weight value corresponding to the sub-region is, and the smaller the distance between the sub-region and the region of interest is, the larger the weight value corresponding to the sub-region is. The distance between any one of the sub-regions of the non-region of interest and the region of interest may be a distance between a geometric center of the sub-region and a geometric center of the region of interest, or may be a minimum distance between an edge point of the sub-region and an edge point of the region of interest, and the like, which are not limited herein. According to the example, the accuracy of the obtained second image quality parameter is further improved, and the imaging quality of the vehicle-mounted camera is further improved.
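A hedged numerical sketch of Formula 1 for the case where the region of non-interest is split into several sub-regions follows. The 1/(1+d) weighting and the normalization are assumptions chosen only to respect the two constraints stated above, namely an inverse relation between weight and distance, and an ROI weight larger than every non-ROI weight (the latter holds for the values used here).

```python
import numpy as np

def second_quality_value(p_roi, alpha_roi, sub_values, sub_distances):
    """
    Formula 1 extended to several non-ROI sub-regions. Each sub-region weight is taken as
    1 / (1 + d), an assumed form that is inversely related to the distance d to the region
    of interest; the non-ROI weights are then scaled so they sum to (1 - alpha_roi).
    """
    raw = np.array([1.0 / (1.0 + d) for d in sub_distances])
    sub_weights = raw / raw.sum() * (1.0 - alpha_roi)
    return alpha_roi * p_roi + float(np.dot(sub_weights, np.asarray(sub_values)))

# ROI brightness 90, three non-ROI sub-regions lying at increasing distances from the ROI:
print(second_quality_value(90.0, 0.7, sub_values=[120.0, 140.0, 160.0],
                           sub_distances=[10.0, 60.0, 200.0]))
```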
In another possible implementation manner, the determining a second image quality parameter value of the first image based on the first image quality parameter value includes: determining the first image quality parameter value as a second image quality parameter value of the first image. In this implementation, the second image quality parameter value of the first image may be determined from only the first image quality parameter value of the region of interest in the first image without considering the third image quality parameter value of the non-region of interest in the first image. For example, a first image quality parameter value for a region of interest in the first image may be determined as a second image quality parameter value for the first image.
In the embodiment of the present disclosure, the preset condition may represent a preset condition for determining whether an image quality parameter of an image acquired by the vehicle-mounted camera meets a requirement. If the second image quality parameter value of the first image meets the preset condition, the image quality parameter of the image currently acquired by the vehicle-mounted camera can be represented to meet the requirement; if the second image quality parameter value of the first image does not meet the preset condition, it can be shown that the image quality parameter of the image currently acquired by the vehicle-mounted camera does not meet the requirement, and the parameter adjustment of the vehicle-mounted camera is required. In the embodiment of the disclosure, different preset conditions can be set for different intelligent vehicle cabin functions. Of course, different intelligent vehicle cabin functions may correspond to the same preset conditions, and are not limited herein. In the embodiment of the disclosure, if the second image quality parameter value of the first image does not satisfy the preset condition, the control parameter of the vehicle-mounted camera may be generated. Wherein the control parameter may represent a parameter for controlling the in-vehicle camera.
In one possible implementation, the control parameters include hardware control parameters, and the method further includes: and writing the hardware control parameters into a register of the vehicle-mounted camera. In this implementation, the hardware control parameter may represent a parameter for configuring a register of the in-vehicle camera. In this implementation manner, in response to that the second image quality parameter value of the first image does not satisfy the preset condition, the hardware control parameter of the vehicle-mounted camera is generated according to the second image quality parameter value, and the generated hardware control parameter is written into the register of the vehicle-mounted camera, so that the vehicle-mounted camera can adjust the parameter in a manner of configuring the register, and thus, the hardware ISP does not need to be configured for the vehicle-mounted camera.
As an example of this implementation, the generating, in response to the second image quality parameter value not meeting a preset condition, a control parameter of the vehicle-mounted camera according to the second image quality parameter value includes: obtaining a target value of the image quality parameter of the first image in response to the second image quality parameter value not meeting a preset condition; determining an imaging parameter of the vehicle-mounted camera according to a difference value between the second image quality parameter value and the target value; and converting the imaging parameters into the hardware control parameters. In this example, the target value of the image quality parameter of the first image may be a preset fixed value, or may be a value determined from a preset value range. According to the corresponding relation between the parameters of the vehicle-mounted camera and the image quality parameters of the first image and the difference value between the second image quality parameter values and the target values, the imaging parameters of the vehicle-mounted camera can be determined. In this example, the imaging parameters of the vehicle-mounted camera can be directly converted into the hardware control parameters, so that after the hardware control parameters are written into the register of the vehicle-mounted camera, the vehicle-mounted camera can adjust the parameters in a manner of configuring the register.
In one example, the obtaining the target value of the image quality parameter of the first image in response to the second image quality parameter value not satisfying a preset condition includes: responding to the second image quality parameter value not meeting the preset condition, and acquiring the target value range of the image quality parameter of the first image; and determining the target value of the image quality parameter of the first image according to the second image quality parameter value and the target value range. The target value range may represent a preset value range of the image quality parameter. The range of target values of the image quality parameter may include at least one value. In this example, a mapping relationship between the current value and the target value of the image quality parameter may be set in advance, whereby the target value of the image quality parameter of the first image may be determined from the range of the target value based on the second image quality parameter value of the first image and the mapping relationship. In this example, by acquiring a range of target values of the image quality parameter of the first image in response to the second image quality parameter value not satisfying a preset condition, and determining the target value of the image quality parameter of the first image based on the second image quality parameter value and the range of target values, it is possible to contribute to efficiently and stably adjusting the parameter of the in-vehicle camera.
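To make the value-range variant tangible, here is a minimal sketch of one possible mapping from the current value to a target value within the range; both the range itself and the nearest-bound rule are assumptions, not part of the disclosure.

```python
def target_value(current_value: float, target_range: tuple = (100.0, 140.0)) -> float:
    """Assumed mapping: aim for the nearest bound of the target range, which keeps each
    adjustment step small and favours stable convergence."""
    lo, hi = target_range
    if current_value < lo:
        return lo
    if current_value > hi:
        return hi
    return current_value

print(target_value(62.0))   # under-exposed frame, so aim for the lower bound, 100.0
```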
In another example, it may be unnecessary to set a range of target values of the image quality parameters in advance, and only the target values of the image quality parameters may be set. In this example, the preset target value may be directly acquired in response to the second image quality parameter value of the first image not satisfying the preset condition.
In one example, the determining the imaging parameter of the vehicle-mounted camera according to the difference between the second image quality parameter value and the target value includes: inputting the difference between the second image quality parameter value and the target value into a preset Proportional-Integral-Derivative (PID) controller, and predicting the imaging parameter of the vehicle-mounted camera through the preset PID controller. In this example, the imaging parameter of the in-vehicle camera is predicted based on the difference between the second image quality parameter value of the first image and the target value by using the preset PID controller, whereby the accuracy of predicting the imaging parameter of the in-vehicle camera can be improved.
In other examples, a preset function may be used to calculate the imaging parameter of the vehicle-mounted camera based on the difference between the second image quality parameter value of the first image and the target value, or another type of controller may be used to predict the imaging parameter of the vehicle-mounted camera based on the difference between the second image quality parameter value of the first image and the target value.
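The following is a minimal sketch of the PID-based prediction. The controller gains, the interpretation of the output as a relative exposure-time correction, and the scaling constant are all assumptions made for illustration.

```python
class PID:
    """Minimal discrete PID controller; the gains are illustrative, not taken from the disclosure."""
    def __init__(self, kp: float = 0.6, ki: float = 0.1, kd: float = 0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float) -> float:
        """error = target value - second image quality parameter value."""
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Interpreting the controller output as a relative exposure-time correction is itself an assumption:
pid = PID()
exposure_time_ms = 8.0
output = pid.step(error=38.0)              # frame is 38 grey levels darker than the target
exposure_time_ms *= 1.0 + output / 255.0   # scale into a multiplicative adjustment
print(round(exposure_time_ms, 3))
```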
As one example of this implementation, the image quality parameter includes exposure, and the imaging parameter includes at least one of: exposure time, analog gain, digital gain, aperture size. In one example, the first image may be converted into a gray scale image, and the exposure of the first image may be calculated according to the gray scale value of the region of interest and the gray scale value of the region of non-interest in the first image. The exposure of the first image is positively correlated with the gray value of the region of interest in the first image, and the exposure of the first image is positively correlated with the gray value of the region of no interest in the first image. In the example, the exposure of the first image is determined according to the exposure of the interested area in the first image acquired by the vehicle-mounted camera, and at least one of the exposure time, the analog gain, the digital gain and the aperture size of the vehicle-mounted camera is adjusted according to the exposure of the first image in response to the fact that the exposure of the first image does not meet the preset condition, so that the exposure parameter of the vehicle-mounted camera is adjusted according to the exposure of the interested area in the image acquired by the vehicle-mounted camera, the imaging quality of the vehicle-mounted camera can be improved aiming at the interested area under different illumination conditions, the intelligent vehicle cabin function based on computer vision can be effectively improved, and the possibility that the intelligent vehicle cabin function fails due to underexposure or overexposure of the vehicle-mounted camera can be reduced.
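The final step, converting imaging parameters such as exposure time and analog gain into hardware control parameters that can be written into the camera's registers, is necessarily sensor-specific. The sketch below is a purely hypothetical example of such a conversion; every register address and encoding in it is an assumption standing in for whatever a concrete sensor datasheet specifies.

```python
# Hypothetical conversion of imaging parameters into hardware control parameters (register writes).
SENSOR_REGS = {"exposure_lines": 0x3500, "analog_gain": 0x3508}   # placeholder addresses

def to_register_writes(exposure_time_ms: float, analog_gain: float, line_time_us: float = 29.6):
    """Return (register address, value) pairs ready to be written to the camera, e.g. over I2C."""
    exposure_lines = max(1, int(exposure_time_ms * 1000.0 / line_time_us))
    gain_code = int(analog_gain * 16)                              # assumed 4.4 fixed-point gain encoding
    return [(SENSOR_REGS["exposure_lines"], exposure_lines),
            (SENSOR_REGS["analog_gain"], gain_code)]

print(to_register_writes(exposure_time_ms=8.9, analog_gain=2.0))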
In one possible implementation, after the determining the second image quality parameter value of the first image, the method further includes: and in response to the second image quality parameter value not meeting the preset condition, optimizing the first image to obtain an optimized first image. In the implementation mode, the first image is optimized by responding that the second image quality parameter value of the first image does not meet the preset condition, so that the optimized first image is obtained, the parameter adjustment of the vehicle-mounted camera can be carried out based on the image quality parameter value of the image acquired by the vehicle-mounted camera, the acquired image can be subjected to subsequent optimization processing, and the quality of the image acquired by the vehicle-mounted camera can be improved.
In one possible implementation, the method further includes: adding 1 to the camera adjusting frame number in response to the second image quality parameter value not meeting the preset condition, wherein the initial value of the camera adjusting frame number is 0; and under the condition that the number of the camera adjusting frames reaches a preset threshold value, optimizing the first image to obtain an optimized first image. For example, if the preset threshold is N, in this implementation manner, the parameter adjustment of the onboard camera may be performed first in response to that the image quality parameter value of the image acquired by the onboard camera does not satisfy the preset condition, and the optimization process is automatically started under the condition that the image quality parameter value of the continuous N frames of images acquired by the onboard camera does not satisfy the preset condition, so as to further improve the quality of the image acquired by the onboard camera. For example, in an application scenario in which the face id (faceid) is registered, if the exposure amount of a face area (region of interest) in an image acquired by a vehicle-mounted camera does not satisfy a preset condition, the exposure parameters of the vehicle-mounted camera may be adjusted first, and then the image is optimized under the condition that the exposure amounts of consecutive N frames of images acquired by the vehicle-mounted camera do not satisfy the preset condition.
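A small sketch of the frame-counting logic described above follows. The threshold value and the reset-on-success behaviour (inferred from the "consecutive N frames" phrasing in the same paragraph) are assumptions.

```python
class AdjustmentCounter:
    """Camera-adjusting frame counter; threshold N and the reset rule are assumptions."""
    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.count = 0                     # initial value of the camera adjusting frame number is 0

    def update(self, condition_satisfied: bool) -> bool:
        """Return True when optimization processing of the current frame should be started."""
        if condition_satisfied:
            self.count = 0                 # quality is acceptable again, stop counting
            return False
        self.count += 1                    # add 1 when the preset condition is not met
        return self.count >= self.threshold

counter = AdjustmentCounter(threshold=3)
print([counter.update(ok) for ok in (False, False, True, False, False, False)])
```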
As an example of this implementation, performing optimization processing on the first image to obtain an optimized first image includes: performing optimization processing on the region of interest to obtain the optimized first image. In this example, optimization processing may be performed only on the region of interest in the first image, which reduces the amount of computation required for the optimization processing and increases its speed.
As another example of this implementation, the optimization processing may be performed on the entire first image, that is, on both the region of interest and the non-interest region of the first image.
As an example of this implementation, the optimization processing includes at least one of: Bayer-domain denoising, YUV-domain denoising, color noise removal, dead pixel removal, super-resolution reconstruction, gamma correction, color correction and brightness adjustment. With this example, the image quality of the first image can be further improved, further reducing the possibility that intelligent cabin functions fail due to poor imaging quality of the vehicle-mounted camera.
Of course, the first image may also be optimized using other image optimization methods according to the actual requirements of the intelligent cabin functions, which is not limited herein.
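Purely as an illustration of such an optimization pass, the following sketch applies denoising and gamma correction only to the region of interest; the specific OpenCV operations and parameter values are assumptions standing in for the fuller pipeline listed above.

```python
import cv2
import numpy as np

def optimize_roi(image_bgr, roi, gamma=0.8, denoise_strength=7):
    """Run a lightweight optimization pass on the region of interest only.

    This is a stand-in for the fuller pipeline (Bayer/YUV denoising, dead pixel
    removal, super-resolution reconstruction, color correction, ...) described
    in the text; only denoising and gamma correction are sketched here.
    """
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]

    # Denoise the ROI patch with non-local means denoising.
    patch = cv2.fastNlMeansDenoisingColored(patch, None, denoise_strength,
                                            denoise_strength, 7, 21)

    # Gamma correction via a lookup table to lift or compress the ROI brightness.
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    patch = cv2.LUT(patch, lut)

    out = image_bgr.copy()
    out[y:y + h, x:x + w] = patch
    return out
```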
The control method of a vehicle-mounted camera provided by the embodiments of the present disclosure can be applied to technical fields such as intelligent vehicle cabins and intelligent vehicles. The embodiments of the present disclosure can be applied to a vehicle-mounted camera that is not provided with a hardware ISP, improving the imaging quality of the camera in software without increasing hardware cost; they can also be applied to a vehicle-mounted camera provided with a hardware ISP, working together with the existing hardware ISP to extend its image optimization capability.
The following describes a control method of a vehicle-mounted camera provided by the embodiment of the present disclosure through three specific application scenarios.
Application scenario one: in hot summer, direct sunlight shines on the DMS camera installed in the cabin, raising the temperature of the DMS camera and degrading the quality of the images it acquires (increased noise, reduced frame rate, etc.), so that driver monitoring functions such as fatigue detection and distraction detection face the risk of failure. With the control method of a vehicle-mounted camera provided by the embodiments of the present disclosure, the frame rate of the DMS camera is reduced and/or its exposure time is shortened, which lowers the temperature of the DMS camera; this reduces the noise in the images acquired by the DMS camera, improves their quality, and allows the driver monitoring functions of the intelligent cabin to operate normally.
Application scenario two: when the vehicle drives on a tree-shaded road, sunlight shining on the driver's face through gaps between the leaves forms patches of bright spots, causing misrecognition in DMS functions such as face detection, fatigue detection, distraction detection, gaze area detection and dangerous action detection. With the control method of a vehicle-mounted camera provided by the embodiments of the present disclosure, the exposure time of the DMS camera can be adjusted automatically, and over-bright areas in the images acquired by the DMS camera can be smoothed through optimization processing, further improving image quality and reducing the probability of misrecognition in these DMS functions.
Application scenario three: in a corner of an underground parking garage, the available light cannot illuminate the driver's face region, so the driver cannot be recognized and cannot log in by face. With the control method of a vehicle-mounted camera provided by the embodiments of the present disclosure, the gain coefficient of the vehicle-mounted camera can be increased automatically to raise the brightness of the acquired images, and the noise introduced by the increased gain can be removed through optimization processing, so that the face registration and login functions can operate normally.
It can be understood that the method embodiments mentioned above may be combined with one another to form combined embodiments without departing from the principles and logic of the disclosure; for brevity, the details are not repeated here. Those skilled in the art will appreciate that, in the methods of the specific embodiments above, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a control device of a vehicle-mounted camera, an electronic device, a computer-readable storage medium and a program, all of which can be used to implement any of the control methods of a vehicle-mounted camera provided by the present disclosure; for the corresponding technical solutions and technical effects, reference may be made to the corresponding descriptions in the method sections, which are not repeated here.
Fig. 2 shows a block diagram of a control device of a vehicle-mounted camera provided in an embodiment of the present disclosure. As shown in Fig. 2, the control device of the vehicle-mounted camera includes:
the acquisition module 21 is used for acquiring a first image acquired by the vehicle-mounted camera;
a first determination module 22 for determining a first image quality parameter value for a region of interest in the first image;
a second determining module 23, configured to determine a second image quality parameter value of the first image based on the first image quality parameter value;
the generating module 24 is configured to generate a control parameter of the vehicle-mounted camera according to the second image quality parameter value in response to the second image quality parameter value not satisfying a preset condition.
In one possible implementation, the apparatus further includes:
the third determining module is configured to determine the region of interest in the first image according to the type of the vehicle-mounted camera.
In one possible implementation manner, the third determining module is configured to:
determining a region of interest in the first image based on a driver seat region in the first image in response to the vehicle-mounted camera being a Driver Monitoring System (DMS) camera;
and/or,
determining a region of interest in the first image based on a plurality of seating regions in the first image in response to the in-vehicle camera being an Occupant Monitoring System (OMS) camera.
In one possible implementation manner, the third determining module is configured to:
determining a candidate area in the first image according to the type of the vehicle-mounted camera;
and determining the region of interest in the first image based on the region where the preset target object in the candidate region is located.
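As an illustration of these two steps, the sketch below selects a candidate region by camera type and then locates the preset target object (here assumed to be a face) inside it with an OpenCV Haar cascade; the candidate-region coordinates and the choice of detector are assumptions, not the disclosed implementation.

```python
import cv2

# Hypothetical per-camera-type candidate regions (x, y, w, h) in image coordinates.
CANDIDATE_REGIONS = {
    "DMS": (0, 0, 640, 720),    # driver seat side of the frame (assumed)
    "OMS": (0, 0, 1280, 720),   # all seating regions (assumed)
}

def determine_roi(image_bgr, camera_type):
    """Determine the region of interest: pick the candidate region by camera
    type, then locate the preset target object (a face) inside it."""
    cx, cy, cw, ch = CANDIDATE_REGIONS[camera_type]
    candidate = image_bgr[cy:cy + ch, cx:cx + cw]
    gray = cv2.cvtColor(candidate, cv2.COLOR_BGR2GRAY)

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return cx, cy, cw, ch  # fall back to the whole candidate region
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    return cx + x, cy + y, w, h  # ROI in full-image coordinates
```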
In one possible implementation, the apparatus further includes:
the fourth determining module is configured to determine the region of interest in the first image based on the region where a preset target object is located in the first image.
In one possible implementation, the apparatus further includes:
the fifth determining module is configured to determine the region of interest in the first image according to the region of interest of the previous frame of the first image in the image sequence acquired by the vehicle-mounted camera.
In one possible implementation, the second determining module 23 is configured to:
acquiring first weight information corresponding to the region of interest and second weight information corresponding to a region of no interest in the first image, wherein the region of no interest represents a region outside the region of interest in the first image;
determining a second image quality parameter value of the first image according to the first image quality parameter value, the first weight information, a third image quality parameter value of the non-interesting region and the second weight information.
In a possible implementation manner, the non-interest region includes a plurality of sub-regions, and the second weight information includes weight values in one-to-one correspondence with the plurality of sub-regions; the weight value corresponding to any one of the plurality of sub-regions is negatively correlated with the distance between that sub-region and the region of interest.
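For illustration, the sketch below combines the quality value of the region of interest with per-sub-region values of the non-interest region, weighting each sub-region inversely to its distance from the region of interest; the grid partition, the inverse-distance weight and the 0.7/0.3 split are assumptions rather than the disclosed formula.

```python
import numpy as np

def weighted_quality(gray, roi, roi_weight=0.7, grid=4):
    """Compute the second image quality parameter value by combining the ROI
    quality value with per-sub-region values of the non-interest region,
    where a sub-region's weight decreases with its distance to the ROI."""
    h, w = gray.shape
    x, y, rw, rh = roi
    roi_value = gray[y:y + rh, x:x + rw].mean()  # first image quality parameter value
    roi_cx, roi_cy = x + rw / 2.0, y + rh / 2.0

    sub_values, sub_weights = [], []
    for i in range(grid):
        for j in range(grid):
            y0, y1 = i * h // grid, (i + 1) * h // grid
            x0, x1 = j * w // grid, (j + 1) * w // grid
            # Skip sub-regions lying entirely inside the ROI.
            if x0 >= x and x1 <= x + rw and y0 >= y and y1 <= y + rh:
                continue
            cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
            dist = np.hypot(cx - roi_cx, cy - roi_cy)
            sub_values.append(gray[y0:y1, x0:x1].mean())
            sub_weights.append(1.0 / (1.0 + dist))  # weight negatively correlated with distance

    sub_weights = np.asarray(sub_weights) / np.sum(sub_weights)
    non_roi_value = float(np.dot(sub_weights, sub_values))  # third image quality parameter value
    return roi_weight * roi_value + (1.0 - roi_weight) * non_roi_value  # second value
```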
In one possible implementation, the control parameter includes a hardware control parameter, and the apparatus further includes:
the writing module is configured to write the hardware control parameters into a register of the vehicle-mounted camera.
In one possible implementation, the generating module 24 is configured to:
obtaining a target value of the image quality parameter of the first image in response to the second image quality parameter value not meeting a preset condition;
determining an imaging parameter of the vehicle-mounted camera according to a difference value between the second image quality parameter value and the target value;
and converting the imaging parameters into the hardware control parameters.
In one possible implementation, the generating module 24 is configured to:
responding to the second image quality parameter value not meeting the preset condition, and acquiring the target value range of the image quality parameter of the first image;
and determining the target value of the image quality parameter of the first image according to the second image quality parameter value and the target value range.
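One simple, purely illustrative rule for deriving the target value from the target value range is to take the nearest bound when the current value falls outside the range; this clamping rule is an assumption, not the disclosed method.

```python
def target_from_range(current_value, target_range):
    """Pick a target image quality value given the current (second) image
    quality parameter value and the target value range (low, high)."""
    low, high = target_range
    if current_value < low:
        return low           # under the range: aim for the lower bound
    if current_value > high:
        return high          # over the range: aim for the upper bound
    return current_value     # already within the range: keep the current value
```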
In one possible implementation, the generating module 24 is configured to:
inputting the difference value between the second image quality parameter value and the target value into a preset proportional-integral-derivative controller, and predicting the imaging parameter of the vehicle-mounted camera through the preset proportional-integral-derivative controller.
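A minimal sketch of this step is given below, assuming a textbook discrete PID controller whose output is translated into a new exposure time; the gains, the scaling factor and the clamping range are illustrative assumptions.

```python
class ExposurePID:
    """Discrete PID controller that maps the exposure error of the current
    frame to a new exposure time for the vehicle-mounted camera."""

    def __init__(self, kp=0.8, ki=0.1, kd=0.05,
                 min_exposure_us=50, max_exposure_us=30000):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_us, self.max_us = min_exposure_us, max_exposure_us
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, current_exposure, target_exposure, exposure_time_us):
        # Difference between the target value and the second image quality parameter value.
        error = target_exposure - current_exposure
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error

        correction = (self.kp * error
                      + self.ki * self.integral
                      + self.kd * derivative)
        # Translate the correction into a new exposure time and clamp it to the
        # range the sensor supports (the /255 scaling factor is an assumption).
        new_exposure_us = exposure_time_us * (1.0 + correction / 255.0)
        return int(min(max(new_exposure_us, self.min_us), self.max_us))
```

In practice the predicted exposure time would then be converted into the sensor's register format and written into the register of the vehicle-mounted camera, as described for the hardware control parameters above.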
In one possible implementation, the image quality parameter includes exposure, and the imaging parameter includes at least one of: exposure time, analog gain, digital gain, aperture size.
In one possible implementation, the apparatus further includes:
the first optimization module is configured to, in response to the second image quality parameter value not satisfying the preset condition, perform optimization processing on the first image to obtain an optimized first image.
In a possible implementation manner, the apparatus further includes a second optimization module, and the second optimization module is configured to:
adding 1 to a camera adjustment frame count in response to the second image quality parameter value not satisfying the preset condition, where the initial value of the camera adjustment frame count is 0;
and, in a case where the camera adjustment frame count reaches a preset threshold, performing optimization processing on the first image to obtain an optimized first image.
In a possible implementation manner, the first optimization module and/or the second optimization module is configured to:
performing optimization processing on the region of interest to obtain the optimized first image.
In one possible implementation, the optimization processing includes at least one of: Bayer-domain denoising, YUV-domain denoising, color noise removal, dead pixel removal, super-resolution reconstruction, gamma correction, color correction and brightness adjustment.
In the embodiments of the present disclosure, a first image acquired by a vehicle-mounted camera is obtained, a first image quality parameter value of a region of interest in the first image is determined, a second image quality parameter value of the first image is determined based on the first image quality parameter value, and a control parameter of the vehicle-mounted camera is generated according to the second image quality parameter value in response to the second image quality parameter value not satisfying a preset condition. The parameters of the vehicle-mounted camera are thus adjusted based on the image quality parameter value of the region of interest in the images it acquires, so that the imaging quality of the vehicle-mounted camera can be improved for the region of interest, the reliability of computer-vision-based intelligent cabin functions can be effectively improved, and the possibility that these functions fail due to poor imaging quality of the vehicle-mounted camera can be reduced. In addition, the imaging quality of the vehicle-mounted camera is improved in software, avoiding an increase in the hardware cost of the intelligent cabin.
In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementations and technical effects thereof may refer to the description of the above method embodiments, which are not described herein again for brevity.
The embodiments of the present disclosure further provide a vehicle, which includes a vehicle machine and a vehicle-mounted camera connected to each other. The vehicle-mounted camera is configured to acquire a first image; the vehicle machine is configured to acquire the first image from the vehicle-mounted camera, determine a first image quality parameter value of a region of interest in the first image, determine a second image quality parameter value of the first image based on the first image quality parameter value, generate a control parameter of the vehicle-mounted camera according to the second image quality parameter value in response to the second image quality parameter value not satisfying a preset condition, and adjust the parameters of the vehicle-mounted camera according to the control parameter.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described method. The computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
Embodiments of the present disclosure also provide a computer program, which includes computer readable code, and when the computer readable code runs in an electronic device, a processor in the electronic device executes the above method.
The embodiments of the present disclosure also provide a computer program product, which includes computer-readable code or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device or a personal digital assistant.
Referring to fig. 3, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G)/long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 4 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 4, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may further include a power supply component 1926 configured to perform power management for the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA) or a programmable logic array (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A control method of a vehicle-mounted camera is characterized by comprising the following steps:
acquiring a first image acquired by a vehicle-mounted camera;
determining a first image quality parameter value for a region of interest in the first image;
determining a second image quality parameter value for the first image based on the first image quality parameter value;
and in response to the second image quality parameter value not satisfying a preset condition, generating a control parameter of the vehicle-mounted camera according to the second image quality parameter value.
2. The method of claim 1, further comprising:
and determining the region of interest in the first image according to the type of the vehicle-mounted camera.
3. The method of claim 2, wherein determining the region of interest in the first image according to the type of the vehicle-mounted camera comprises:
determining a region of interest in the first image based on a driver seat region in the first image in response to the vehicle-mounted camera being a Driver Monitoring System (DMS) camera;
and/or,
determining a region of interest in the first image based on a plurality of seating regions in the first image in response to the in-vehicle camera being an Occupant Monitoring System (OMS) camera.
4. The method according to claim 2 or 3, wherein determining the region of interest in the first image according to the type of the vehicle-mounted camera comprises:
determining a candidate area in the first image according to the type of the vehicle-mounted camera;
and determining the region of interest in the first image based on the region where the preset target object in the candidate region is located.
5. The method of claim 1, further comprising:
and determining the region of interest in the first image based on the region of the preset target object in the first image.
6. The method according to any one of claims 1 to 5, further comprising:
determining the region of interest in the first image according to the region of interest of a previous frame of the first image in the image sequence acquired by the vehicle-mounted camera.
7. The method of any of claims 1 to 6, wherein determining a second image quality parameter value for the first image based on the first image quality parameter value comprises:
acquiring first weight information corresponding to the region of interest and second weight information corresponding to a region of no interest in the first image, wherein the region of no interest represents a region outside the region of interest in the first image;
determining a second image quality parameter value of the first image according to the first image quality parameter value, the first weight information, a third image quality parameter value of the non-interesting region and the second weight information.
8. The method according to claim 7, wherein the non-interest region includes a plurality of sub-regions, and the second weight information includes weight values in one-to-one correspondence with the plurality of sub-regions; the weight value corresponding to any one of the plurality of sub-regions is negatively correlated with the distance between that sub-region and the region of interest.
9. The method of any of claims 1 to 8, wherein the control parameters comprise hardware control parameters, the method further comprising:
and writing the hardware control parameters into a register of the vehicle-mounted camera.
10. The method according to claim 9, wherein the generating control parameters for the vehicle-mounted camera according to the second image quality parameter value in response to the second image quality parameter value not satisfying a preset condition comprises:
obtaining a target value of the image quality parameter of the first image in response to the second image quality parameter value not meeting a preset condition;
determining an imaging parameter of the vehicle-mounted camera according to a difference value between the second image quality parameter value and the target value;
and converting the imaging parameters into the hardware control parameters.
11. The method according to claim 10, wherein said obtaining a target value of an image quality parameter of the first image in response to the second image quality parameter value not satisfying a preset condition comprises:
responding to the second image quality parameter value not meeting the preset condition, and acquiring the target value range of the image quality parameter of the first image;
and determining the target value of the image quality parameter of the first image according to the second image quality parameter value and the target value range.
12. The method according to claim 10 or 11, wherein determining the imaging parameters of the vehicle-mounted camera according to the difference between the second image quality parameter value and the target value comprises:
inputting the difference value between the second image quality parameter value and the target value into a preset proportional-integral-derivative controller, and predicting the imaging parameter of the vehicle-mounted camera through the preset proportional-integral-derivative controller.
13. The method of any of claims 9 to 12, wherein the image quality parameter comprises exposure, and the imaging parameter comprises at least one of: exposure time, analog gain, digital gain, aperture size.
14. The method according to any of claims 1 to 13, wherein after said determining a second image quality parameter value for said first image, the method further comprises:
and in response to the second image quality parameter value not meeting the preset condition, optimizing the first image to obtain an optimized first image.
15. The method according to any one of claims 1 to 14, further comprising:
adding 1 to a camera adjustment frame count in response to the second image quality parameter value not satisfying the preset condition, wherein an initial value of the camera adjustment frame count is 0;
and, in a case where the camera adjustment frame count reaches a preset threshold, performing optimization processing on the first image to obtain an optimized first image.
16. The method according to claim 14 or 15, wherein performing an optimization process on the first image to obtain an optimized first image comprises:
and optimizing the region of interest to obtain an optimized first image.
17. The method according to any of claims 14 to 16, wherein the optimization processing comprises at least one of: Bayer-domain denoising, YUV-domain denoising, color noise removal, dead pixel removal, super-resolution reconstruction, gamma correction, color correction and brightness adjustment.
18. A control device of a vehicle-mounted camera, characterized by comprising:
the acquisition module is used for acquiring a first image acquired by the vehicle-mounted camera;
a first determination module for determining a first image quality parameter value for a region of interest in the first image;
a second determining module for determining a second image quality parameter value for the first image based on the first image quality parameter value;
and a generating module, configured to generate a control parameter of the vehicle-mounted camera according to the second image quality parameter value in response to the second image quality parameter value not satisfying a preset condition.
19. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the method of any one of claims 1 to 17.
20. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 17.
CN202110737681.XA 2021-06-30 2021-06-30 Control method and device of vehicle-mounted camera, equipment and medium Pending CN113507569A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110737681.XA CN113507569A (en) 2021-06-30 2021-06-30 Control method and device of vehicle-mounted camera, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110737681.XA CN113507569A (en) 2021-06-30 2021-06-30 Control method and device of vehicle-mounted camera, equipment and medium

Publications (1)

Publication Number Publication Date
CN113507569A true CN113507569A (en) 2021-10-15

Family

ID=78009694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110737681.XA Pending CN113507569A (en) 2021-06-30 2021-06-30 Control method and device of vehicle-mounted camera, equipment and medium

Country Status (1)

Country Link
CN (1) CN113507569A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140085473A1 (en) * 2011-06-16 2014-03-27 Aisin Seiki Kabushiki Kaisha In-vehicle camera apparatus
CN104243835A (en) * 2013-06-19 2014-12-24 华为技术有限公司 Automatic diaphragm control method and system
CN110620881A (en) * 2019-10-31 2019-12-27 北京猎户智芯科技有限公司 License plate exposure compensation method and device, computer equipment and storage medium
CN111491103A (en) * 2020-04-23 2020-08-04 浙江大华技术股份有限公司 Image brightness adjusting method, monitoring equipment and storage medium
CN112055961A (en) * 2020-08-06 2020-12-08 深圳市锐明技术股份有限公司 Shooting method, shooting device and terminal equipment
CN112616028A (en) * 2020-12-15 2021-04-06 深兰人工智能(深圳)有限公司 Vehicle-mounted camera parameter adjusting method and device, electronic equipment and storage medium
CN112702489A (en) * 2020-12-24 2021-04-23 上海商汤临港智能科技有限公司 Camera module, control method and device thereof, vehicle and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390323A (en) * 2022-01-04 2022-04-22 亿咖通(湖北)技术有限公司 Vehicle-mounted image transmission method and electronic equipment
CN114390323B (en) * 2022-01-04 2023-12-01 亿咖通(湖北)技术有限公司 Vehicle-mounted image transmission method and electronic equipment
CN115115531A (en) * 2022-01-14 2022-09-27 长城汽车股份有限公司 Image denoising method and device, vehicle and storage medium

Similar Documents

Publication Publication Date Title
US20210118112A1 (en) Image processing method and device, and storage medium
CN109829501B (en) Image processing method and device, electronic equipment and storage medium
CN107205125B (en) A kind of image processing method, device, terminal and computer readable storage medium
CN107692997B (en) Heart rate detection method and device
US9924226B2 (en) Method and device for processing identification of video file
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
CN109859144B (en) Image processing method and device, electronic equipment and storage medium
CN110287671B (en) Verification method and device, electronic equipment and storage medium
CN109840939B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and storage medium
CN107539209B (en) Method and device for controlling vehicle light
US20210012091A1 (en) Method and apparatus for image processing, electronic device, and storage medium
CN106131441B (en) Photographing method and device and electronic equipment
CN110532957B (en) Face recognition method and device, electronic equipment and storage medium
CN113507569A (en) Control method and device of vehicle-mounted camera, equipment and medium
CN111435422B (en) Action recognition method, control method and device, electronic equipment and storage medium
CN112819714A (en) Target object exposure method, device, storage medium and equipment
CN111626086A (en) Living body detection method, living body detection device, living body detection system, electronic device, and storage medium
CN113989889A (en) Shading plate adjusting method and device, electronic equipment and storage medium
CN113689361B (en) Image processing method and device, electronic equipment and storage medium
CN113177890B (en) Image processing method and device, electronic equipment and storage medium
CN111192218A (en) Image processing method and device, electronic equipment and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium
CN113505674B (en) Face image processing method and device, electronic equipment and storage medium
CN108597456B (en) Backlight brightness adjusting method and device
CN111275641A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211015)