CN114708821B - Intelligent LED display screen system based on multi-sensor data fusion


Info

Publication number
CN114708821B
CN114708821B
Authority
CN
China
Prior art keywords: data, display screen, led display, determining, intelligent led
Legal status: Active
Application number
CN202210278615.5A
Other languages
Chinese (zh)
Other versions
CN114708821A
Inventor
杨敏娟
Current Assignee
Space Display Shenzhen Co ltd
Original Assignee
Space Display Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Space Display Shenzhen Co ltd filed Critical Space Display Shenzhen Co ltd
Priority to CN202210278615.5A
Publication of CN114708821A
Application granted
Publication of CN114708821B

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/22 - Control arrangements or circuits as above using controlled light sources
    • G09G3/30 - Control arrangements or circuits as above using controlled light sources using electroluminescent panels
    • G09G3/32 - Control arrangements or circuits as above using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 - Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 - Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The invention provides an intelligent LED display screen system based on multi-sensor data fusion, comprising: a data sensing terminal for acquiring multiple kinds of sensing data in real time from a plurality of data acquisition terminals, the sensing data including light sensing data and video data; a fusion determining end for determining a corresponding working mode based on the multiple kinds of sensing data, a preset decision mode and a user control instruction; and a state control end for controlling the working state of the intelligent LED display screen based on the working mode. When the use environment changes, the intelligent LED display screen adjusts itself automatically, without the working mode and display parameters having to be set manually.

Description

Intelligent LED display screen system based on multi-sensor data fusion
Technical Field
The invention relates to the technical field of intelligent control, in particular to an intelligent LED display screen system based on multi-sensor data fusion.
Background
At present, an LED display screen control system (LED Display Control System) is a system for controlling a large LED screen to display correctly according to user requirements, and is classified by networking mode into a networked version and a stand-alone version. The networked version is also called an LED information release control system and can control each LED terminal through a cloud system. The stand-alone version, also called an LED display screen controller or LED display screen control card, is the core component of the LED display screen and is mainly responsible for converting an external video input signal or an on-board multimedia file into a digital signal that the large LED screen can readily recognize, so as to light up the large LED screen.
However, display control in most current display screen systems is manual: when the illumination conditions or the use environment change, the working mode and the display parameters have to be set by hand, so intelligent automatic adjustment of the intelligent LED display screen cannot be realized.
Therefore, the invention provides an intelligent LED display screen system based on multi-sensor data fusion.
Disclosure of Invention
The invention provides an intelligent LED display screen system based on multi-sensor data fusion, which realizes intelligent automatic adjustment of the intelligent LED display screen when the use environment changes, without the working mode and display parameters having to be set manually.
The invention provides an intelligent LED display screen system based on multi-sensor data fusion, which comprises:
the data sensing terminal is used for acquiring various sensing data in real time based on a plurality of data acquisition terminals, and the various sensing data comprise light sensing data and video data;
the fusion determining end is used for determining a corresponding working mode based on the multiple sensing data, a preset decision mode and a user control instruction;
and the state control end is used for controlling the working state of the intelligent LED display screen based on the working mode.
Preferably, the data sensing terminal includes:
the video acquisition module is used for acquiring a monitoring video in a preset range in real time based on a camera arranged on the intelligent LED display screen;
the illumination acquisition module is used for acquiring light sensation data in real time based on a light sensor arranged on the intelligent LED display screen;
and the state monitoring module is used for judging whether the data sensing end has a fault or not based on the monitoring video and the light sensing data, obtaining a judgment result and sending a corresponding fault instruction based on the judgment result.
Preferably, the status monitoring module includes:
the curve fitting unit is used for fitting and obtaining a corresponding light sensation data dynamic curve based on the light sensation data obtained in real time;
the data fusion unit is used for aligning and fusing the light sensation dynamic curve and the monitoring video to obtain corresponding total dynamic data;
the first monitoring unit is used for judging whether the monitoring video in the total dynamic data is paused; if so, a first fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
the second monitoring unit is used for judging whether a light sensation data dynamic curve in the total dynamic data is broken or not, if so, a second fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
wherein the fault instruction comprises: a first fault instruction and a second fault instruction.
Preferably, the fusion-determining end comprises:
the first determining module is used for determining a corresponding working mode based on the user control instruction when the user control instruction is received;
and the second determining module is used for determining a corresponding working mode based on the multiple sensing data and the preset decision mode when the user control instruction is not received.
Preferably, the first determining module includes:
the first analysis unit is used for analyzing the user control instruction to obtain a corresponding control parameter when receiving the user control instruction;
and the first determining unit is used for determining the corresponding working mode based on the control parameter.
Preferably, the second determining module includes:
the first analysis unit is used for carrying out primary analysis on the monitoring video in a preset period to obtain a corresponding primary analysis result;
the state judging unit is used for judging to start the intelligent LED display screen when the primary analysis result is that a pre-stored user exists in a preset range, and otherwise, judging not to start the intelligent LED display screen;
the second analysis unit is used for carrying out secondary analysis on the monitoring video in a preset period to obtain a corresponding secondary analysis result when the intelligent LED display screen is judged to be started;
the second determining unit is used for determining a display mode of the intelligent LED display screen based on the light sensation data and the secondary analysis result, and taking the display mode as a working mode of the intelligent LED display screen;
and the third determining unit is used for turning off the intelligent LED display screen as a corresponding working mode when the intelligent LED display screen is judged not to be started.
Preferably, the first analysis unit includes:
the image comparison subunit is used for comparing each frame of video frame contained in the monitoring video in a preset period with a prestored background image and determining a difference image corresponding to each frame of video frame;
a preliminary screening subunit, configured to screen a human body image from the difference image based on a preliminary screening method;
the face identification subunit is used for identifying the human body image based on a machine learning self-adaptive algorithm to obtain a corresponding face area;
a reference point determining subunit, configured to determine a corresponding reference point in the face image based on a preset determination method;
the image cutting subunit is used for cutting the face image based on the reference point to obtain a corresponding complete face area;
the image standardization subunit is used for carrying out standardization processing on the complete face area based on the distance between the reference points to obtain a corresponding standard-size face area and a corresponding standard-size face area set;
the image dividing subunit is used for dividing the standard-size face area into a preset number of sub-areas;
the image sampling subunit is used for sequentially performing sliding window sampling on the sub-regions in the standard-size face region to obtain corresponding sampling data;
the degree determining subunit is configured to determine, based on the sampling data, whether symmetric sub-regions corresponding to the sub-regions are included in the standard-size face region, and determine, based on the total number of sub-regions in which corresponding symmetric sub-regions exist in the standard-size face region, a forward degree of a corresponding face;
the mean value determining subunit is used for screening out a standard size face region corresponding to the maximum face forward degree from the standard size face region set to serve as an image to be corrected, and calculating a corresponding visual mean value based on visual data corresponding to each pixel point in the image to be corrected;
a first region determining subunit, configured to use a region formed by pixels whose visual data in the image to be corrected is greater than the visual mean as a sub-region to be weakened, and use a region formed by pixels whose visual data in the image to be corrected is less than the visual mean as a sub-region to be enhanced;
the data determining subunit is used for determining first reflection data corresponding to the sub-region to be weakened and second reflection data corresponding to the sub-region to be enhanced;
the first normalization subunit is used for reducing the visual data corresponding to each pixel point in the sub-area to be weakened based on the first reflection data and the corresponding visual mean value, and obtaining a corresponding first standard sub-area;
the second normalization subunit is used for increasing the visual data corresponding to each pixel point in the sub-region to be enhanced based on the second reflection data and the corresponding visual mean value to obtain a corresponding second standard sub-region;
the region fusion subunit is configured to fuse the first standard sub-region, the second standard sub-region and the image to be corrected to obtain a corresponding standard face image;
and the first result determining subunit is used for judging whether a pre-stored face image matched with the standard face image exists in a pre-stored user library or not, if so, taking the pre-stored user in a preset range as a primary analysis result, and otherwise, taking the non-stored user in the preset range as the primary analysis result.
Preferably, the second analysis unit includes:
the edge determining subunit is used for determining an edge line in each frame of video frame in the monitoring video in a preset period based on a preset determining mode when the intelligent LED display screen is judged to be started;
a video frame dividing subunit, configured to divide the corresponding video frame into a plurality of sub-blocks based on the edge lines;
a spatial data determining subunit, configured to determine corresponding spatial data based on the sub-block;
a curve determining subunit, configured to generate a corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block;
a curve fusion subunit, configured to fuse the component histogram distribution curves to obtain a corresponding total histogram distribution curve;
a second region determining subunit, configured to determine a minimum value in the total histogram distribution curve, count a total number of pixels between adjacent minimum values as a corresponding pixel capacity, and use an image region corresponding to the sub-block in a curve segment corresponding to the maximum pixel capacity in the total histogram distribution curve as a corresponding sampling region;
the data calculation subunit is used for calculating corresponding relative illumination data based on the visual data corresponding to the sampling area;
and the second result determining subunit is used for taking the spatial data and the relative illumination data as corresponding secondary analysis results.
Preferably, the state control terminal includes:
the state monitoring module is used for monitoring the current working state of the intelligent LED display screen;
the first judgment module is used for judging whether the current working state is closed; if so, it further judges whether the working mode is turning off the intelligent LED display screen, sends a holding instruction if it is, and sends a starting instruction otherwise;
the first control module is used for controlling the working state of the intelligent LED display screen based on the current working state and the working mode when the current working state is not closed;
and the second control module is used for controlling the working state of the intelligent LED display screen based on the working mode when the starting instruction is received.
Preferably, the first control module includes:
the parameter determining unit is used for determining a corresponding first working parameter based on the current working state when the current working state is not closed, and determining a corresponding second working parameter based on the working mode;
the instruction generating unit is used for generating a corresponding parameter adjusting instruction based on the difference value of the first working parameter and the second working parameter;
and the state adjusting unit is used for adjusting the working state of the intelligent LED display screen based on the parameter adjusting instruction.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of an intelligent LED display screen system based on multi-sensor data fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data sensing terminal according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a status monitoring module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a fusion determining end according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a first determining module according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a second determining module according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a first analysis unit according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a second analysis unit according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a status control node according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a first control module according to an embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the invention provides an intelligent LED display screen system based on multi-sensor data fusion which, with reference to FIG. 1, comprises:
the data sensing terminal is used for acquiring various sensing data in real time based on a plurality of data acquisition terminals, and the various sensing data comprise light sensing data and video data;
the fusion determining end is used for determining a corresponding working mode based on the multiple sensing data, a preset decision mode and a user control instruction;
and the state control end is used for controlling the working state of the intelligent LED display screen based on the working mode.
In this embodiment, the data acquisition terminal includes: a camera and a light sensor.
In this embodiment, the sensing data includes: the video data obtained by the camera and the light sensing data obtained by the light sensor.
In this embodiment, the light sensing data is data obtained by the light sensor, that is, light sensing data within a preset range of the setting position of the light sensor.
In this embodiment, the video data is data obtained by the camera, that is, a video obtained within a preset range of the setting position of the camera.
In this embodiment, the preset decision manner is a method for determining the working mode of the intelligent LED display screen based on the sensing data.
In this embodiment, the working mode indicates whether the intelligent LED display screen is turned off or turned on and, when it is turned on, which display mode it uses (a working mode corresponding to a particular set of display parameters).
In this embodiment, the user control instruction is an instruction for setting the operating mode of the intelligent LED display screen, which is input by a user based on a remote control device or program control.
The beneficial effects of the above technology are: when the use environment changes, the light sensing data and the video data acquired by the plurality of sensors arranged on the intelligent LED display screen are combined with the preset decision mode and the user control instruction, so the intelligent LED display screen can adjust itself automatically without the working mode and the display parameters having to be set manually; this realizes automatic intelligent adjustment of the intelligent LED display screen and improves its degree of automation and usability.
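As an illustration of how the three ends interact, the following minimal Python sketch wires a data sensing terminal, a fusion determining end and a state control end into one control loop. All object and method names (sensors, decision, screen, to_working_mode, apply) are illustrative assumptions; the patent describes these ends only functionally.

```python
def run_intelligent_led_system(sensors, decision, screen, user_instruction=None):
    """One pass of the system: data sensing -> fusion determining -> state control.

    sensors:          object exposing light() and video_frame() readings (data sensing terminal).
    decision:         callable implementing the preset decision mode.
    screen:           object exposing apply(working_mode) (state control end).
    user_instruction: optional object exposing to_working_mode().
    """
    # Data sensing terminal: acquire multiple kinds of sensing data in real time.
    sensing_data = {"light": sensors.light(), "video": sensors.video_frame()}

    # Fusion determining end: a user control instruction, if present, takes precedence;
    # otherwise the working mode is derived from the sensing data and the decision mode.
    if user_instruction is not None:
        working_mode = user_instruction.to_working_mode()
    else:
        working_mode = decision(sensing_data)

    # State control end: drive the screen's working state from the working mode.
    screen.apply(working_mode)
    return working_mode
```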
Example 2:
on the basis of the embodiment 1, the data sensing terminal, referring to fig. 2, includes:
the video acquisition module is used for acquiring a monitoring video in a preset range in real time based on a camera arranged on the intelligent LED display screen;
the illumination acquisition module is used for acquiring light sensation data in real time based on a light sensor arranged on the intelligent LED display screen;
and the state monitoring module is used for judging whether the data sensing end has a fault or not based on the monitoring video and the light sensing data, obtaining a judgment result and sending a corresponding fault instruction based on the judgment result.
In this embodiment, the determination result is a result obtained by determining whether the data sensing end has a fault based on the monitoring video and the light sensing data.
In this embodiment, the fault instruction is an instruction for reminding a user that a fault occurs at the corresponding sensor terminal.
The beneficial effects of the above technology are: the light sensing data and the video data of the use environment of the intelligent LED display screen can be acquired in real time by the sensor terminals arranged on the screen, which provides a basis for the subsequent automatic adjustment of the screen; the state monitoring module monitors the sensor terminals for faults, ensuring their normal operation and thereby the effectiveness of the automatic adjustment function of the intelligent LED display screen.
Example 3:
on the basis of embodiment 2, the status monitoring module, referring to fig. 3, includes:
the curve fitting unit is used for fitting and obtaining a corresponding light sensation data dynamic curve based on the light sensation data obtained in real time;
the data fusion unit is used for aligning and fusing the light sensation dynamic curve and the monitoring video to obtain corresponding total dynamic data;
the first monitoring unit is used for judging whether the monitoring video in the total dynamic data is paused; if so, a first fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
the second monitoring unit is used for judging whether a light sensation data dynamic curve in the total dynamic data is broken or not, if so, a second fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
wherein the fault instruction comprises: a first fault instruction and a second fault instruction.
In this embodiment, the dynamic curve of the light sensation data is a curve obtained by fitting based on the light sensation data acquired in real time.
In this embodiment, the total dynamic data is data obtained by aligning and fusing the light sensation dynamic curve and the monitoring video.
In this embodiment, the first fault instruction is an instruction for reminding a user that the corresponding camera end has failed.
In this embodiment, the second fault instruction is an instruction for reminding a user that the corresponding light sensor end has failed.
The beneficial effects of the above technology are: the light sensing data obtained in real time are fitted into a curve and then aligned and fused with the monitoring video obtained in real time to obtain the corresponding total dynamic data; by judging whether the data in the total dynamic data are interrupted, it can be determined whether the corresponding sensor end has failed, which ensures that the sensor ends operate normally and that the sensing data can be obtained successfully, and provides a basis for the normal intelligent adjustment of the intelligent LED display screen.
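The following minimal sketch shows one way the status monitoring module could combine the light sensing curve and the monitoring video into total dynamic data and derive the two fault instructions. The sampling layout, the gap threshold and the freeze window are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def monitor_sensors(light_samples, video_frames, freeze_window=25):
    """Check the fused sensor data for faults (status monitoring module sketch).

    light_samples: list of (timestamp, lux) tuples from the light sensor.
    video_frames:  list of (timestamp, frame) tuples from the camera, frame being a numpy array.
    """
    faults = []

    # Light sensation data dynamic curve: a break shows up as an abnormal gap
    # between consecutive samples -> second fault instruction (light sensor fault).
    if len(light_samples) >= 3:
        t = np.asarray([ts for ts, _ in light_samples], dtype=float)
        gaps = np.diff(t)
        if np.any(gaps > 5 * np.median(gaps)):
            faults.append("SECOND_FAULT: light sensing data curve is broken")
    else:
        faults.append("SECOND_FAULT: insufficient light sensing data")

    # Monitoring video paused (identical consecutive frames) -> first fault instruction.
    if len(video_frames) >= freeze_window:
        recent = [frame for _, frame in video_frames[-freeze_window:]]
        if all(np.array_equal(recent[0], f) for f in recent[1:]):
            faults.append("FIRST_FAULT: monitoring video is paused")

    return faults or ["keep current judgment result"]
```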
Example 4:
on the basis of example 1, the fusion-determining end, with reference to fig. 4, includes:
the first determining module is used for determining a corresponding working mode based on the user control instruction when the user control instruction is received;
and the second determining module is used for determining a corresponding working mode based on the multiple sensing data and the preset decision mode when the user control instruction is not received.
The beneficial effects of the above technology are: the corresponding working mode is determined from the user control instruction when one is received, or from the multiple sensing data and the preset decision mode when none is received, so the intelligent LED display screen supports two control modes, namely setting the working mode under user control and setting the working mode automatically from the sensing data, which improves the performance of the intelligent LED display screen.
Example 5:
on the basis of embodiment 4, the first determining module, referring to fig. 5, includes:
the first analysis unit is used for analyzing the user control instruction to obtain a corresponding control parameter when receiving the user control instruction;
and the first determining unit is used for determining the corresponding working mode based on the control parameter.
In this embodiment, the control parameter is a setting parameter for controlling the working mode of the intelligent LED display screen obtained by analyzing the user control instruction.
The beneficial effects of the above technology are: the working mode of the intelligent LED display screen is controlled based on the user control instruction, and the control precision of the intelligent LED display screen is ensured.
Example 6:
on the basis of embodiment 4, the second determining module, referring to fig. 6, includes:
the first analysis unit is used for carrying out primary analysis on the monitoring video in a preset period to obtain a corresponding primary analysis result;
the state judging unit is used for judging to start the intelligent LED display screen when the primary analysis result is that a pre-stored user exists in a preset range, and otherwise, judging not to start the intelligent LED display screen;
the second analysis unit is used for carrying out secondary analysis on the monitoring video in a preset period when the intelligent LED display screen is judged to be started, and obtaining a corresponding secondary analysis result;
the second determining unit is used for determining a display mode of the intelligent LED display screen based on the light sensation data and the secondary analysis result, and taking the display mode as a working mode of the intelligent LED display screen;
and the third determining unit is used for turning off the intelligent LED display screen as a corresponding working mode when the intelligent LED display screen is judged not to be started.
In this embodiment, the primary analysis result is a result obtained by performing primary analysis on the monitoring video in the preset period.
In this embodiment, the secondary analysis result is a result obtained by performing secondary analysis on the monitoring video in the preset period, and includes spatial data and relative illumination data.
In this embodiment, the display mode is a working mode corresponding to different display parameters when the intelligent LED display screen is turned on.
The beneficial effects of the above technology are: when no user control instruction is received, the intelligent LED display screen is controlled intelligently based on the sensing data and the preset decision mode, so that it automatically sets itself to the working mode best suited to the current use environment, realizing intelligent adjustment of the intelligent LED display screen.
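Because the patent states only that the display mode is determined from the light sensing data and the secondary analysis result, the following sketch is a purely illustrative decision rule; every threshold and parameter value in it is an assumption.

```python
def decide_display_mode(lux, relative_illumination, space_size_m2):
    """Map sensed conditions to a display mode (brightness / chroma parameters).

    lux:                   ambient light level from the light sensor.
    relative_illumination: value derived from the secondary analysis result.
    space_size_m2:         room size derived from the spatial data.
    """
    # Brightness follows ambient light: bright surroundings need a brighter screen.
    if lux > 10000:            # outdoor daylight
        brightness = 100
    elif lux > 500:            # bright indoor
        brightness = 70
    else:                      # dim environment
        brightness = 40

    # Relative illumination of the scene and room size refine the choice.
    if relative_illumination < 0.3 and space_size_m2 < 30:
        brightness = min(brightness, 50)   # avoid glare in small dark rooms

    return {"brightness_percent": brightness, "chroma_profile": "standard"}
```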
Example 7:
on the basis of embodiment 6, the first analysis unit, with reference to fig. 7, includes:
the image comparison subunit is used for comparing each frame of video frame contained in the monitoring video in a preset period with a prestored background image and determining a difference image corresponding to each frame of video frame;
a preliminary screening subunit, configured to screen a human body image from the difference image based on a preliminary screening method;
the face identification subunit is used for identifying the human body image based on a machine learning self-adaptive algorithm to obtain a corresponding face area;
a reference point determining subunit, configured to determine a corresponding reference point in the face image based on a preset determination method;
the image cutting subunit is used for cutting the face image based on the reference point to obtain a corresponding complete face area;
the image standardization subunit is used for carrying out standardization processing on the complete face area based on the distance between the reference points to obtain a corresponding standard-size face area and a corresponding standard-size face area set;
the image dividing subunit is used for dividing the standard-size face area into sub-areas with preset number;
the image sampling subunit is used for sequentially performing sliding window sampling on the sub-regions in the standard-size face region to obtain corresponding sampling data;
the degree determining subunit is configured to determine, based on the sampling data, whether symmetric sub-regions corresponding to the sub-regions are included in the standard-size face region, and determine, based on the total number of sub-regions in which corresponding symmetric sub-regions exist in the standard-size face region, a forward degree of a corresponding face;
the mean value determining subunit is used for screening out a standard size face region corresponding to the maximum face forward degree from the standard size face region set to serve as an image to be corrected, and calculating a corresponding visual mean value based on visual data corresponding to each pixel point in the image to be corrected;
a first region determining subunit, configured to use a region formed by pixels whose visual data in the image to be corrected is greater than the visual mean as a sub-region to be weakened, and use a region formed by pixels whose visual data in the image to be corrected is less than the visual mean as a sub-region to be enhanced;
the data determining subunit is used for determining first reflection data corresponding to the sub-region to be weakened and second reflection data corresponding to the sub-region to be enhanced;
the first normalization subunit is used for reducing the visual data corresponding to each pixel point in the sub-area to be weakened based on the first reflection data and the corresponding visual mean value, and obtaining a corresponding first standard sub-area;
the second normalization subunit is used for increasing the visual data corresponding to each pixel point in the sub-region to be enhanced based on the second reflection data and the corresponding visual mean value to obtain a corresponding second standard sub-region;
the region fusion subunit is configured to fuse the first standard sub-region, the second standard sub-region and the image to be corrected to obtain a corresponding standard face image;
and the first result determining subunit is used for judging whether a pre-stored face image matched with the standard face image exists in a pre-stored user library or not, if so, taking the pre-stored user in a preset range as a primary analysis result, and otherwise, taking the non-stored user in the preset range as the primary analysis result.
In this embodiment, the difference image is the image of differences corresponding to a video frame, determined by comparing each video frame contained in the monitoring video within the preset period with the pre-stored background image.
In this embodiment, the preliminary screening method is a method for screening a human body image from a difference image.
In this embodiment, the human body image is an image corresponding to a human body existing within a shooting range of the camera.
In this embodiment, the face region is a face image obtained by identifying a human body image based on a machine learning adaptive algorithm.
In this embodiment, the predetermined determination method is a method of determining a reference point in the face image.
In this embodiment, the reference point is, for example, an eyeball center point, a nose tip center point, or the like.
In this embodiment, the complete face region is an image region obtained by cutting the face image based on the reference point.
In this embodiment, the standard size face region is a face region obtained by normalizing the entire face region based on the distance between the reference points.
In this embodiment, the standard size face region set is a set of standard size face regions.
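A minimal sketch of the cutting and standardization steps above, assuming the two eye centers are used as the reference points; the standard reference-point distance and the standard region size are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

STD_EYE_DIST = 60          # assumed reference-point distance in the standard-size region
STD_SIZE = (128, 128)      # assumed standard face region size (width, height)

def standardize_face(face_img, left_eye, right_eye):
    """Crop and scale a face image so the reference-point distance is fixed."""
    left_eye, right_eye = np.float32(left_eye), np.float32(right_eye)
    eye_dist = np.linalg.norm(right_eye - left_eye)
    scale = STD_EYE_DIST / eye_dist

    # Scale so that the distance between reference points becomes STD_EYE_DIST.
    scaled = cv2.resize(face_img, None, fx=scale, fy=scale)

    # Cut a fixed-size window around the midpoint of the reference points
    # to obtain the complete face region at standard size.
    cx, cy = ((left_eye + right_eye) / 2 * scale).astype(int)
    x0, y0 = max(cx - STD_SIZE[0] // 2, 0), max(cy - STD_SIZE[1] // 2, 0)
    return scaled[y0:y0 + STD_SIZE[1], x0:x0 + STD_SIZE[0]]
```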
In this embodiment, the sub-region is an image region obtained by dividing a standard-size face region.
In this embodiment, the sampling data is data obtained by sequentially performing sliding window sampling on sub-regions in the standard-size face region.
In this embodiment, the symmetric sub-region is a sub-region of the standard-size face region that is symmetric to the sampling data of the corresponding sub-region.
In this embodiment, determining the forward degree of the corresponding face based on the total number of sub-regions in which the corresponding symmetric sub-regions exist in the standard-size face region includes:
α = ⌊10u / v⌋ / 10
wherein α is the face forward degree, u is the total number of sub-regions in the standard-size face region that have a corresponding symmetric sub-region, v is the total number of sub-regions contained in the standard-size face region, and ⌊·⌋ denotes rounding down to the nearest integer;
for example, if u is 8 and v is 9, then α is 0.8.
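A minimal sketch of the symmetry check and the face forward degree computation described above. The grid layout of the sub-regions and the similarity threshold used to decide whether a symmetric counterpart exists are assumptions for illustration.

```python
import numpy as np
from math import floor

def face_forward_degree(subregions, grid_cols, sim_threshold=0.9):
    """Estimate how frontal a standard-size face region is.

    subregions: list of equally sized 2-D numpy arrays, row-major over a grid
                with grid_cols columns (the preset number of sub-regions).
    """
    v = len(subregions)
    u = 0
    for idx, region in enumerate(subregions):
        row, col = divmod(idx, grid_cols)
        mirror_idx = row * grid_cols + (grid_cols - 1 - col)   # horizontally symmetric cell
        mirror = np.fliplr(subregions[mirror_idx])
        # Treat the cell as having a symmetric counterpart if the two match closely.
        a, b = region.astype(float).ravel(), mirror.astype(float).ravel()
        sim = np.corrcoef(a, b)[0, 1] if a.std() and b.std() else float(np.allclose(a, b))
        if sim >= sim_threshold:
            u += 1
    return floor(10 * u / v) / 10        # alpha, as in the formula above

# Example from the description: u = 8 matching cells out of v = 9 gives alpha = 0.8.
```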
In this embodiment, the image to be corrected is a standard-size face region corresponding to the maximum face forward degree screened from the standard-size face region set.
In this embodiment, calculating the corresponding visual mean value based on the visual data corresponding to each pixel point in the image to be corrected means: the mean chroma value of the pixel points contained in the image to be corrected is the visual mean corresponding to the chroma value, and the mean brightness value of the pixel points contained in the image to be corrected is the visual mean corresponding to the brightness value.
In this embodiment, the visual data includes chrominance values and luminance values.
In this embodiment, the sub-region to be enhanced is the region formed by the pixel points whose visual data in the image to be corrected are smaller than the visual mean.
In this embodiment, the sub-region to be weakened is the region formed by the pixel points whose visual data in the image to be corrected are larger than the visual mean.
In this embodiment, determining the first reflection data corresponding to the sub-region to be weakened includes: converting the sub-region to be weakened into the logarithmic domain to obtain its incident component and reflection component, performing filtering on the sub-region to be weakened to separate out the corresponding reflection component, and taking that reflection component as the corresponding first reflection data.
In this embodiment, determining the second reflection data corresponding to the sub-region to be enhanced includes: converting the sub-region to be enhanced into the logarithmic domain to obtain its incident component and reflection component, performing filtering on the sub-region to be enhanced to separate out the corresponding reflection component, and taking that reflection component as the corresponding second reflection data.
In this embodiment, the first standard sub-region is an image region obtained by reducing visual data corresponding to each pixel point included in the sub-region to be weakened based on the first reflection data and the corresponding visual mean.
In this embodiment, reducing the visual data corresponding to each pixel point included in the sub-region to be attenuated based on the first reflection data and the corresponding visualization mean includes: and reducing the visual data of the corresponding pixel point by a corresponding difference value based on the visual data difference value between the first reflection data and the visual mean value.
In this embodiment, the second standard sub-region is an image region obtained by increasing the visual data corresponding to each pixel point included in the sub-region to be enhanced based on the second reflection data and the corresponding visualization mean value.
In this embodiment, increasing the visual data corresponding to each pixel point included in the sub-region to be enhanced based on the second reflection data and the corresponding visualization mean includes: and increasing the visual data of the corresponding pixel point by a corresponding difference value based on the visual data difference value between the second reflection data and the visual mean value.
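A minimal sketch of the log-domain (Retinex-style) reflection extraction and the mean-referenced weakening and enhancement described above, assuming a Gaussian low-pass filter as the filtering treatment and brightness as the visual data being adjusted; both choices are assumptions for illustration.

```python
import cv2
import numpy as np

def reflection_component(region):
    """Estimate the reflection component of a region in the logarithmic domain."""
    log_img = np.log1p(region.astype(np.float32))
    incident = cv2.GaussianBlur(log_img, (15, 15), 0)   # filtered (illumination) component
    return log_img - incident                            # reflection component

def normalize_region(region, visual_mean, weaken=True):
    """Pull a region's visual data toward the visual mean using its reflection data."""
    reflection = reflection_component(region)
    # Difference between the reflection data and the visual mean drives the adjustment.
    delta = np.abs(reflection - np.log1p(visual_mean))
    adjusted = region.astype(np.float32) - delta if weaken else region.astype(np.float32) + delta
    return np.clip(adjusted, 0, 255).astype(np.uint8)

# Usage: weaken over-bright pixels and enhance under-bright ones, then fuse
# the two standard sub-regions back into the image to be corrected.
```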
In this embodiment, the standard face image is an image region obtained by fusing the first standard sub-region, the second standard sub-region, and the image to be corrected.
In this embodiment, the pre-stored user library is a library of pre-stored users who are allowed to trigger the start of the intelligent LED display screen.
In this embodiment, a pre-stored face image is a face image stored in the pre-stored user library that can prompt the intelligent LED display screen to start.
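A minimal sketch of the matching step performed by the first result determining subunit, assuming a simple normalized-correlation comparison against the pre-stored user library; the similarity measure and threshold are illustrative assumptions.

```python
import numpy as np

def primary_analysis_result(standard_face, prestored_faces, threshold=0.85):
    """Match the standard face image against the pre-stored user library.

    standard_face:   2-D grayscale numpy array at standard size.
    prestored_faces: list of 2-D arrays of the same size from the user library.
    threshold:       assumed similarity threshold for a match.
    """
    a = standard_face.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    for stored in prestored_faces:
        b = stored.astype(float).ravel()
        b = (b - b.mean()) / (b.std() + 1e-9)
        similarity = float(np.dot(a, b)) / len(a)     # normalized correlation
        if similarity >= threshold:
            return "pre-stored user within preset range"
    return "no pre-stored user within preset range"
```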
The beneficial effects of the above technology are: a standard forward-facing face image is obtained by comparing, screening, identifying, cutting, standardizing, dividing and correcting each video frame contained in the monitoring video within the preset period; this standard face image is matched against the pre-stored face images in the pre-stored user library, and enabling control and self-starting control of the intelligent LED display screen are realized based on the matching result.
Example 8:
on the basis of embodiment 6, the second analysis unit, with reference to fig. 8, includes:
the edge determining subunit is used for determining an edge line in each frame of video frame in the monitoring video in a preset period based on a preset determining mode when the intelligent LED display screen is judged to be started;
a video frame dividing subunit, configured to divide the corresponding video frame into a plurality of sub-blocks based on the edge lines;
a spatial data determining subunit, configured to determine corresponding spatial data based on the sub-block;
a curve determining subunit, configured to generate a corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block;
a curve fusion subunit, configured to fuse the component histogram distribution curves to obtain a corresponding total histogram distribution curve;
a second region determining subunit, configured to determine a minimum value in the total histogram distribution curve, count a total number of pixels between adjacent minimum values as a corresponding pixel capacity, and use an image region corresponding to the sub-block in a curve segment corresponding to the maximum pixel capacity in the total histogram distribution curve as a corresponding sampling region;
the data calculation subunit is used for calculating corresponding relative illumination data based on the visual data corresponding to the sampling area;
and the second result determining subunit is used for taking the spatial data and the relative illumination data as corresponding secondary analysis results.
In this embodiment, the predetermined determining method is a method for determining a spatial edge line in each frame of video frame in the monitored video in a predetermined period.
In this embodiment, the sub-block is an image block obtained by dividing the corresponding video frame based on the edge line.
In this embodiment, the spatial data is size data of a space where the intelligent LED display screen is located.
In this embodiment, the visual data component includes a chrominance value component and a luminance value component.
In this embodiment, generating the corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block includes: determining the chroma value range and the brightness value range in the sub-block; dividing the chroma value range and the brightness value range into a number of chroma levels and brightness levels using suitable chroma and brightness intervals as units; drawing a bar chart with the chroma level or brightness level on the horizontal axis and the total number of pixel points falling in the corresponding level on the vertical axis; and connecting and fitting the center points of the bars of the bar chart to obtain the corresponding component histogram distribution curve.
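A minimal sketch of the component histogram construction and fusion just described, assuming the visual data components are the chroma (hue) and brightness (value) channels of an HSV conversion and 32 levels per component; these choices are assumptions for illustration.

```python
import cv2
import numpy as np

N_LEVELS = 32   # assumed number of chroma / brightness levels

def component_histogram_curves(sub_block_bgr):
    """Build the chroma and brightness histogram distribution curves of a sub-block."""
    hsv = cv2.cvtColor(sub_block_bgr, cv2.COLOR_BGR2HSV)
    chroma, brightness = hsv[..., 0], hsv[..., 2]
    curves = []
    for component in (chroma, brightness):
        lo, hi = int(component.min()), int(component.max()) + 1
        counts, _ = np.histogram(component, bins=N_LEVELS, range=(lo, hi))
        curves.append(counts.astype(float))       # bar heights = pixels per level
    return curves

def total_histogram_curve(curves):
    """Fuse the component curves into the total histogram distribution curve."""
    return np.sum(curves, axis=0)
```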
In this embodiment, the total histogram distribution curve is a curve obtained by fusing the component histogram distribution curves.
In this embodiment, the pixel capacity is the total number of pixels between adjacent minimum values in the total histogram distribution curve.
In this embodiment, the sampling region is an image region corresponding to the sub-block, where a curve segment corresponding to the maximum pixel point capacity in the total histogram distribution curve is located.
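A minimal sketch of selecting the sampling region from the total histogram distribution curve using the minima and pixel-capacity rule above; the mapping from pixels to curve levels is assumed to use the same quantization as the previous sketch.

```python
import numpy as np

def sampling_mask(total_curve, level_of_pixel):
    """Select the sampling region: pixels whose level falls in the curve segment
    between adjacent minima that contains the largest number of pixels.

    total_curve:    1-D array, the total histogram distribution curve.
    level_of_pixel: 2-D array giving each pixel's level index in that curve.
    """
    # Indices of local minima in the total curve (segment boundaries).
    minima = [i for i in range(1, len(total_curve) - 1)
              if total_curve[i] <= total_curve[i - 1] and total_curve[i] <= total_curve[i + 1]]
    bounds = [0] + minima + [len(total_curve)]

    # Pixel capacity of each segment = total number of pixels between adjacent minima.
    capacities = [total_curve[a:b].sum() for a, b in zip(bounds[:-1], bounds[1:])]
    best = int(np.argmax(capacities))
    a, b = bounds[best], bounds[best + 1]

    # Image region (mask) corresponding to the segment with maximum pixel capacity.
    return (level_of_pixel >= a) & (level_of_pixel < b)
```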
In this embodiment, calculating corresponding relative illumination data based on the visual data corresponding to the sampling region includes:
determining a central point coordinate value and central point visual data of a sampling region, and determining a coordinate value and visual data of each pixel point contained in the sampling region;
calculating a corresponding relative illumination value based on the coordinate value of the central point, the coordinate value of each pixel point and the visual data:
M = (1/n) · Σᵢ₌₁ⁿ [ ε₁ · sᵢ / √((xᵢ - x₀)² + (yᵢ - y₀)²) + ε₂ · lᵢ ]
wherein M is the relative illumination value; ε₁ is a first coefficient (representing the conversion coefficient between the ratio of chroma value to pixel point distance and the relative illumination value); ε₂ is a second coefficient (representing the conversion coefficient between the brightness value and the relative illumination value); i indexes the i-th pixel point contained in the sampling region; n is the total number of pixel points contained in the sampling region; sᵢ is the chroma value corresponding to the i-th pixel point contained in the sampling region; s₀ is the center point chroma value contained in the center point visual data; xᵢ and yᵢ are the abscissa and ordinate values corresponding to the i-th pixel point contained in the sampling region; x₀ and y₀ are the abscissa and ordinate values corresponding to the center point coordinate value; lᵢ is the brightness value corresponding to the i-th pixel point contained in the sampling region; l₀ is the center point brightness value contained in the center point visual data;
for example, the coordinate value of the center point is (0,0), the chromaticity value of the center point is 5, and the center point brightnessThe value is 5,5, the sampling region comprises three pixel points, the coordinate values of the three pixel points are (1,1) and (-1,1) (0,1) in sequence, the chromatic values of the three pixel points are 2, 3 and 4 in sequence, and the brightness values of the three pixel points are 6, 7 and 8 in sequence, and epsilon 1 Is 0.5,. Epsilon 2 0.5, then M is 4.8;
and determining corresponding relative illumination data based on the relative illumination value and a corresponding relative illumination data list (representing the corresponding relation between the relative illumination value and the relative illumination data).
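A minimal sketch of the relative illumination computation, using the reconstructed reading of the formula given above (the published formula is reproduced only as an image in the source, so this reading is an assumption) together with a simple threshold lookup standing in for the relative illumination data list.

```python
import math

def relative_illumination_value(pixels, center, eps1=0.5, eps2=0.5):
    """Compute the relative illumination value M of a sampling region.

    pixels: list of dicts with keys x, y, chroma, brightness.
    center: dict with keys x, y, chroma, brightness (center point visual data).
    """
    total = 0.0
    for p in pixels:
        dist = math.hypot(p["x"] - center["x"], p["y"] - center["y"]) or 1.0  # avoid div-by-zero
        total += eps1 * p["chroma"] / dist + eps2 * p["brightness"]
    return total / len(pixels)

def relative_illumination_data(m, table):
    """Look up relative illumination data from a (threshold, label) list."""
    for threshold, label in table:
        if m <= threshold:
            return label
    return table[-1][1]

# Example from the description: three pixels around center (0, 0) give M of about 4.8.
pixels = [{"x": 1, "y": 1, "chroma": 2, "brightness": 6},
          {"x": -1, "y": 1, "chroma": 3, "brightness": 7},
          {"x": 0, "y": 1, "chroma": 4, "brightness": 8}]
center = {"x": 0, "y": 0, "chroma": 5, "brightness": 5}
print(round(relative_illumination_value(pixels, center), 1))   # ~4.8
```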
The beneficial effects of the above technology are: the monitoring video in the preset period is subjected to secondary analysis based on the histogram distribution curve, so that corresponding relative illumination data and space data are obtained, and a data basis is provided for subsequently determining the display mode of the intelligent LED display screen.
Example 9:
on the basis of embodiment 1, the state control terminal, referring to fig. 9, includes:
the state monitoring module is used for monitoring the current working state of the intelligent LED display screen;
the first judgment module is used for judging whether the current working state is closed; if so, it further judges whether the working mode is turning off the intelligent LED display screen, sends a holding instruction if it is, and sends a starting instruction otherwise;
the first control module is used for controlling the working state of the intelligent LED display screen based on the current working state and the working mode when the current working state is not closed;
and the second control module is used for controlling the working state of the intelligent LED display screen based on the working mode when the starting instruction is received.
In this embodiment, the holding instruction is an instruction for controlling the intelligent LED display to hold the current working state.
In this embodiment, the turn-on command is a command for controlling the intelligent LED display to turn on.
The beneficial effects of the above technology are: a corresponding control instruction is determined based on the most recently determined working mode and the latest working state of the intelligent LED display screen, realizing intelligent adjustment of the working state of the intelligent LED display screen.
Example 10:
on the basis of embodiment 1, the first control module, with reference to fig. 10, includes:
the parameter determining unit is used for determining a corresponding first working parameter based on the current working state when the current working state is not closed, and determining a corresponding second working parameter based on the working mode;
the instruction generating unit is used for generating a corresponding parameter adjusting instruction based on the difference value of the first working parameter and the second working parameter;
and the state adjusting unit is used for adjusting the working state of the intelligent LED display screen based on the parameter adjusting instruction.
In this embodiment, the first working parameter is a working parameter corresponding to the current working state of the intelligent LED display screen, for example: display luminance and display chromaticity.
In this embodiment, the second operating parameter is the operating parameter corresponding to the newly determined operating mode.
In this embodiment, the parameter adjusting instruction is a corresponding instruction for adjusting the operating parameter of the intelligent LED display screen, which is generated based on a parameter difference determined by a difference between the first operating parameter and the second operating parameter.
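A minimal sketch of the first control module's parameter comparison and adjustment, assuming the working parameters are display brightness and display chroma as in the example above; the data layout is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class WorkingParams:
    brightness: int   # display brightness
    chroma: int       # display chroma

def parameter_adjust_instruction(current: WorkingParams, target: WorkingParams):
    """Generate a parameter adjusting instruction from the difference between
    the first (current) and second (target) working parameters."""
    return {
        "brightness_delta": target.brightness - current.brightness,
        "chroma_delta": target.chroma - current.chroma,
    }

def apply_adjustment(current: WorkingParams, instruction) -> WorkingParams:
    """Adjust the working state of the screen according to the instruction."""
    return WorkingParams(
        brightness=current.brightness + instruction["brightness_delta"],
        chroma=current.chroma + instruction["chroma_delta"],
    )

# Example: current brightness 40, target mode brightness 70 -> a +30 adjustment.
```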
The beneficial effects of the above technology are: a corresponding parameter adjusting instruction is determined based on the current state of the intelligent LED display screen and the most recently determined working mode, realizing the parameter adjustment of the intelligent LED display screen.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. An intelligent LED display screen system based on multi-sensor data fusion, characterized by comprising:
the data sensing terminal is used for acquiring various sensing data in real time based on a plurality of data acquisition terminals, and the various sensing data comprise light sensing data and video data;
the fusion determining end is used for determining a corresponding working mode based on the multiple sensing data, a preset decision mode and a user control instruction;
the state control end is used for controlling the working state of the intelligent LED display screen based on the working mode;
the data sensing terminal comprises:
the video acquisition module is used for acquiring a monitoring video in a preset range in real time based on a camera arranged on the intelligent LED display screen;
the illumination acquisition module is used for acquiring light sensation data in real time based on a light sensor arranged on the intelligent LED display screen;
the state monitoring module is used for judging whether the data sensing end has a fault or not based on the monitoring video and the light sensation data, obtaining a judgment result and sending a corresponding fault instruction based on the judgment result;
the state monitoring module includes:
the curve fitting unit is used for fitting and obtaining a corresponding light sensation data dynamic curve based on the light sensation data obtained in real time;
the data fusion unit is used for aligning and fusing the light sensation data dynamic curve and the monitoring video to obtain corresponding total dynamic data;
the first monitoring unit is used for judging whether the monitoring video in the total dynamic data is paused; if so, a first fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
the second monitoring unit is used for judging whether a light sensation data dynamic curve in the total dynamic data is broken or not, if so, a second fault instruction is sent out, and otherwise, a corresponding judgment result is kept;
wherein the fault instruction comprises: a first fault instruction and a second fault instruction.
2. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 1, wherein the fusion determination end comprises:
the first determining module is used for determining a corresponding working mode based on the user control instruction when the user control instruction is received;
and the second determining module is used for determining a corresponding working mode based on the multiple sensing data and the preset decision mode when the user control instruction is not received.
3. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 2, wherein the first determination module comprises:
the first analysis unit is used for analyzing the user control instruction to obtain a corresponding control parameter when receiving the user control instruction;
and the first determining unit is used for determining the corresponding working mode based on the control parameter.
4. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 2, wherein the second determination module comprises:
the first analysis unit is used for carrying out primary analysis on the monitoring video in a preset period to obtain a corresponding primary analysis result;
the state judging unit is used for judging to start the intelligent LED display screen when the primary analysis result is that a pre-stored user exists in a preset range, and otherwise, judging not to start the intelligent LED display screen;
the second analysis unit is used for carrying out secondary analysis on the monitoring video in a preset period to obtain a corresponding secondary analysis result when the intelligent LED display screen is judged to be started;
the second determining unit is used for determining a display mode of the intelligent LED display screen based on the light sensation data and the secondary analysis result, and taking the display mode as a working mode of the intelligent LED display screen;
and the third determining unit is used for turning off the intelligent LED display screen as a corresponding working mode when the intelligent LED display screen is judged not to be started.
5. The intelligent LED display screen system based on multi-sensor data fusion of claim 4, wherein the first analysis unit comprises:
the image comparison subunit is used for comparing each frame of video frame contained in the monitoring video in a preset period with a prestored background image and determining a difference image corresponding to each frame of video frame;
a preliminary screening subunit, configured to screen a human body image from the difference image based on a preliminary screening method;
the face identification subunit is used for identifying the human body image based on a machine learning self-adaptive algorithm to obtain a corresponding face image;
a reference point determining subunit, configured to determine, based on a preset determination method, a corresponding reference point in the face image;
the image cutting subunit is used for cutting the face image based on the reference point to obtain a corresponding complete face region;
the image standardization subunit is used for carrying out standardization processing on the complete face region based on the distance between the reference points to obtain a corresponding standard-size face region, the standard-size face regions forming a corresponding standard-size face region set;
the image dividing subunit is used for dividing the standard-size face area into sub-areas with preset number;
the image sampling subunit is used for sequentially performing sliding window sampling on the sub-regions in the standard-size face region to obtain corresponding sampling data;
the degree determining subunit is configured to determine, based on the sampling data, whether the standard-size face region contains a symmetric sub-region corresponding to each sub-region, and to determine the forward degree of the corresponding face based on the total number of sub-regions for which a corresponding symmetric sub-region exists in the standard-size face region;
the mean value determining subunit is used for screening out the standard-size face region corresponding to the maximum face forward degree from the standard-size face region set to serve as an image to be corrected, and calculating a corresponding visual mean value based on the visual data corresponding to each pixel point in the image to be corrected;
a first region determining subunit, configured to take the region formed by pixels whose visual data in the image to be corrected is greater than the visual mean value as the sub-region to be weakened, and take the region formed by pixels whose visual data in the image to be corrected is less than the visual mean value as the sub-region to be enhanced;
the data determining subunit is used for determining first reflection data corresponding to the sub-region to be weakened and second reflection data corresponding to the sub-region to be enhanced;
the first standardization subunit is used for reducing the visual data corresponding to each pixel point in the sub-area to be weakened based on the first reflection data and the corresponding visual mean value to obtain a corresponding first standard sub-area;
the second standardization subunit is used for increasing the visual data corresponding to each pixel point in the sub-region to be enhanced based on the second reflection data and the corresponding visual mean value to obtain a corresponding second standard sub-region;
the region fusion subunit is configured to fuse the first standard sub-region, the second standard sub-region and the image to be corrected to obtain a corresponding standard face image;
and the first result determining subunit is used for judging whether a pre-stored face image matched with the standard face image exists in a pre-stored user library, if so, taking the result that a pre-stored user exists in the preset range as the primary analysis result, and otherwise taking the result that no pre-stored user exists in the preset range as the primary analysis result.
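Two of the distinctive steps in this claim, the symmetry-based face forward degree and the mean-referenced brightness correction, can be sketched as follows; the grid size, tolerance and correction strength are assumptions, and face detection, reference-point location and library matching are omitted:

import numpy as np

def forward_degree(face: np.ndarray, grid: int = 4, tol: float = 12.0) -> float:
    """Fraction of sub-regions whose horizontally mirrored counterpart has a similar mean."""
    h, w = face.shape[:2]
    gh, gw = h // grid, w // grid
    symmetric = 0
    for r in range(grid):
        for c in range(grid):
            block = face[r*gh:(r+1)*gh, c*gw:(c+1)*gw].astype(float)
            mirror = face[r*gh:(r+1)*gh, (grid-1-c)*gw:(grid-c)*gw].astype(float)
            if abs(block.mean() - mirror.mean()) < tol:
                symmetric += 1
    return symmetric / (grid * grid)

def correct_towards_mean(face: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Weaken pixels above the visual mean and enhance pixels below it,
    a stand-in for the reflection-data-based correction in the claim."""
    face = face.astype(float)
    mean = face.mean()
    corrected = face + strength * (mean - face)
    return np.clip(corrected, 0, 255).astype(np.uint8)

In the claimed system, the standard-size face region with the highest forward degree would then be corrected in this way, fused with the image to be corrected, and matched against the pre-stored user library.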
6. The intelligent LED display screen system based on multi-sensor data fusion as claimed in claim 4, wherein the second analysis unit comprises:
the edge determining subunit is used for determining an edge line in each frame of video frame in the monitoring video in a preset period based on a preset determining mode when the intelligent LED display screen is judged to be started;
a video frame dividing subunit, configured to divide the corresponding video frame into a plurality of sub-blocks based on the edge lines;
a spatial data determining subunit, configured to determine corresponding spatial data based on the sub-block;
a curve determining subunit, configured to generate a corresponding component histogram distribution curve based on the visual data component corresponding to each pixel point in the sub-block;
a curve fusion subunit, configured to fuse the component histogram distribution curves to obtain a corresponding total histogram distribution curve;
a second region determining subunit, configured to determine the minimum values in the total histogram distribution curve, count the total number of pixels between adjacent minimum values as the corresponding pixel capacity, and take the image region of the sub-block corresponding to the curve segment with the maximum pixel capacity in the total histogram distribution curve as the corresponding sampling region;
the data calculation subunit is used for calculating corresponding relative illumination data based on the visual data corresponding to the sampling area;
and the second result determining subunit is used for taking the spatial data and the relative illumination data as corresponding secondary analysis results.
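The histogram-based selection of the sampling region can be sketched as follows for a single grayscale sub-block; the smoothing width and the use of the mean grey level as the relative illumination value are assumptions introduced here:

import numpy as np

def relative_illumination(block: np.ndarray) -> float:
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    # Local minima of a lightly smoothed histogram bound the candidate curve segments.
    smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
    minima = [i for i in range(1, 255)
              if smooth[i] <= smooth[i - 1] and smooth[i] <= smooth[i + 1]]
    bounds = [0] + minima + [256]
    # Pixel capacity of each segment = number of pixels whose value falls inside it.
    segments = [(lo, hi, hist[lo:hi].sum()) for lo, hi in zip(bounds[:-1], bounds[1:])]
    lo, hi, _ = max(segments, key=lambda s: s[2])
    # The sampling region is the set of pixels in the dominant segment; its mean grey
    # level stands in for the claim's relative illumination data.
    mask = (block >= lo) & (block < hi)
    return float(block[mask].mean()) if mask.any() else float(block.mean())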
7. The intelligent LED display screen system based on multi-sensor data fusion of claim 1, wherein the state control terminal comprises:
the state monitoring module is used for monitoring the current working state of the intelligent LED display screen;
the first judgment module is used for judging whether the current working state is off; if so, judging whether the working mode is to turn off the intelligent LED display screen, and if so, sending a holding instruction, and otherwise sending a starting instruction;
the first control module is used for controlling the working state of the intelligent LED display screen based on the current working state and the working mode when the current working state is not off;
and the second control module is used for controlling the working state of the intelligent LED display screen based on the working mode when the starting instruction is received.
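A minimal sketch of the state-control decision in this claim; the state names and the adjust/apply helpers are hypothetical:

def control(current_state: str, working_mode: str, adjust, apply_mode):
    if current_state == "OFF":
        if working_mode == "OFF":
            return "HOLD"                  # holding instruction
        apply_mode(working_mode)           # second control module, after the starting instruction
        return "STARTED"
    adjust(current_state, working_mode)    # first control module (claim 8)
    return "ADJUSTED"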
8. The intelligent LED display screen system based on multi-sensor data fusion of claim 7, wherein the first control module comprises:
the parameter determining unit is used for determining a corresponding first working parameter based on the current working state when the current working state is not off, and determining a corresponding second working parameter based on the working mode;
the instruction generating unit is used for generating a corresponding parameter adjusting instruction based on the difference value of the first working parameter and the second working parameter;
and the state adjusting unit is used for adjusting the working state of the intelligent LED display screen based on the parameter adjusting instruction.
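For illustration, the parameter-difference logic of claim 8 might look like the following, treating the working parameter as a single scalar such as brightness (an assumption; the claim does not fix the parameter set):

def parameter_adjustment(first_param: float, second_param: float, step: float = 1.0):
    # Adjustment instruction derived from the difference between current and target parameters.
    delta = second_param - first_param
    if abs(delta) < 1e-6:
        return None                        # already at the target working state
    direction = "INCREASE" if delta > 0 else "DECREASE"
    return {"action": direction, "amount": abs(delta), "step": step}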
CN202210278615.5A 2022-03-17 2022-03-17 Intelligent LED display screen system based on multi-sensor data fusion Active CN114708821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210278615.5A CN114708821B (en) 2022-03-17 2022-03-17 Intelligent LED display screen system based on multi-sensor data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210278615.5A CN114708821B (en) 2022-03-17 2022-03-17 Intelligent LED display screen system based on multi-sensor data fusion

Publications (2)

Publication Number Publication Date
CN114708821A (en) 2022-07-05
CN114708821B (en) 2022-10-14

Family

ID=82169422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210278615.5A Active CN114708821B (en) 2022-03-17 2022-03-17 Intelligent LED display screen system based on multi-sensor data fusion

Country Status (1)

Country Link
CN (1) CN114708821B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115798400B (en) * 2023-01-09 2023-04-18 永林电子股份有限公司 LED display control method and device based on image processing and LED display system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101266273B (en) * 2008-05-12 2010-11-24 徐立军 Multi- sensor system fault self-diagnosis method
TW201025231A (en) * 2008-12-19 2010-07-01 Formolight Technologies Inc Adjustment and control device for display panel
CN102421008A (en) * 2011-12-07 2012-04-18 浙江捷尚视觉科技有限公司 Intelligent video quality detecting system
CN103021370B (en) * 2012-12-26 2015-01-14 广东欧珀移动通信有限公司 System and method for improving anti-interference capability of liquid-crystal display screen
JP2014182291A (en) * 2013-03-19 2014-09-29 Canon Inc Light emission device and method for controlling the same
CN105185310B (en) * 2015-10-10 2017-11-17 西安诺瓦电子科技有限公司 Brightness of display screen adjusting method
CN107622750A (en) * 2017-09-11 2018-01-23 合肥缤赫信息科技有限公司 A kind of LED display tele-control system
CN108831357A (en) * 2018-05-02 2018-11-16 广州市统云网络科技有限公司 A kind of LED display working condition automated measurement &control method
CN108922494B (en) * 2018-07-20 2020-01-24 奥克斯空调股份有限公司 Light sensing module fault detection method and device, display screen and air conditioner
CN110211528A (en) * 2019-05-17 2019-09-06 海纳巨彩(深圳)实业科技有限公司 A kind of system that LED display display brightness is adjusted
CN110379355A (en) * 2019-08-16 2019-10-25 深圳供电局有限公司 A kind of control system and control method of large screen display wall
CN111479352B (en) * 2020-04-22 2022-06-14 聚好看科技股份有限公司 Display apparatus and illumination control method

Also Published As

Publication number Publication date
CN114708821A (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN110769246B (en) Method and device for detecting faults of monitoring equipment
CN111770285B (en) Exposure brightness control method and device, electronic equipment and storage medium
CN103702111B (en) A kind of method detecting camera video color cast
CN115345802B (en) Remote monitoring method for operation state of electromechanical equipment
CN114708821B (en) Intelligent LED display screen system based on multi-sensor data fusion
CN107103330A (en) A kind of LED status recognition methods and device
US11615166B2 (en) System and method for classifying image data
CN103096124B (en) Auxiliary focusing method and auxiliary focusing device
CN112995510B (en) Method and system for detecting environment light of security monitoring camera
CN106941588B (en) Data processing method and electronic equipment
CN115065798A (en) Big data-based video analysis monitoring system
CN115936672A (en) Smart power grid online safety operation and maintenance management method and system
CN111565283A (en) Traffic light color identification method, correction method and device
CN115830676A (en) Image processing and identifying system and method based on neural network
CN107995476A (en) A kind of image processing method and device
CN112084902B (en) Face image acquisition method and device, electronic equipment and storage medium
CN113225525A (en) Indoor monitoring method and system
CN113031386A (en) Method, apparatus, device and medium for detecting abnormality of dual-filter switcher
CN115499692B (en) Digital television intelligent control method and system based on image processing
CN111908289B (en) Method, device and equipment for detecting illumination in elevator car and storage medium
CN112308814A (en) Method and system for automatically identifying switch on-off position state of disconnecting link of power system
CN118042271B (en) Control system based on image sensor
CN117651355B (en) Light display control method, system and storage medium of COB (chip on board) lamp strip
CN111954354B (en) Light control method based on image operation
CN114125293B (en) Image quality control method, device, medium and equipment for double-light camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant