CN112348808A - Screen perspective detection method and device - Google Patents

Screen perspective detection method and device

Info

Publication number
CN112348808A
CN112348808A (application CN202011369669.XA)
Authority
CN
China
Prior art keywords
screen
perspective
processing
classification
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011369669.XA
Other languages
Chinese (zh)
Inventor
田寨兴
许锦屏
余卫宇
廖伟权
刘嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Epbox Information Technology Co ltd
Original Assignee
Guangzhou Epbox Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Epbox Information Technology Co ltd filed Critical Guangzhou Epbox Information Technology Co ltd
Priority to CN202011369669.XA priority Critical patent/CN112348808A/en
Publication of CN112348808A publication Critical patent/CN112348808A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a screen perspective detection method and device. After an original picture displayed on the screen of a smart device is acquired, the original picture is classified by a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and a target weight characterizing the screen perspective detection result of the smart device is obtained according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy.

Description

Screen perspective detection method and device
Technical Field
The invention relates to the technical field of electronic products, in particular to a screen perspective detection method and device.
Background
With the development of electronic product technology, various intelligent devices such as smart phones, notebook computers and tablet computers have emerged. When a user uses a smart device, the screen is the main means of human-computer interaction with the device, so the quality of the screen has an important influence on the user experience. At present, with the rapid development of the economy and technology, smart devices are popularized and replaced at an ever-increasing pace. Taking the smart phone as an example, the arrival of the 5G era has accelerated the generational replacement of smart phones. In this iterative process, effective recycling is one of the effective means of utilizing the residual value of smart devices, and can reduce chemical pollution to the environment as well as waste.
In the recycling of smart devices, the quality of the screen is an important reference for determining the residual value of the device, and recyclers generally detect whether the device screen exhibits perspective. Screen perspective means that, after displaying at high brightness/high chroma for a long time, the screen retains a lasting impression of image characters, which seriously affects the look and feel of the screen and further affects the residual value rate of the smart device. Therefore, during recycling, whether the screen exhibits perspective needs to be detected, so as to improve the recycling evaluation capability for smart devices and reduce the risk of loss in recycling.
The traditional method for detecting screen perspective relies mainly on observation by the eyes of professional quality inspectors: the inspector sets the background color of the device screen to white and then observes the screen from various angles to conclude whether perspective exists. However, manual inspection of screen perspective is labor-intensive, and subjective factors affect the stability and accuracy of the result, making it difficult to gauge the degree of perspective loss of the device screen.
Disclosure of Invention
Therefore, it is necessary to provide a screen perspective detection method and device to address the defects that manual screen perspective detection is labor-intensive, that subjective factors affect the stability and accuracy of detection results, and that the degree of perspective loss of the device screen is difficult to assess.
A screen perspective detection method comprises the following steps:
acquiring an original picture displayed on a screen of the intelligent equipment;
classifying the original pictures through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights;
and obtaining the target weight for representing the screen perspective detection result of the intelligent equipment according to the classification accuracy.
According to the screen perspective detection method, after the original picture displayed on the screen of the smart device is acquired, the original picture is classified by a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and a target weight characterizing the screen perspective detection result of the smart device is obtained according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy.
In one embodiment, before the process of classifying the original picture by the classification algorithm, the method further includes the steps of:
and carrying out detail enhancement processing on the original picture.
In one embodiment, the process of performing detail enhancement processing on an original picture includes the steps of:
the color, brightness and texture of the original picture are enhanced by the enhancement function.
In one embodiment, the process of classifying the original picture by using a classification algorithm includes the steps of:
based on a neural network base model, freezing the weights of relevant layers, setting an optimizer, and setting a learning rate;
performing convolution processing, pooling processing and activation-function processing on the original picture through the processed neural network base model to obtain a fully connected layer, so that the size of the original picture becomes a target size;
obtaining corresponding weights through multiple iterations, and calculating the classification accuracy of each iteration based on the loss function of the optimizer.
In one embodiment, the process of performing convolution processing, pooling processing and activation-function processing on the original picture through the processed neural network base model to obtain the fully connected layer includes the steps of:
performing primary convolution processing on the original picture;
performing convolution processing multiple times on the normalized result of the primary convolution processing;
pooling the secondary convolution processing result;
and processing the pooling result through the processed neural network base model to obtain a fully connected layer.
In one embodiment, the classification algorithm comprises a logistic regression algorithm, a decision tree algorithm, a linear SVM algorithm, a gradient boosting tree algorithm, or a K-nearest neighbor classification algorithm.
In one embodiment, the process of obtaining the target weight characterizing the screen perspective detection result of the smart device includes the following steps:
obtaining the target weight through a gradient method based on the classification accuracy rates.
A screen perspective detection device comprising:
the picture acquisition module is used for acquiring an original picture displayed on a screen of the intelligent equipment;
the classification processing module is used for classifying the original pictures through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights;
and the result acquisition module is used for acquiring the target weight for representing the screen perspective detection result of the intelligent equipment according to the classification accuracy.
After the original picture displayed on the screen of the smart device is acquired, the screen perspective detection device classifies the original picture through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and obtains a target weight characterizing the screen perspective detection result of the smart device according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy.
A computer storage medium having computer instructions stored thereon, the computer instructions, when executed by a processor, implementing the screen perspective detection method of any of the above embodiments.
After the original picture displayed on the screen of the smart device is acquired, the original picture is classified through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and a target weight characterizing the screen perspective detection result of the smart device is obtained according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the screen perspective detection method according to any of the above embodiments.
After the original picture displayed on the screen of the smart device is acquired, the original picture is classified through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and a target weight characterizing the screen perspective detection result of the smart device is obtained according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy.
Drawings
FIG. 1 is a flowchart of a screen perspective detection method according to an embodiment;
FIG. 2 is a flowchart of a screen perspective detection method according to another embodiment;
FIG. 3 is a flowchart of a screen perspective detection method according to yet another embodiment;
FIG. 4 is a detailed flowchart of a screen perspective detection method according to yet another embodiment;
FIG. 5 is a flowchart of a screen perspective detection method according to a specific application example;
FIG. 6 is a block diagram of a screen perspective detection device according to an embodiment.
Detailed Description
For better understanding of the objects, technical solutions and effects of the present invention, the present invention will be further explained with reference to the accompanying drawings and examples. Meanwhile, the following described examples are only for explaining the present invention, and are not intended to limit the present invention.
The embodiment of the invention provides a screen perspective detection method.
Fig. 1 is a flowchart of a screen perspective detection method according to an embodiment. As shown in fig. 1, the screen perspective detection method of an embodiment includes steps S100 to S102:
s100, acquiring an original picture displayed on a screen of the intelligent equipment;
in step S100, the smart device includes a smart phone, a notebook computer, a tablet computer, and the like. The method comprises the steps of obtaining an original picture displayed on a screen of the intelligent device, namely obtaining a screen photo in display as the original picture when the screen of the intelligent device is displayed.
In one embodiment, fig. 2 is a flowchart of a screen perspective detection method according to another embodiment. As shown in fig. 2, before the process of classifying the original picture by the classification algorithm in step S101, the method further includes step S200:
and S200, performing detail enhancement processing on the original picture.
The detail enhancement processing on the original picture comprises enhancing the color, brightness, texture and the like of the original picture.
In one embodiment, the color, brightness, and texture of the original picture are enhanced by enhancement functions. The enhancement function may be a contrast pull-up function, a gamma correction function, a homomorphic filter function, or the like. As a preferred embodiment, the enhancement function is an adaptively adjusted gamma function.
In one embodiment, a bias value is added to the gamma function to modify it. Before the gamma function is used as an exponent to compute new pixel values and restore the picture, the mean value of the grayscale image of the original picture is calculated and mean-value intervals are set; pictures whose means fall in different intervals use different functions to compute the gamma coefficient, each function being a linear equation with the mean value as x and the coefficient as y. The computed gamma coefficient then participates in the modified gamma function, and the result is used as the exponent to obtain the pixel values of the new picture and restore it, yielding the enhanced original picture.
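The adaptive gamma adjustment described above can be sketched in plain Python. This is a minimal illustration, not the patented implementation: the mean-value interval boundaries, the linear equations mapping the grayscale mean to a gamma coefficient, and the bias value are all assumed for demonstration.

```python
def adaptive_gamma_enhance(gray_pixels, bias=0.05):
    """Enhance a grayscale picture (pixel values in [0, 1]) with an
    adaptively chosen, bias-modified gamma exponent.

    The intervals and linear equations (y = a*x + b) mapping the mean
    to a gamma coefficient are illustrative assumptions, not values
    taken from the patent."""
    mean = sum(gray_pixels) / len(gray_pixels)

    # Pick a linear equation per mean-value interval (assumed values).
    if mean < 0.3:        # dark picture: gamma < 1 brightens
        gamma = 1.5 * mean + 0.3
    elif mean < 0.6:      # mid-tone picture
        gamma = 1.0 * mean + 0.4
    else:                 # bright picture: gamma > 1 darkens
        gamma = 2.0 * mean - 0.2

    # Use the bias-modified gamma as the exponent for new pixel values.
    return [min(1.0, p ** (gamma + bias)) for p in gray_pixels]

dark = [0.1, 0.2, 0.15, 0.1]
enhanced = adaptive_gamma_enhance(dark)
```

Because the dark sample picture falls into the low-mean interval, the resulting exponent is below 1 and every pixel is brightened, which is the intended effect of the adaptive adjustment.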
S101, classifying the original pictures through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights;
in one embodiment, the classification algorithm includes a logistic regression algorithm, a decision tree algorithm, a linear SVM (support vector machine) algorithm, a gradient boosting tree algorithm, or a K-nearest neighbor classification algorithm. And classifying the original pictures through a classification algorithm to obtain the accuracy and the picture weight under multiple classifications. Therefore, in the subsequent steps, the corresponding classified pictures can be selected according to the weight for perspective analysis, and the subjective influence in perspective detection is reduced.
In one embodiment, as shown in fig. 2, the process of classifying the original picture by the classification algorithm in step S101 includes steps S300 to S302:
s300, freezing related layer weight, setting an optimizer and setting a learning rate based on a neural network basic model;
the neural network basic model is loaded in advance, partial layer weights are frozen, and an optimizer and a learning rate are set. In one embodiment, the neural network basic model is an EfficientNets basic model. And freezing partial layer weights, setting an optimizer and learning rate based on the Efficientnets basic model.
S301, performing convolution processing, pooling processing and activation-function processing on the original picture through the processed neural network base model to obtain a fully connected layer, so that the size of the original picture becomes a target size;
In one embodiment, fig. 3 is a flowchart of a screen perspective detection method according to yet another embodiment. As shown in fig. 3, in step S301, the process of performing convolution, pooling and activation-function processing on the original picture through the processed neural network base model to obtain the fully connected layer includes steps S400 to S403:
s400, performing primary convolution processing on an original picture;
s401, carrying out multiple times of convolution processing on the normalization result of the first convolution processing;
s402, performing pooling treatment on the secondary convolution treatment result;
and S403, processing the pooling processing result according to the processed neural network basic model to obtain a full connection layer.
To better explain the screen perspective detection method of the embodiment shown in fig. 3, a specific flowchart is used as an example. Fig. 4 is a detailed flowchart of a screen perspective detection method according to yet another embodiment. As shown in fig. 4, after the original picture is obtained, it is taken as the starting node, and the result of the primary convolution processing is normalized. The primary convolution result is then convolved N further times, i.e. the secondary convolution processing, and the secondary convolution result is pooled. Finally, the pooling result is fed into the neural network base model for training to obtain a trained, reset fully connected layer.
S302, obtaining corresponding weight through multiple iterations, and calculating the classification accuracy of each iteration based on the loss function of the optimizer.
The processing result of step S301 is iterated multiple times to obtain a corresponding plurality of weights, and the accuracy of each classification is calculated based on the loss function of the optimizer in the neural network base model.
In one embodiment, the cross-entropy function CrossEntropyLoss is used as the loss function to measure how close the actual output is to the expected output at each iteration. When the closeness of the current iteration improves over the previous one, i.e. the actual output is closer to the expected output, the weight parameters are updated through gradient back-propagation based on the difference, so that better weights are obtained until the iterations finish. The weight coefficients retained at the end come from the iteration with the optimal weight coefficients, i.e. the iteration with the highest classification accuracy.
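The iterate-and-keep-the-best loop can be sketched in plain Python. This is an illustrative toy (a single weight, a two-class cross-entropy on one sample), not the patent's EfficientNet training; the initial weight, learning rate and data are assumed.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(p, target):
    """Two-class cross-entropy for predicted probability p of class 1."""
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))

# Toy training: learn w so that sigmoid(w * x) matches the target.
x, target = 1.0, 1.0
w, lr = 0.0, 0.5                     # assumed initial weight and learning rate
best_w, best_loss = w, float("inf")

for _ in range(20):
    p = sigmoid(w * x)
    loss = cross_entropy(p, target)
    if loss < best_loss:             # output closer to expected output:
        best_loss, best_w = loss, w  # keep the better weight
    w -= lr * (p - target) * x       # gradient back-propagation step
```

After the loop, `best_w` holds the weight from the iteration whose loss (and hence classification closeness) was best, mirroring the "optimal weight coefficient" selection described above.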
And S102, obtaining target weight for representing the screen perspective detection result of the intelligent equipment according to the classification accuracy.
After the plurality of classification accuracy rates and picture weights are obtained in step S101, the target weight is determined according to the classification accuracy rates. In one embodiment, the optimal weight among the plurality of picture weights is determined as the target weight by a gradient method.
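Selecting the target weight from the per-iteration results can be as simple as keeping the picture weight whose iteration scored the highest classification accuracy. A minimal sketch follows; the weight and accuracy values are made up, and the patent's gradient method may select differently.

```python
def select_target_weight(weights, accuracies):
    """Return the picture weight whose iteration achieved the highest
    classification accuracy (a stand-in for the gradient-based
    selection of the optimal weight)."""
    best_index = max(range(len(accuracies)), key=lambda i: accuracies[i])
    return weights[best_index]

# Hypothetical per-iteration picture weights and classification accuracies
weights = [0.42, 0.55, 0.61, 0.58]
accuracies = [0.71, 0.83, 0.91, 0.88]
target = select_target_weight(weights, accuracies)
```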
To better explain the embodiments of the present invention, the technical solutions are explained below with a specific application example. It should be noted that the specific application example is given only for convenience of explanation and does not limit the embodiments of the present invention. In this example, for convenience, the smart device is taken to be a smart phone.
Fig. 5 is a flowchart of a screen perspective detection method according to a specific application example. As shown in fig. 5, an original picture corresponding to the screen display of the smart phone is obtained, with a size of 224 × 224 × 3. The original picture undergoes primary convolution processing through a convolution layer of size 224 × 224 × 32 with stride 2 × 2, and the convolution result is normalized to obtain a batch normalization layer. In one embodiment, the batch normalization layer is activated by a ReLU (rectified linear unit) activation function. The activation result is then convolved N times; as shown in fig. 5, the convolution modules contain layers of size 112 × 112 × 16 with stride 2 × 2, 112 × 112 × 24 with stride 2 × 2, 56 × 56 × 40 with stride 2 × 2, 14 × 14 × 112 with stride 2 × 2, 28 × 28 × 80 with stride 1 × 1, 14 × 14 × 192 with stride 1 × 1, 7 × 7 × 320 with stride 2 × 2, and 7 × 7 × 1280 with stride 2 × 2. The result of the N convolutions is normalized to obtain another batch normalization layer, which is activated by a second ReLU activation function, and the second activation result is globally average-pooled. Finally, through regularization processing such as random dropout, weight freezing, optimizer resetting and learning-rate setting, a reset fully connected layer of size 1 × 1 × 1280 is obtained, and the classes, their accuracy rates and the like are output through a classification function.
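The spatial down-sampling in this example (a 224 × 224 input shrinking to a 7 × 7 feature map before global average pooling) follows from standard stride arithmetic. A small sketch, assuming "same" padding so that each stride-2 stage halves the spatial size:

```python
import math

def conv_output_size(size, stride):
    """Spatial output size of a convolution with 'same' padding:
    the input size divided by the stride, rounded up."""
    return math.ceil(size / stride)

# Five stride-2 stages take a 224-pixel side down to 7 pixels,
# matching the 7 x 7 x 1280 map that is globally average-pooled.
size = 224
trace = [size]
for _ in range(5):
    size = conv_output_size(size, 2)
    trace.append(size)
```

The trace 224 → 112 → 56 → 28 → 14 → 7 reproduces the spatial sizes listed in the example (the channel counts come from the network design, not from this arithmetic).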
In the screen perspective detection method of any of the above embodiments, after the original picture displayed on the screen of the smart device is acquired, the original picture is classified by a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and a target weight characterizing the screen perspective detection result of the smart device is obtained according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy. Meanwhile, related personnel can later adjust and train the model parameters and model data by further adjusting or training the neural network base model, further improving the accuracy of the algorithm.
The embodiment of the invention also provides a screen perspective detection device.
Fig. 6 is a block diagram of a screen perspective detection device according to an embodiment. As shown in fig. 6, the screen perspective detection device of an embodiment includes modules 100, 101 and 102:
the image acquisition module 100 is configured to acquire an original image displayed on a screen of the smart device;
the classification processing module 101 is configured to perform classification processing on the original pictures through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights;
and the result obtaining module 102 is configured to obtain a target weight characterizing the screen perspective detection result of the smart device according to the classification accuracy rates.
After the original picture displayed on the screen of the smart device is acquired, the screen perspective detection device classifies the original picture through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and obtains a target weight characterizing the screen perspective detection result of the smart device according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy.
The embodiment of the invention also provides a computer storage medium, wherein computer instructions are stored on the computer storage medium, and when the instructions are executed by a processor, the method for detecting the screen perspective of any one of the embodiments is realized.
Those skilled in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Random Access Memory (RAM), a Read-Only Memory (ROM), a magnetic disk, and an optical disk.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the methods of the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a RAM, a ROM, a magnetic or optical disk, or various other media that can store program code.
Corresponding to the computer storage medium, in one embodiment, a computer device is further provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the screen perspective detection method of any of the above embodiments.
After the original picture displayed on the screen of the smart device is acquired, the original picture is classified through a classification algorithm to obtain a plurality of classification accuracy rates and picture weights, and a target weight characterizing the screen perspective detection result of the smart device is obtained according to the classification accuracy rates. On this basis, the picture displayed on the smart device screen can be detected in real time when the device is recycled, and whether the screen exhibits perspective can be determined to guide recycling, which reduces the recycling workload and improves detection accuracy.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only show some embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A screen perspective detection method, comprising the steps of:
acquiring an original picture displayed on a screen of a smart device;
classifying the original picture by a classification algorithm to obtain a plurality of classification accuracies and picture weights; and
obtaining, according to the classification accuracies, a target weight representing a screen perspective detection result of the smart device.
2. The screen perspective detection method of claim 1, wherein before classifying the original picture by the classification algorithm, the method further comprises the step of:
performing detail enhancement processing on the original picture.
3. The screen perspective detection method of claim 2, wherein performing the detail enhancement processing on the original picture comprises the step of:
enhancing the color, brightness, and texture of the original picture by an enhancement function.
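The enhancement function of claim 3 is not specified further; as a minimal, library-free sketch of the idea, the snippet below applies an illustrative brightness gain and a contrast stretch (which sharpens texture) to a row of grayscale pixels. The factors, the grayscale simplification, and the clamping to 0–255 are all assumptions for illustration.

```python
# Hypothetical enhancement function: multiplicative brightness gain plus a
# contrast stretch around the mean. A real implementation would handle color,
# brightness, and texture separately (e.g. per-channel gains, unsharp masking).
def enhance(pixels, brightness=1.2, contrast=1.1):
    mean = sum(pixels) / len(pixels)
    out = []
    for p in pixels:
        v = (p - mean) * contrast + mean   # contrast stretch around the mean
        v = v * brightness                 # brightness gain
        out.append(max(0, min(255, round(v))))  # clamp to valid pixel range
    return out

enhance([100, 128, 156])
```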
4. The screen perspective detection method of claim 1, wherein classifying the original picture by the classification algorithm comprises the steps of:
freezing the weights of relevant layers, setting an optimizer, and setting a learning rate based on a neural network base model;
performing convolution processing, pooling processing, and activation-function processing on the original picture through the processed neural network base model to obtain a fully connected layer, so that the size of the original picture becomes a target size; and
obtaining corresponding weights through multiple iterations, and calculating the classification accuracy of each iteration based on the loss function of the optimizer.
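The training loop in claim 4 (frozen backbone, optimizer with a learning rate, per-iteration accuracy from the loss) can be illustrated without a deep learning framework. The sketch below is an assumption-laden toy: a single trainable logistic "head" stands in for the unfrozen layers, plain gradient descent stands in for the optimizer, and the data is synthetic.

```python
import math

# Illustrative-only sketch of claim 4's loop: the backbone is assumed frozen
# (its weights never appear here), only the head parameters (w, b) are updated
# by the optimizer, and a classification accuracy is recorded per iteration.
def train_head(samples, labels, lr=0.1, iterations=20):
    w, b = 0.0, 0.0                       # trainable head; backbone frozen
    accuracy_history = []
    for _ in range(iterations):
        grad_w = grad_b = 0.0
        correct = 0
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
            grad_w += (p - y) * x         # gradient of the log-loss
            grad_b += (p - y)
            correct += (p > 0.5) == bool(y)
        w -= lr * grad_w / len(samples)   # optimizer step at learning rate lr
        b -= lr * grad_b / len(samples)
        accuracy_history.append(correct / len(samples))
    return w, accuracy_history
```

The per-iteration accuracies collected here play the role of the "classification accuracy of each iteration" that the method later aggregates into the target weight.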
5. The screen perspective detection method of claim 4, wherein performing convolution processing, pooling processing, and activation-function processing on the original picture through the processed neural network base model to obtain the fully connected layer comprises the steps of:
performing a first convolution processing on the original picture;
performing multiple further convolution processings on the normalized result of the first convolution processing;
performing pooling processing on the result of the further convolution processings; and
processing the pooling result according to the processed neural network base model to obtain the fully connected layer.
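The sequence in claim 5 (first convolution, normalization, further convolution, pooling, fully connected output) can be traced on a 1-D toy signal. Everything below is a hypothetical miniature: the kernels, the unit-maximum normalization, and the scalar "fully connected" reduction are illustrative stand-ins, not the patent's network.

```python
# Minimal 1-D analogues of the operations named in claim 5.
def conv1d(signal, kernel):
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def normalize(xs):                        # scale to unit maximum magnitude
    m = max(abs(x) for x in xs) or 1.0
    return [x / m for x in xs]

def max_pool(xs, size=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

def forward(signal):
    x = conv1d(signal, [1.0, -1.0])       # first convolution (edge detector)
    x = normalize(x)                      # normalize the first result
    x = conv1d(x, [0.5, 0.5])             # further convolution (smoothing)
    x = max_pool(x)                       # pooling
    return sum(x) / len(x)                # scalar "fully connected" output
```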
6. The screen perspective detection method of claim 1, wherein the classification algorithm comprises a logistic regression algorithm, a decision tree algorithm, a linear SVM algorithm, a gradient boosting tree algorithm, or a K-nearest-neighbor classification algorithm.
7. The screen perspective detection method of any one of claims 1 to 6, wherein obtaining the target weight representing the screen perspective detection result of the smart device comprises the step of:
obtaining the target weight by a gradient method based on the classification accuracies.
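Claim 7 does not define its "gradient method", so the following is only one plausible reading, stated as an assumption: take the target weight to be the minimizer of an accuracy-weighted squared error to the per-classifier picture weights and find it by gradient descent (it converges to the accuracy-weighted mean of the picture weights).

```python
# Hedged sketch of a "gradient method" for the target weight: minimize
#   L(w) = sum_i a_i * (w - p_i)^2
# over w by gradient descent, where a_i are classification accuracies and
# p_i are picture weights. The objective and step size are assumptions.
def target_weight(accuracies, picture_weights, lr=0.1, steps=200):
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * a * (w - p)
                   for a, p in zip(accuracies, picture_weights))
        w -= lr * grad / len(accuracies)   # descend toward the weighted mean
    return w
```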
8. A screen perspective detection apparatus, comprising:
a picture acquisition module configured to acquire an original picture displayed on a screen of a smart device;
a classification processing module configured to classify the original picture by a classification algorithm to obtain a plurality of classification accuracies and picture weights; and
a result acquisition module configured to obtain, according to the classification accuracies, a target weight representing a screen perspective detection result of the smart device.
9. A computer storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the screen perspective detection method of any one of claims 1 to 7.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the screen perspective detection method of any one of claims 1 to 7.
CN202011369669.XA 2020-11-30 2020-11-30 Screen perspective detection method and device Pending CN112348808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011369669.XA CN112348808A (en) 2020-11-30 2020-11-30 Screen perspective detection method and device

Publications (1)

Publication Number Publication Date
CN112348808A true CN112348808A (en) 2021-02-09

Family

ID=74365038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011369669.XA Pending CN112348808A (en) 2020-11-30 2020-11-30 Screen perspective detection method and device

Country Status (1)

Country Link
CN (1) CN112348808A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052798A (en) * 2021-03-08 2021-06-29 广州绿怡信息科技有限公司 Screen aging detection model training method and screen aging detection method
US11922467B2 (en) 2020-08-17 2024-03-05 ecoATM, Inc. Evaluating an electronic device using optical character recognition

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730473A (en) * 2017-11-03 2018-02-23 中国矿业大学 A kind of underground coal mine image processing method based on deep neural network
CN109272948A (en) * 2018-11-30 2019-01-25 中山大学 Electronic Paper driving adjustment method, device and computer equipment based on machine learning
CN109886357A (en) * 2019-03-13 2019-06-14 哈尔滨工程大学 A kind of adaptive weighting deep learning objective classification method based on Fusion Features
CN110751183A (en) * 2019-09-24 2020-02-04 东软集团股份有限公司 Image data classification model generation method, image data classification method and device
CN111175318A (en) * 2020-01-21 2020-05-19 上海悦易网络信息技术有限公司 Screen scratch fragmentation detection method and equipment
CN111294651A (en) * 2020-02-05 2020-06-16 深圳创维-Rgb电子有限公司 Still picture anti-afterimage method and device based on play data stream and storage medium
CN111382758A (en) * 2018-12-28 2020-07-07 杭州海康威视数字技术股份有限公司 Training image classification model, image classification method, device, equipment and medium
CN111507414A (en) * 2020-04-20 2020-08-07 安徽中科首脑智能医疗研究院有限公司 Deep learning skin disease picture comparison and classification method, storage medium and robot
CN111582342A (en) * 2020-04-29 2020-08-25 腾讯科技(深圳)有限公司 Image identification method, device, equipment and readable storage medium
CN111612763A (en) * 2020-05-20 2020-09-01 重庆邮电大学 Mobile phone screen defect detection method, device and system, computer equipment and medium
CN111914814A (en) * 2020-09-01 2020-11-10 平安国际智慧城市科技股份有限公司 Wheat rust detection method and device and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIONG Honglin et al.: "Glass surface defect detection method based on multi-scale convolutional neural network", Computer Integrated Manufacturing Systems, vol. 26, no. 4, 30 April 2020 (2020-04-30), pages 900-909 *

Similar Documents

Publication Publication Date Title
WO2021057848A1 (en) Network training method, image processing method, network, terminal device and medium
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
CN111814902A (en) Target detection model training method, target identification method, device and medium
CN111582348A (en) Method, device, equipment and storage medium for training condition generating type countermeasure network
CN110807757B (en) Image quality evaluation method and device based on artificial intelligence and computer equipment
CN111178183A (en) Face detection method and related device
CN110866872B (en) Pavement crack image preprocessing intelligent selection method and device and electronic equipment
CN111724370B (en) Multi-task image quality evaluation method and system based on uncertainty and probability
US20240169518A1 (en) Method and apparatus for identifying body constitution in traditional chinese medicine, electronic device, storage medium and program
WO2022206729A1 (en) Method and apparatus for selecting cover of video, computer device, and storage medium
CN112348808A (en) Screen perspective detection method and device
CN107590460A (en) Face classification method, apparatus and intelligent terminal
CN115861715B (en) Knowledge representation enhancement-based image target relationship recognition algorithm
CN110826581A (en) Animal number identification method, device, medium and electronic equipment
CN112749737A (en) Image classification method and device, electronic equipment and storage medium
CN111882555A (en) Net detection method, device, equipment and storage medium based on deep learning
CN111862040A (en) Portrait picture quality evaluation method, device, equipment and storage medium
CN111814820A (en) Image processing method and device
CN114841974A (en) Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium
Ahmed et al. BIQ2021: a large-scale blind image quality assessment database
CN111353577B (en) Multi-task-based cascade combination model optimization method and device and terminal equipment
CN116977271A (en) Defect detection method, model training method, device and electronic equipment
CN116543433A (en) Mask wearing detection method and device based on improved YOLOv7 model
CN115798005A (en) Reference photo processing method and device, processor and electronic equipment
CN114742774A (en) No-reference image quality evaluation method and system fusing local and global features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination