CN109829503B - Dense fear picture distinguishing method, system, equipment and storage medium thereof - Google Patents


Info

Publication number
CN109829503B
CN109829503B (application CN201910110928.8A)
Authority
CN
China
Prior art keywords
picture
distinguished
positive
rotation invariant
pictures
Prior art date
Legal status
Active
Application number
CN201910110928.8A
Other languages
Chinese (zh)
Other versions
CN109829503A (en
Inventor
陈方毅
黄容鸿
Current Assignee
Xiamen Meishao Co ltd
Original Assignee
Xiamen Meishao Co ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Meishao Co ltd filed Critical Xiamen Meishao Co ltd
Priority to CN201910110928.8A priority Critical patent/CN109829503B/en
Publication of CN109829503A publication Critical patent/CN109829503A/en
Application granted granted Critical
Publication of CN109829503B publication Critical patent/CN109829503B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of network information, and in particular to a method, system, device and storage medium for discriminating dense fear pictures. The method comprises the following steps: selecting positive and negative sample pictures respectively, and calculating the rotation invariant feature values of the selected pictures; performing deep learning on the rotation invariant feature values to obtain an identification model; selecting a picture to be distinguished, and calculating its rotation invariant feature value; and importing that rotation invariant feature value into the identification model to obtain the type of the picture to be distinguished. The invention can distinguish most pictures, has strong universality and high accuracy, meets the requirement of automatic discrimination, and greatly improves the working efficiency of picture screening.

Description

Dense fear picture distinguishing method, system, equipment and storage medium thereof
Technical Field
The invention relates to the technical field of network information, and in particular to a method and a system for discriminating dense fear pictures.
Background
With the rapid development of network technology, networks have become an important way for people to acquire information. Today's networks carry image data of many kinds; some images show relatively small objects densely arranged, which easily triggers a fear of dense objects (trypophobia) and causes discomfort.
Existing websites therefore judge and screen out the dense fear pictures that cause this fear, but the existing approach relies mainly on manual judgment; its working efficiency is low and insufficient to cope with the massive amount of image data on the network.
Disclosure of Invention
In order to overcome the above drawbacks, the present invention provides a method and a system for discriminating a dense fear picture based on a machine deep learning technique.
The purpose of the invention is realized by the following technical scheme:
the invention relates to a method for judging dense fear pictures, which comprises the following steps:
establishing a sample picture library containing a positive sample picture and a negative sample picture;
respectively selecting positive and negative sample pictures in the sample picture library, and calculating the rotation invariant characteristic value of the selected pictures;
performing deep learning on the rotation invariant feature values in the positive sample picture and the negative sample picture respectively to obtain an identification model, wherein the identification model comprises a positive template and a negative template;
selecting a picture to be distinguished, and calculating the rotation invariant characteristic value of the selected picture;
and importing the rotation invariant feature value of the picture to be distinguished into the identification model to obtain the type of the picture to be distinguished.
In the present invention, before the calculating the rotation invariant feature value of the selected picture, the method includes:
and judging whether the selected picture is in a preset format or not, and if not, adjusting the picture to be in the preset format.
In the present invention, the determining whether the picture is in the predetermined format, and if not, adjusting the picture to the predetermined format includes:
and judging whether the positive and negative sample pictures are gray level pictures or not, and if not, adjusting the positive and negative sample pictures to be gray level pictures.
In the present invention, the determining whether the picture is in the predetermined format, and if not, adjusting the picture to the predetermined format further includes:
and judging whether the size values of the positive and negative sample pictures are in a preset size or not, and if not, adjusting the size values to be in the preset size.
In the present invention, the selecting the picture to be distinguished includes:
and importing the picture to be distinguished into a picture library to be distinguished, and selecting the picture to be distinguished from the picture library to be distinguished.
In the present invention, the calculating the rotation invariant feature value of the selected picture includes:
selecting a pixel point from the selected picture as a central pixel, and acquiring a pixel point adjacent to the central pixel, wherein all the pixel points adjacent to the central pixel form a neighborhood of the central pixel;
comparing and counting the gray value of the central pixel with the gray values of all adjacent pixel points in the neighborhood to obtain an original characteristic value of the central pixel;
taking the central pixel as a center, rotationally moving adjacent pixel points in the neighborhood, and calculating again to obtain a new original characteristic value of the central pixel;
and comparing all the original characteristic values in size, and taking the original characteristic value with the minimum value as the rotation invariant characteristic value.
In the present invention, importing the rotation invariant feature value of the picture to be distinguished into the identification model to obtain the type of the picture to be distinguished includes:
importing the rotation invariant feature value of the picture to be distinguished into the identification model, and matching it against the positive template and the negative template respectively;
if it matches the positive template, defining the picture to be distinguished as a dense fear picture; and if it matches the negative template, defining it as a non-dense fear picture.
The invention relates to a dense fear picture distinguishing system, which comprises:
a sample picture library storing positive sample pictures and negative sample pictures;
the picture library to be distinguished stores pictures to be distinguished;
the picture selection module is respectively connected with the sample picture library and the picture library to be distinguished and is used for selecting positive and negative sample pictures or pictures to be distinguished;
the characteristic value calculating module is connected with the picture selecting module and is used for calculating the rotation invariant characteristic values of the selected positive and negative sample pictures or the picture to be distinguished;
the model establishing module is connected with the characteristic value calculating module and is used for performing deep learning on the rotation invariant characteristic values in the positive sample picture and the negative sample picture respectively to obtain an identification model, and the identification model comprises a positive template and a negative template;
and the image matching module is respectively connected with the characteristic value calculating module and the model establishing module and is used for leading the rotation invariant characteristic value in the image to be distinguished into the identification model and acquiring the type of the image to be distinguished.
The present invention is an electronic device including:
a processor;
a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method for discriminating between dense fear pictures as described above.
The present invention is a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the dense fear picture discriminating method as described above.
The method obtains the features of positive and negative sample pictures, generates an identification model by combining a deep learning technique, imports the picture to be distinguished into the identification model, and determines by comparison whether it is a dense fear picture. The invention can distinguish most pictures, has strong universality and high accuracy, meets the requirement of automatic discrimination, and greatly improves the working efficiency of picture screening.
Drawings
For the purpose of easy explanation, the present invention will be described in detail with reference to the following preferred embodiments and the accompanying drawings.
Fig. 1 is a schematic workflow diagram of an embodiment of a method for discriminating a dense fear picture according to the present invention;
fig. 2 is a schematic workflow diagram of another embodiment of the method for discriminating the dense fear picture according to the present invention;
FIG. 3 is a schematic diagram of a workflow for calculating a rotation invariant feature value of a sample picture according to the present invention;
fig. 4 is a schematic logic structure diagram of an embodiment of the dense fear picture discrimination system according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like indicate orientations and positional relationships based on those shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be considered as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted", "connected", and "coupled" are to be construed broadly: the connection may be fixed, detachable, or integral; mechanical or electrical; and direct, or indirect through intervening media, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The following describes a method for determining a dense fear picture according to an embodiment of the present invention, referring to fig. 1, which includes:
s101, establishing a sample picture library
Establishing a sample picture library containing a positive sample picture and a negative sample picture; the positive sample picture is a picture which is already determined as the dense fear picture, and the negative sample picture is a picture which is already determined as the non-dense fear picture.
S102, calculating a rotation invariant characteristic value of the sample picture
And respectively selecting the positive sample picture and the negative sample picture in the sample picture library, and calculating the rotation invariant characteristic values of the selected positive sample picture and the selected negative sample picture to respectively obtain the rotation invariant characteristic value of the positive sample picture and the rotation invariant characteristic value of the negative sample picture.
S103, deep learning is carried out to obtain an identification model
Performing deep learning on the rotation invariant feature values of the plurality of positive sample pictures to obtain a positive template, and on the rotation invariant feature values of the negative sample pictures to obtain a negative template; the positive and negative templates constitute the identification model. Deep learning is a branch of machine learning based on representation learning of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values for each pixel, or more abstractly as a series of edges, regions of particular shapes, and so on. Tasks (e.g., face recognition or facial expression recognition) are easier to learn from examples when suitable representations are used. The benefit of deep learning is that it replaces manual feature engineering with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction.
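The patent does not disclose the concrete model used in the deep learning step. As a loose illustrative stand-in (an assumption, not the patented method), a class "template" might be sketched as the element-wise mean of the normalised histograms of the rotation invariant feature values of that class's sample pictures:

```python
def lbp_histogram(codes, bins=256):
    """Normalised histogram of one picture's rotation invariant LBP codes."""
    h = [0.0] * bins
    for c in codes:
        h[c] += 1.0
    total = len(codes) or 1
    return [v / total for v in h]

def build_template(code_lists):
    """Illustrative 'template' (an assumption): the element-wise mean of
    the histograms of all sample pictures of one class."""
    hists = [lbp_histogram(codes) for codes in code_lists]
    n = len(hists)
    return [sum(h[i] for h in hists) / n for i in range(len(hists[0]))]
```

Building one such template from the positive samples and one from the negative samples would yield the two-template identification model the text describes, though the actual patent relies on a learned model rather than simple averaging.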
S104, calculating the rotation invariant characteristic value of the selected picture
Selecting a picture to be distinguished, and calculating the rotation invariant characteristic value of the selected picture;
S105, importing into the identification model to obtain the picture type
Importing the rotation invariant feature value of the picture to be distinguished into the identification model to obtain the type of the picture to be distinguished.
For better understanding of the present invention, the following takes another embodiment as an example to specifically describe the method for discriminating the dense fear picture of the present invention, please refer to fig. 2, which includes:
s201, establishing a sample picture library
Establishing a sample picture library containing a positive sample picture and a negative sample picture; the positive sample picture is a picture which is already determined as the dense fear picture, and the negative sample picture is a picture which is already determined as the non-dense fear picture.
S202, adjusting the picture formats of the positive and negative sample pictures
Respectively selecting positive and negative sample pictures in the sample picture library, judging whether the selected pictures are in a preset format, and if not, adjusting the pictures to be in the preset format;
wherein the adjusting the picture into the predetermined format comprises:
judging whether the positive and negative sample pictures are gray level pictures or not, and if not, adjusting the positive and negative sample pictures to be gray level pictures; the selected image must be a gray scale image, and if the selected image is a color image, the selected image needs to be converted into the gray scale image.
Judging whether the size values of the positive and negative sample pictures are the preset sizes or not, and if not, adjusting the size values to be the preset sizes; the preferred size of the grayscale map is 224 x 224, which facilitates the processing of the image by the system.
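The two format adjustments above (grayscale conversion and resizing to the preset size) can be sketched in plain Python; the function names and the nearest-neighbour resize are illustrative assumptions, and a real system would more likely use an image-processing library:

```python
def to_grayscale(rgb):
    """Convert an RGB image (rows of (r, g, b) tuples) to a 2-D grayscale
    image using the ITU-R BT.601 luma weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbour resize of a 2-D grayscale image to the preset
    224 x 224 size mentioned above."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```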
S203, calculating a rotation invariant characteristic value of the sample picture
And calculating the rotation invariant characteristic values of the selected positive sample picture and the negative sample picture to respectively obtain the rotation invariant characteristic value of the positive sample picture and the rotation invariant characteristic value of the negative sample picture.
Wherein calculating the rotation invariant feature values of the picture comprises:
s2031, selecting a central pixel and a neighborhood thereof
Selecting a pixel point from the selected picture as a central pixel, and acquiring a pixel point adjacent to the central pixel, wherein all the pixel points adjacent to the central pixel form a neighborhood of the central pixel; selecting a central pixel and obtaining a neighborhood formed by pixel points in a 3 x 3 area adjacent to the central pixel.
S2032, obtaining the original characteristic value of the central pixel
Comparing the gray value of the central pixel with the gray values of all adjacent pixel points in the neighborhood and counting the results to obtain an original feature value of the central pixel. In the 3 x 3 neighborhood, the gray value of the central pixel is taken as the threshold: the gray values of the 8 adjacent pixels are each compared with it, and the position of a pixel point is marked 1 if its value is greater than or equal to the central value, and 0 otherwise (matching the sign function s(x) given below). The 8 points of the 3 x 3 neighborhood thus yield 8 binary bits through comparison; arranged in sequence, they form an 8-bit binary number, which is the original feature value of the central pixel.
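This comparison step can be sketched as follows. This is a minimal illustration: the clockwise enumeration order and the `start` parameter (used for the rotation in step S2033) are assumptions, and neighbours equal to the centre are marked 1, consistent with the sign function in the formula below:

```python
# The 8 neighbours of a pixel in its 3 x 3 neighbourhood, enumerated
# clockwise starting from the top-left corner (the order is an assumption).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_original(img, y, x, start=0):
    """8-bit original feature value of the pixel at (y, x): each neighbour
    whose gray value is >= the centre contributes a 1 bit; `start` chooses
    which neighbour supplies the first bit (used for rotation in S2033)."""
    centre = img[y][x]
    bits = [1 if img[y + dy][x + dx] >= centre else 0 for dy, dx in OFFSETS]
    bits = bits[start:] + bits[:start]   # rotate the starting point
    return int("".join(map(str, bits)), 2)
```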
S2033, obtaining new original characteristic value by neighborhood rotation and movement
Taking the central pixel as a center, rotationally moving adjacent pixel points in the neighborhood, and calculating again to obtain a new original characteristic value of the central pixel;
the method comprises the following steps: firstly, clockwise rotating the neighborhood, obtaining a series of LBP characteristic values according to different selected starting points, and selecting the original LBP characteristic with the minimum LBP characteristic value as a central pixel point from the LBP characteristic values.
The above process is expressed by the formula:

LBP_b(x_c, y_c) = \sum_{P_b=0}^{7} s(i_{P_b} - i_c) \, 2^{P_b}

where (x_c, y_c) are the coordinates of the central pixel, b is the starting pixel specified in the neighborhood, P_b indexes the P_b-th neighborhood pixel when the single LBP value with b as the starting point is calculated, i_{P_b} is the gray value of the P_b-th neighborhood pixel, i_c is the gray value of the central pixel, and s(x) is the sign function:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}
s2034, determining a rotation invariant characteristic value
And comparing all the original characteristic values in size, and taking the original characteristic value with the minimum value as the rotation invariant characteristic value.
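Steps S2033 and S2034 together amount to taking the minimum over all eight starting points. A self-contained sketch (function names are illustrative):

```python
# Clockwise 3 x 3 neighbours, starting from the top-left (order assumed).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x, start):
    """Original feature value with a given neighbourhood starting point."""
    centre = img[y][x]
    bits = [1 if img[y + dy][x + dx] >= centre else 0 for dy, dx in OFFSETS]
    bits = bits[start:] + bits[:start]
    return int("".join(map(str, bits)), 2)

def lbp_rotation_invariant(img, y, x):
    """Rotation invariant feature value: the minimum original feature value
    over the eight possible starting points (steps S2033-S2034)."""
    return min(lbp_code(img, y, x, s) for s in range(8))
```

Because every rotation of the same neighbourhood pattern maps to the same minimum, the resulting value is unchanged when the picture content is rotated, which is what makes the feature "rotation invariant".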
S204, deep learning is carried out to obtain an identification model
Performing deep learning on the rotation invariant feature values of the plurality of positive sample pictures to obtain a positive template, and on those of the negative sample pictures to obtain a negative template; the positive and negative templates constitute the identification model. Deep learning here is applied as described in step S103.
S205, adjusting the picture format of the picture to be distinguished
Selecting a picture to be distinguished, judging whether the selected picture to be distinguished is in a preset format or not, and if not, adjusting the picture to be distinguished to be in the preset format;
wherein the adjusting the picture into the predetermined format comprises:
judging whether the positive and negative sample pictures are gray level pictures or not, and if not, adjusting the positive and negative sample pictures to be gray level pictures; the selected image must be a gray scale image, and if the selected image is a color image, the selected image needs to be converted into the gray scale image.
Judging whether the size values of the positive and negative sample pictures are the preset sizes or not, and if not, adjusting the size values to be the preset sizes; the preferred size of the grayscale map is 224 x 224, which facilitates the processing of the image by the system.
S206, calculating the rotation invariant characteristic value of the selected picture
Calculating the rotation invariant feature value of the selected picture;
the method for calculating the rotation invariant feature value of the picture in this step is consistent with steps S2031 to S2034.
S207, importing the identification model for matching
Importing the rotation invariant feature value of the picture to be distinguished into the identification model, and matching it against the positive template and the negative template respectively;
s208, acquiring the type of the picture to be distinguished
If it matches the positive template, defining the picture to be distinguished as a dense fear picture; if it matches the negative template, defining it as a non-dense fear picture; the type of the picture to be distinguished is thus obtained from the matching result.
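The patent does not state how the matching against the two templates is scored. One hedged sketch (the L1 histogram distance, the tie-breaking rule, and all names here are assumptions) is to assign the picture to whichever template its LBP-code histogram is closer to:

```python
def lbp_histogram(codes, bins=256):
    """Normalised histogram of a picture's rotation invariant LBP codes."""
    h = [0.0] * bins
    for c in codes:
        h[c] += 1.0
    total = len(codes) or 1
    return [v / total for v in h]

def classify(codes, positive_template, negative_template):
    """Assign the picture to whichever template its histogram is closer to
    (L1 distance); 'dense fear' when the positive template wins."""
    h = lbp_histogram(codes)
    d_pos = sum(abs(a - b) for a, b in zip(h, positive_template))
    d_neg = sum(abs(a - b) for a, b in zip(h, negative_template))
    return "dense fear" if d_pos <= d_neg else "non-dense fear"
```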
Referring to fig. 4, the present invention is a system for discriminating dense fear pictures, including:
a sample picture library 301, wherein the sample picture library 301 stores a positive sample picture and a negative sample picture;
a to-be-determined picture library 302, wherein pictures to be determined are stored in the to-be-determined picture library 302;
a picture selecting module 303, wherein the picture selecting module 303 is respectively connected with the sample picture library 301 and the picture library 302 to be distinguished, and is used for selecting positive and negative sample pictures or pictures to be distinguished;
a feature value calculating module 304, wherein the feature value calculating module 304 is connected to the picture selecting module 303, and is configured to calculate a rotation invariant feature value of the selected positive and negative sample pictures or the picture to be distinguished;
the model establishing module 305 is connected to the characteristic value calculating module 304, and is configured to perform deep learning on the rotation invariant characteristic values in the positive and negative sample pictures respectively to obtain an identification model, where the identification model includes a positive template and a negative template;
a picture matching module 306, where the picture matching module 306 is respectively connected to the eigenvalue calculation module 304 and the model establishment module 305, and is configured to import the rotation invariant eigenvalue in the picture to be distinguished into the identification model, and obtain the type of the picture to be distinguished.
The modules in this embodiment may be implemented in software or in hardware, and the described modules may also be disposed in a processor. The names of these modules do not, in some cases, constitute a limitation of the modules themselves.
The present invention may be an electronic device including:
a processor;
a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method for discriminating between dense fear pictures as described above.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, enable the electronic device to implement the method for discriminating the dense fear picture as described in the above embodiments.
The present invention may also be a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for discriminating a dense fear picture as described above. For example, the present embodiments include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the method flow described above.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
In the description of the present specification, reference to the description of the terms "one embodiment", "some embodiments", "an illustrative embodiment", "an example", "a specific example", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A dense fear picture judging method is characterized by comprising the following steps:
establishing a sample picture library containing a positive sample picture and a negative sample picture;
respectively selecting positive and negative sample pictures in the sample picture library, and calculating the rotation invariant characteristic value of the selected pictures;
performing deep learning on the rotation invariant feature values in the positive sample picture and the negative sample picture respectively to obtain an identification model, wherein the identification model comprises a positive template and a negative template, performing deep learning on the rotation invariant feature values in the positive sample picture to obtain a positive template, and performing deep learning on the rotation invariant feature values in the negative sample picture to obtain a negative template, wherein the positive template and the negative template form the identification model;
selecting a picture to be distinguished, and calculating the rotation invariant characteristic value of the selected picture;
importing the rotation invariant feature value in the picture to be distinguished into the identification model to acquire the type of the picture to be distinguished, wherein the importing the rotation invariant feature value in the picture to be distinguished into the identification model to acquire the type of the picture to be distinguished comprises: importing the rotation invariant feature value in the picture to be distinguished into the identification model, and matching it against the positive template and the negative template respectively; if it matches the positive template, defining the picture to be distinguished as a dense fear picture; and if it matches the negative template, defining the picture to be distinguished as a non-dense fear picture.
2. The method for discriminating the dense fear picture according to claim 1, wherein before the calculating the rotation invariant feature value of the selected picture, the method comprises:
and judging whether the selected picture is in a preset format or not, and if not, adjusting the picture to be in the preset format.
3. The method for discriminating the dense fear picture as claimed in claim 2, wherein the step of judging whether the picture is in a predetermined format, and if not, the step of adjusting the picture to the predetermined format comprises:
and judging whether the positive and negative sample pictures are gray level pictures or not, and if not, adjusting the positive and negative sample pictures to be gray level pictures.
4. The method for discriminating dense fear pictures according to claim 3, wherein judging whether the picture is in the preset format and, if not, adjusting the picture to the preset format further comprises:
judging whether the sizes of the positive and negative sample pictures equal a preset size, and if not, adjusting them to the preset size.
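Claims 3 and 4 together characterise the preset format as a grayscale picture of a preset size. A minimal pure-Python sketch of both adjustments follows; the BT.601 luma weights and nearest-neighbour resampling are illustrative choices, since the claims only require the grayscale and fixed-size results, not a particular conversion method.

```python
def to_grayscale(rgb):
    """Convert an RGB picture (rows of (r, g, b) tuples) to gray values.

    ITU-R BT.601 luma weights are a common convention; the claims only
    require that the result be a grayscale picture.
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb]

def resize_nearest(img, width, height):
    """Scale a 2D picture to a preset size by nearest-neighbour sampling."""
    src_h, src_w = len(img), len(img[0])
    return [[img[y * src_h // height][x * src_w // width]
             for x in range(width)] for y in range(height)]
```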
5. The method for discriminating dense fear pictures according to claim 4, wherein selecting the picture to be distinguished comprises:
importing pictures to be distinguished into a picture library to be distinguished, and selecting the picture to be distinguished from the picture library to be distinguished.
6. The method for discriminating dense fear pictures according to claim 5, wherein calculating the rotation invariant feature value of the selected picture comprises:
selecting a pixel from the selected picture as a central pixel, and acquiring the pixels adjacent to the central pixel, wherein all the pixels adjacent to the central pixel form a neighborhood of the central pixel;
comparing the gray value of the central pixel with the gray values of all adjacent pixels in the neighborhood and counting the comparison results to obtain an original feature value of the central pixel;
rotating the adjacent pixels in the neighborhood about the central pixel, and recalculating to obtain a new original feature value of the central pixel;
comparing all the original feature values, and taking the minimum original feature value as the rotation invariant feature value.
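The four steps of claim 6 describe a rotation invariant local binary pattern (LBP). A minimal sketch for a single central pixel in an 8-neighbour, 3×3 configuration follows; the clockwise bit ordering and the ">= centre" comparison are illustrative conventions, as the claims fix only the compare-count-rotate-minimum procedure.

```python
def rotation_invariant_lbp(img, x, y):
    """Rotation invariant feature value of the pixel at (x, y).

    Each of the 8 neighbours is compared with the central gray value
    (bit 1 if >= centre, else 0); the bit ring is rotated into all 8
    positions, each rotation is read as a binary number (an original
    feature value), and the minimum of those numbers is kept, so the
    result is unchanged when the picture is rotated.
    """
    centre = img[y][x]
    # Clockwise ring of the 8 neighbours, starting at the top-left
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    bits = [1 if img[y + dy][x + dx] >= centre else 0 for dx, dy in offsets]
    codes = []
    for r in range(8):  # every rotational shift of the neighbourhood
        rotated = bits[r:] + bits[:r]
        codes.append(sum(b << i for i, b in enumerate(rotated)))
    return min(codes)  # the minimum original feature value is rotation invariant
```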
7. A dense fear picture discrimination system, comprising:
a sample picture library storing positive sample pictures and negative sample pictures;
a picture library to be distinguished, storing pictures to be distinguished;
a picture selection module, connected to the sample picture library and the picture library to be distinguished respectively, for selecting positive and negative sample pictures or a picture to be distinguished;
a feature value calculating module, connected to the picture selection module, for calculating the rotation invariant feature values of the selected positive and negative sample pictures or of the picture to be distinguished;
a model establishing module, connected to the feature value calculating module, for performing deep learning on the rotation invariant feature values in the positive sample pictures and the negative sample pictures respectively to obtain an identification model, wherein deep learning on the rotation invariant feature values in the positive sample pictures yields a positive template, deep learning on the rotation invariant feature values in the negative sample pictures yields a negative template, and the positive template and the negative template together form the identification model;
a picture matching module, connected to the feature value calculating module and the model establishing module respectively, for importing the rotation invariant feature value in the picture to be distinguished into the identification model to acquire the type of the picture to be distinguished, which comprises: importing the rotation invariant feature value in the picture to be distinguished into the identification model and matching it against the positive template and the negative template respectively; if it matches the positive template, defining the picture to be distinguished as a dense fear picture; and if it matches the negative template, defining the picture to be distinguished as a non-dense-fear picture.
8. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method for discriminating dense fear pictures according to any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for discriminating dense fear pictures according to any one of claims 1 to 6.
CN201910110928.8A 2019-02-12 2019-02-12 Dense fear picture distinguishing method, system, equipment and storage medium thereof Active CN109829503B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910110928.8A CN109829503B (en) 2019-02-12 2019-02-12 Dense fear picture distinguishing method, system, equipment and storage medium thereof


Publications (2)

Publication Number Publication Date
CN109829503A CN109829503A (en) 2019-05-31
CN109829503B true CN109829503B (en) 2021-12-17

Family

ID=66863574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910110928.8A Active CN109829503B (en) 2019-02-12 2019-02-12 Dense fear picture distinguishing method, system, equipment and storage medium thereof

Country Status (1)

Country Link
CN (1) CN109829503B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605811A (en) * 2013-12-10 2014-02-26 三峡大学 Texture image retrieval method and device
CN104077613A (en) * 2014-07-16 2014-10-01 电子科技大学 Crowd density estimation method based on cascaded multilevel convolution neural network
CN104239896A (en) * 2014-09-04 2014-12-24 四川省绵阳西南自动化研究所 Method for classifying crowd density degrees in video image
CN104268528A (en) * 2014-09-28 2015-01-07 深圳市科松电子有限公司 Method and device for detecting crowd gathered region
CN107506692A (en) * 2017-07-21 2017-12-22 天津大学 A kind of dense population based on deep learning counts and personnel's distribution estimation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986824B (en) * 2018-07-09 2022-12-27 宁波大学 Playback voice detection method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Estimation of Crowd Density based on Adaptive LBP; Yue Li et al.; Advanced Materials Research; 2014-07-30; pp. 864-868 *
Classification of brain MR images based on LBP and extreme learning machine; He Qijia et al.; Journal of Shandong University (Engineering Science); 2017-03-22; Vol. 47, No. 2, pp. 86-93, Sections 1-2 *
Research on crowd density estimation and behavior analysis methods based on deep learning; Cheng Jingeng; China Master's Theses Full-text Database, Social Sciences II; 2019-01-15; p. H123-6, main text p. 36, para. 2 *

Also Published As

Publication number Publication date
CN109829503A (en) 2019-05-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 361000 Area 1F-D1, Huaxun Building A, Software Park, Xiamen Torch High-tech Zone, Xiamen City, Fujian Province

Applicant after: Xiamen Meishao Co., Ltd.

Address before: Unit G03, Room 102, 22 Guanri Road, Phase II, Xiamen Software Park, Fujian Province

Applicant before: XIAMEN MEIYOU INFORMATION SCIENCE & TECHNOLOGY Co.,Ltd.

GR01 Patent grant