CN112464850A - Image processing method, image processing apparatus, computer device, and medium - Google Patents


Info

Publication number
CN112464850A
Authority
CN
China
Prior art keywords
outline
sub
matrix
image
similarity
Prior art date
Legal status
Granted
Application number
CN202011421618.7A
Other languages
Chinese (zh)
Other versions
CN112464850B (en)
Inventor
梁帆 (Liang Fan)
Current Assignee
Dongguan Prophet Big Data Co., Ltd.
Original Assignee
Dongguan Prophet Big Data Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Dongguan Prophet Big Data Co., Ltd.
Priority claimed from CN202011421618.7A
Publication of CN112464850A
Application granted
Publication of CN112464850B
Status: Active


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/22 Matching criteria, e.g. proximity measures
                        • G06F 18/23 Clustering techniques
                        • G06F 18/24 Classification techniques
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/045 Combinations of networks
                        • G06N 3/08 Learning methods
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00 Arrangements for image or video recognition or understanding
                    • G06V 10/40 Extraction of image or video features
                        • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
                        • G06V 10/56 Extraction of image or video features relating to colour
                • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention relates to the technical field of image recognition, and discloses an image processing method, an image processing apparatus, computer equipment, and a medium. The method extracts figure outline images and classifies the RGB coordinates of the pixel points in the figure outline region of each figure outline image; divides the figure outline region into n sub-regions and m sub-regions along the x-axis and y-axis directions of the spatial coordinates, respectively, to obtain the corresponding vector sets V and U; converts the vector sets V and U into an n × k dimensional feature matrix V_d and an m × k dimensional feature matrix U_d, respectively; clusters all the feature matrices corresponding to the figure outline images contained in each frame of picture to be detected; calculates the similarities v_s and u_s between the feature matrices V_c and U_c of each cluster center and any element of the standard matrix sets D_v and D_u, respectively; and determines from the similarity calculation results whether the picture to be detected contains a target person. The embodiment of the invention can significantly improve the efficiency and safety of supervision and reduce the workload of supervisory personnel.

Description

Image processing method, image processing apparatus, computer device, and medium
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, and a medium.
Background
With the rapid development of information technology led by the internet and artificial intelligence, improving campus security through information technology will become an inevitable trend in the development of safe campuses. The control of campus food safety has always been a key social concern, and controlling the entry and exit of non-staff in campus canteen operating rooms and warehouses is a key point of safety precaution. Schools usually prohibit non-canteen workers from entering these areas; the current practice is typically to post related warning signs and to monitor in real time with additional cameras. However, canteen work is busy and involves many people: once a stranger enters the canteen, he or she cannot be found in time, the after-the-fact approach of video spot checks cannot meet the supervision requirement, risk early warning cannot be given in time, and canteen safety management suffers.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, an image processing apparatus, a computer device, and a medium, which extract and analyze color information of a person image in a surveillance video, and compare the color information with a standard color of a canteen worker, so that an intelligent warning can be performed on an illegal person entering or exiting the canteen, the efficiency and the safety of supervision are significantly improved, and the workload of a supervisor can be reduced.
To solve the above technical problem, an embodiment of the present invention provides an image processing method, including:
extracting a figure outline image from the picture to be detected by using a target detection model trained based on R-CNN to obtain a figure outline image set contained in each frame of picture to be detected;
classifying the RGB coordinates of the pixel points in the figure outline area in each figure outline image to obtain the color category of the pixel points in the figure outline area;
dividing the figure outline area into n sub-areas along the x-axis direction of the space coordinate of the figure outline image to obtain a vector set V corresponding to the n sub-areas, and dividing the figure outline area into m sub-areas along the y-axis direction of the space coordinate of the figure outline image to obtain a vector set U corresponding to the m sub-areas; wherein, the elements of the vector set V are one-dimensional color vectors respectively composed of pixel points in each sub-region of the n sub-regions; the elements of the vector set U are one-dimensional color vectors respectively formed by pixel points in each sub-area of the m sub-areas; n and m are positive integers greater than 1;
converting the vector set V into an n × k dimensional feature matrix V_d, and converting the vector set U into an m × k dimensional feature matrix U_d; wherein k is the number of the color categories;
clustering all the feature matrices corresponding to the figure outline images included in each frame of picture to be detected to obtain N cluster centers; the feature matrices of each cluster center are an n × k feature matrix V_c and an m × k feature matrix U_c;
respectively calculating the similarity v_s between the feature matrix V_c of each cluster center and any element of the standard matrix set D_v, and the similarity u_s between the feature matrix U_c and any element of the standard matrix set D_u, and determining, from the similarity calculation results v_s and u_s, whether the picture to be detected contains a target person; wherein the standard matrix sets D_v and D_u are used for characterizing reference person features.
An embodiment of the present invention further provides an image processing apparatus, including:
the extraction module is used for extracting a figure outline image from the picture to be detected by using a target detection model trained based on R-CNN to obtain a figure outline image set contained in each frame of picture to be detected;
the color classification module is used for classifying the RGB coordinates of the pixel points in the figure outline area in each figure outline image to obtain the color category of the pixel points in the figure outline area;
the vector conversion module is used for dividing the figure outline region into n sub-regions along the x-axis direction of the space coordinate of the figure outline image to obtain a vector set V corresponding to the n sub-regions, and dividing the figure outline region into m sub-regions along the y-axis direction of the space coordinate of the figure outline image to obtain a vector set U corresponding to the m sub-regions; wherein, the elements of the vector set V are one-dimensional color vectors respectively composed of pixel points in each sub-region of the n sub-regions; the elements of the vector set U are one-dimensional color vectors respectively formed by pixel points in each sub-area of the m sub-areas; n and m are positive integers greater than 1;
a feature matrix conversion module for converting the vector set V into an n × k dimensional feature matrix V_d and converting the vector set U into an m × k dimensional feature matrix U_d, wherein k is the number of the color categories;
a clustering module for clustering all the feature matrices corresponding to the figure outline images contained in each frame of picture to be detected to obtain N cluster centers, the feature matrices of each cluster center being an n × k feature matrix V_c and an m × k feature matrix U_c;
a similarity calculation module for calculating the similarity v_s between the feature matrix V_c of each cluster center and any element of the standard matrix set D_v, and the similarity u_s between the feature matrix U_c and any element of the standard matrix set D_u;
a determination module for determining, from the similarity calculation results v_s and u_s, whether the picture to be detected contains a target person; wherein the standard matrix sets D_v and D_u are used for characterizing reference person features.
An embodiment of the present invention further provides a computer device, including: a memory storing a computer program and a processor running the computer program to implement the image processing method as described above.
Embodiments of the present invention also provide a storage medium storing a computer-readable program for causing a computer to execute the image processing method as described above.
The image processing method provided by the embodiment of the invention classifies the colors of the extracted figure outline images and divides the figure outline region along the x-axis and y-axis directions of its spatial coordinates to obtain n sub-regions and m sub-regions, respectively; obtains the vector set V corresponding to the n sub-regions and the vector set U corresponding to the m sub-regions; converts the vector sets V and U into the feature matrices V_d and U_d, respectively; and then clusters all the feature matrices corresponding to the figure outline images contained in each frame of picture to be detected to obtain the feature matrices V_c and U_c of N cluster centers. The similarity between the feature matrices V_c and U_c of each cluster center and any element of the standard matrix sets D_v and D_u is then calculated, so that whether the picture to be detected contains a target person is determined from the similarity results. Therefore, the embodiment of the invention can determine, by analyzing and comparing the colors of the figure outline regions, whether people whose clothes are inconsistent with the standard clothes of canteen workers enter or exit the canteen, which not only remarkably improves the real-time performance and safety of canteen supervision, but also reduces the workload of supervisors.
As an embodiment, the following clustering objective function is adopted to obtain the cluster centers of each class:

J = Σ_{i=1}^{N} Σ_{j=1}^{w} (u_ij)^b · d(D_j, N_i), where d(D_j, N_i) = Σ_p Σ_m (d_pm − n_pm)²

When the clustering objective function is iterated to its minimum value, the cluster centers of each class are obtained. Here the feature matrix of the i-th cluster center is N_i; u_ij is the membership degree of the feature matrix D_j (corresponding to a figure outline image contained in each frame of picture to be detected) to class i; w is the number of feature matrices; b > 1 is the fuzzification exponent; and d_pm and n_pm are the matrix elements at the (p, m)-th position of the feature matrix D_j and the cluster center matrix N_i, respectively. The membership degrees satisfy the constraint

Σ_{i=1}^{N} u_ij = 1, j = 1, …, w.
as an embodiment, the dividing the human figure outline region into n sub-regions along the x-axis direction of the space coordinate of the human figure outline image includes:
dividing the figure outline area into n parts along the x-axis direction in an equidistant mode to obtain n sub-areas;
the dividing the figure outline area into m sub-areas along the y-axis direction of the space coordinate of the figure outline image comprises the following steps:
dividing the figure outline area into m parts along the y-axis direction in an equidistant manner to obtain the m sub-areas.
As an embodiment, before the obtaining the vector set V corresponding to the n sub-regions, the method further includes:
removing noise pixel points in the figure outline region along the x-axis direction;
before the obtaining of the vector set U corresponding to the m sub-regions, the method further includes:
and removing noise pixel points in the figure outline region along the y-axis direction.
As an embodiment, the removing noise pixel points in the human figure outline region along the x-axis direction includes:
classifying the pixels in the figure outline region according to the x-axis coordinate value of the pixels and the color category to obtain the number of the pixels in each category, obtaining the pixel height ratio of the number of the pixels in each category to the maximum number of the pixels in the y axis of the figure outline image, and deleting the pixels in the categories of which the pixel height ratio is smaller than the pixel height ratio threshold;
the removing of the noise pixel points in the figure outline region along the y-axis direction includes:
classifying the pixels in the figure outline region according to the y-axis coordinate value of the pixels and the color category to obtain the number of the pixels in each category, obtaining the pixel length ratio of the number of the pixels in each category to the maximum number of the pixels in the x axis of the figure outline image, and deleting the pixels in the categories of which the pixel length ratio is smaller than the pixel length ratio threshold.
As an embodiment, determining from the similarity calculation results v_s and u_s whether the picture to be detected contains a target person includes:
if the similarity v_s is less than a first similarity threshold and the similarity u_s is less than a second similarity threshold, determining that the picture to be detected contains the target person.
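As a minimal sketch of this decision rule (the threshold values here are illustrative assumptions, not taken from the patent):

```python
def is_target_person(v_s: float, u_s: float,
                     first_threshold: float = 0.9,
                     second_threshold: float = 0.9) -> bool:
    """Flag a detected person as a target (dress inconsistent with the
    reference) when both similarities to the standard matrices fall
    below their thresholds. Threshold values are illustrative only."""
    return v_s < first_threshold and u_s < second_threshold

# High similarity on both axes: the person matches the standard uniform.
print(is_target_person(0.97, 0.95))  # False
# Low similarity on both axes: a potential stranger is flagged.
print(is_target_person(0.42, 0.38))  # True
```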
As an embodiment, converting the vector set V into an n × k dimensional feature matrix V_d includes:
forming, from each vector v_i in the vector set V, a new vector v_i = (p_i1, …, p_ik) of color proportions, and composing all the vectors v_i, in subscript order, into the n × k dimensional feature matrix V_d.
Converting the vector set U into an m × k dimensional feature matrix U_d includes:
forming, from each vector u_i in the vector set U, a new vector u_i = (q_i1, …, q_ik) of color proportions, and composing all the vectors u_i, in subscript order, into the m × k dimensional feature matrix U_d.
As an embodiment, before classifying the RGB coordinates of the pixel points in the figure outline region in each figure outline image, the method further includes:
performing brightness normalization on the figure outline images in the figure outline image set using the following formula:

x' = (x − min) / (max − min)

where x is the coordinate of any one of the RGB channels of any pixel point in the figure outline image, and max and min are the maximum and minimum values, respectively, of the channel in which x lies.
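A minimal NumPy sketch of this per-channel min-max normalization (array shapes and function names are assumptions for illustration):

```python
import numpy as np

def normalize_brightness(img: np.ndarray) -> np.ndarray:
    """Min-max normalize each RGB channel of an H x W x 3 image
    independently: x' = (x - min) / (max - min)."""
    img = img.astype(np.float64)
    mins = img.min(axis=(0, 1), keepdims=True)   # per-channel minimum
    maxs = img.max(axis=(0, 1), keepdims=True)   # per-channel maximum
    return (img - mins) / np.maximum(maxs - mins, 1e-12)

rng = np.random.default_rng(0)
img = rng.integers(30, 220, size=(4, 5, 3))      # toy "outline image"
out = normalize_brightness(img)
print(out.min(), out.max())  # 0.0 1.0
```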
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is understood that the drawings in the following description show only embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a normalized figure outline image of a target person, according to an embodiment;
FIG. 3 is a schematic diagram of a normalized figure outline image of a reference person, according to an embodiment;
FIG. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in order to provide a better understanding of the present invention; however, the technical solutions claimed in the present invention can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The embodiment of the invention provides an image processing method that can be applied to a server and is particularly suitable for, but not limited to, automatically identifying strangers entering and exiting a dining hall. As shown in fig. 1, the image processing method of the present embodiment includes steps 101 to 107.
Step 101: and extracting a figure outline image from the picture to be detected by using a target detection model trained based on R-CNN to obtain a figure outline image set contained in each frame of picture to be detected.
In this embodiment, the target detection model is obtained by training based on an R-CNN (Region-CNN) technical framework. The picture to be detected can be obtained from image data sent by the video monitoring equipment. The target detection model respectively detects each frame of picture to be detected and extracts the figure outline image, and each frame of picture to be detected can contain a plurality of figure outline images, so that a figure outline image set is obtained. The figure outline may be image data from which background information is removed, and the figure outline contains dress information of the figure, such as the style and color of the figure dress.
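The detector itself is outside the scope of this sketch; assuming a binary person mask has already been produced by an R-CNN-family model, the background-removal step can be illustrated as follows (names and shapes are assumptions):

```python
import numpy as np

def extract_outline_image(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Given a video frame (H x W x 3) and a binary person mask (H x W),
    e.g. from an R-CNN-family detector, zero out the background so only
    the figure outline region keeps its colour (dress) information."""
    out = frame.copy()
    out[~mask.astype(bool)] = 0
    return out

frame = np.full((3, 3, 3), 200, dtype=np.uint8)  # toy uniform-grey frame
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True                                # a single "person" pixel
outline = extract_outline_image(frame, mask)
print(outline[1, 1].tolist(), outline[0, 0].tolist())  # [200, 200, 200] [0, 0, 0]
```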
Step 102: and classifying the RGB coordinates of the pixel points in the figure outline region in each figure outline image to obtain the color categories of the pixel points in the figure outline region.
In this embodiment, the trained color classifier may be used to classify the colors of the pixels in the character profile area, and in the classified character profile data, the color classification of each pixel is used to replace the RGB coordinates of each pixel, for example, the feature information of the pixel before classification includes a pixel position coordinate and an RGB coordinate, and the feature information of the pixel after classification includes a pixel position coordinate and a color category thereof, thereby achieving the preliminary dimension reduction of the image data. The color category of the color classifier and the name thereof may be determined according to the color characteristics of the classified object, such as setting the color classification of the person's clothing to 15 color categories. The present embodiment does not specifically limit the color classifier.
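The patent leaves the colour classifier unspecified; one simple realization of the dimension reduction in step 102 is a nearest-palette classifier. The palette below is an illustrative assumption (the patent mentions, e.g., 15 clothing colour categories):

```python
import numpy as np

# Illustrative palette standing in for the trained colour classifier.
PALETTE = np.array([
    [0, 0, 0],        # 0: black
    [255, 255, 255],  # 1: white
    [255, 0, 0],      # 2: red
    [0, 0, 255],      # 3: blue
], dtype=np.float64)

def classify_colors(pixels: np.ndarray) -> np.ndarray:
    """Replace each pixel's RGB coordinate by the index of the nearest
    palette colour - the preliminary dimension reduction of step 102."""
    d = np.linalg.norm(pixels[:, None, :] - PALETTE[None, :, :], axis=2)
    return d.argmin(axis=1)

pixels = np.array([[250, 10, 5], [10, 20, 240], [240, 240, 250]])
print(classify_colors(pixels).tolist())  # [2, 3, 1]
```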
Step 103: dividing the figure outline area into n sub-areas along the x-axis direction of the space coordinate of the figure outline image to obtain vector sets V corresponding to the n sub-areas, and dividing the figure outline area into m sub-areas along the y-axis direction of the space coordinate of the figure outline image to obtain vector sets U corresponding to the m sub-areas.
Wherein, the elements of the vector set V are one-dimensional color vectors respectively formed by the pixel points in each of the n sub-regions; the elements of the vector set U are one-dimensional color vectors respectively formed by the pixel points in each of the m sub-regions; and n and m are positive integers greater than 1.
Optionally, in this embodiment, the step 103 of dividing the human figure region into n sub-regions may include: the human figure outline region is divided into n parts at equal distances along the x-axis direction to obtain n sub-regions. Similarly, the step 103 of dividing the human figure region into m sub-regions may include: the human figure outline area is divided into m parts at equal distances along the y-axis direction to obtain m sub-areas. Optionally, in this embodiment, before obtaining the vector set V corresponding to the n sub-regions, the method may further include: removing noise pixel points in the figure outline area along the x-axis direction; before obtaining the vector set U corresponding to the m sub-regions, the method may further include: and removing noise pixel points in the figure outline area along the y-axis direction. By way of example and not limitation, removing noise pixel points in the human figure outline region along the x-axis direction may include: classifying the pixel points in the figure outline region according to the x-axis coordinate values and the color categories of the pixel points to obtain the number of the pixel points of each category, obtaining the pixel height ratio of the number of the pixel points of each category to the maximum pixel point number on the y axis of the figure outline image, and deleting the pixel points in the categories of which the pixel height ratio is smaller than the pixel height ratio threshold; removing noise pixel points in the human figure outline region along the y-axis direction may include: classifying the pixel points in the figure outline region according to the y-axis coordinate values and the color categories of the pixel points to obtain the number of the pixel points of each category, obtaining the pixel length ratio of the number of the pixel points of each category to the maximum pixel point number on the x axis of the figure outline image, and deleting the pixel points in 
the categories of which the pixel length ratio is smaller than the pixel length ratio threshold.
The generation of the vector set V and the vector set U is described in detail below.
As shown in fig. 2, the figure outline image may be equally divided into 6 sub-regions along the x-axis direction of the spatial coordinates, that is, the figure outline region may be divided into 6 parts parallel to the y-axis and distributed at equal intervals; the number of pixel points in each sub-region may differ because of the irregularity of the figure itself. Each sub-region may then be denoised along the x-axis. Specifically, the pixel points of the image region corresponding to each value of the x-axis may be classified to obtain the number of pixel points of each classification category; the ratio of each category's pixel count to the longest pixel distance max(y) − min(y) on the y-axis (i.e., the maximum number of pixel points along the y-axis) is its pixel height ratio, and categories whose pixel height ratio is less than the pixel height ratio threshold are deleted. The remaining categories are then sorted by their x-axis values and evenly divided into n groups (the numbers of pixel columns in any two groups differ by no more than 1, i.e., the figure outline region is equidistantly divided into n sub-regions along the x-axis direction), and the color categories of the pixel points in each group (i.e., each sub-region) form one vector, giving the vector set V = {v_1, …, v_n}. Similarly, the figure outline image may be equally divided into 6 sub-regions along the y-axis direction of the spatial coordinates, that is, the figure outline region may be divided into 6 parts parallel to the x-axis and distributed at equal intervals, and noise removal may then be performed on each sub-region.
Specifically, the pixel points of the image region corresponding to each value of the y-axis may be classified to obtain the number of pixel points of each classification category; categories whose pixel length ratio, namely the ratio of the category's pixel count to the longest pixel distance max(x) − min(x) on the x-axis (i.e., the maximum number of pixel points along the x-axis), is less than the pixel length ratio threshold are deleted. The remaining categories are then sorted by their y-axis values and evenly divided into m groups (i.e., the figure outline region is equally divided into m sub-regions along the y-axis direction), and the color categories of the pixel points in each group (i.e., each sub-region) form one vector, giving the vector set U = {u_1, …, u_m}. The pixel height ratio threshold and the pixel length ratio threshold may be set empirically, as long as interference caused by ambient light and the like is effectively filtered out; no specific limitation is made here. In this way, the noise in the figure outline region can be removed accurately and efficiently, which helps improve the image identification precision. It is understood that the present embodiment does not specifically limit the manner, sequence, and the like of the above denoising processing.
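The x-direction pass can be sketched as follows (function and parameter names are assumptions; the y-direction pass is symmetric):

```python
import numpy as np

def column_vectors(xs, ys, labels, n, height_ratio_thresh=0.05):
    """x-direction pass of step 103: xs/ys are the pixel coordinates inside
    the outline region, labels their colour classes. (Column, class) groups
    with too few pixels relative to the outline's y-extent are dropped as
    noise, then the surviving columns are split equidistantly into n groups;
    each group's colour labels form one vector v_i of the set V."""
    y_extent = ys.max() - ys.min() + 1            # max pixel count along y
    keep = np.ones(len(xs), dtype=bool)
    for x in np.unique(xs):
        col = xs == x
        for c in np.unique(labels[col]):
            cls = col & (labels == c)
            if cls.sum() / y_extent < height_ratio_thresh:
                keep &= ~cls                      # delete sparse class
    xs, labels = xs[keep], labels[keep]
    order = np.argsort(xs, kind="stable")
    groups = np.array_split(order, n)             # n equidistant groups
    return [labels[g] for g in groups]

xs = np.repeat(np.arange(6), 4)                   # 6 columns, 4 pixels each
ys = np.tile(np.arange(4), 6)
labels = np.repeat([0, 0, 1, 1, 2, 2], 4)         # colour class per column
V = column_vectors(xs, ys, labels, n=3, height_ratio_thresh=0.1)
print([v.tolist() for v in V])  # three groups: all-0, all-1, all-2 labels
```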
Step 104: converting the vector set V into an n × k dimensional feature matrix V_d, and converting the vector set U into an m × k dimensional feature matrix U_d, where k is the number of color categories.
Optionally, in this embodiment, converting the vector set V into an n × k dimensional feature matrix V_d includes: forming, from each vector v_i in the vector set V, a new vector v_i = (p_i1, …, p_ik) of color proportions, and composing all the vectors v_i, in subscript order, into the n × k dimensional feature matrix V_d. Converting the vector set U into an m × k dimensional feature matrix U_d includes: forming, from each vector u_i in the vector set U, a new vector u_i = (q_i1, …, q_ik) of color proportions, and composing all the vectors u_i, in subscript order, into the m × k dimensional feature matrix U_d. Concretely, for each vector v_i in the vector set V, the number of pixel points of each color category and the total number of pixel points of the vector are counted, and their ratios (i.e., the color proportions of the color categories) form the new vector v_i = (p_i1, …, p_ik); thus, the length of each new vector v_i equals the number of color categories, and all the vectors v_i, in subscript order, form the n × k dimensional feature matrix V_d. The m × k dimensional feature matrix U_d is obtained similarly, with the length of every new vector u_i likewise being the number of color categories. Converting the vector sets V and U into feature matrices greatly facilitates the computation in subsequent image recognition.
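The conversion of step 104 can be sketched as follows (function names are assumptions): each row of the resulting matrix is the colour-proportion histogram of one sub-region.

```python
import numpy as np

def to_feature_matrix(vectors, k):
    """Step 104: turn each one-dimensional colour vector v_i into the
    proportion vector (p_i1, ..., p_ik) and stack all of them, in
    subscript order, into the n x k feature matrix V_d."""
    rows = []
    for v in vectors:
        counts = np.bincount(np.asarray(v), minlength=k)
        rows.append(counts / counts.sum())        # colour proportions
    return np.vstack(rows)

V = [[0, 0, 1, 2], [1, 1, 1, 1], [2, 2, 0, 2]]    # toy colour vectors, k = 3
Vd = to_feature_matrix(V, k=3)
print(Vd)                  # each row sums to 1
print(Vd.shape)            # (3, 3)
```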
Step 105: clustering all the feature matrices corresponding to the figure outline images included in each frame of picture to be detected to obtain N cluster centers, the feature matrices of each cluster center being an n × k feature matrix V_c and an m × k feature matrix U_c.
Optionally, in this embodiment, the following clustering objective functions may be adopted to obtain various types of clustering centers:
Figure BDA0002822613400000081
when the clustering objective function is iterated to the minimum value, various clustering centers are obtained; the feature matrix of each cluster center is NiEach feature matrix D corresponding to the figure outline included in each frame of picture to be detectedjBelong to the firstMembership of class i as uijThe number of feature matrices is w, dpm,npmRespectively, feature matrix DjCluster center matrix N, matrix element of (p, m) th position;
$$\sum_{i=1}^{N} u_{ij} = 1,\qquad j = 1,\ldots,w$$
In this embodiment, the cluster centers of each class are obtained by iterative computation of the clustering objective function: in each iteration, the objective function value is evaluated after modifying the cluster centers obtained in the previous iteration, and when the objective function reaches its minimum value, the optimal clustering scheme (i.e. the cluster centers of each class) is obtained. The iterative approach not only yields accurate cluster centers but is also fast to compute.
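The iterative scheme described above resembles fuzzy C-means clustering; the sketch below is one possible reading of it (the fuzziness exponent b, the initialization from the first points, and the flat-vector representation of the feature matrices are all assumptions not specified by the patent):

```python
def fcm(points, n_clusters, b=2.0, iters=50):
    """Minimal fuzzy C-means sketch over flattened feature vectors.

    points: list of equal-length flat vectors; returns (centers, memberships).
    """
    centers = [list(p) for p in points[:n_clusters]]  # assumed initialization
    dim = len(points[0])
    u = []
    for _ in range(iters):
        # membership update: inverse-distance weights, normalized over clusters
        u = []
        for x in points:
            d = [max(sum((a - c) ** 2 for a, c in zip(x, cen)), 1e-12)
                 for cen in centers]
            w = [dd ** (-1.0 / (b - 1.0)) for dd in d]
            s = sum(w)
            u.append([wi / s for wi in w])
        # center update: mean of points weighted by membership^b
        for i in range(n_clusters):
            wsum = sum(u[j][i] ** b for j in range(len(points)))
            centers[i] = [
                sum(u[j][i] ** b * points[j][t] for j in range(len(points))) / wsum
                for t in range(dim)
            ]
    return centers, u
```

Each membership row sums to 1, matching the usual fuzzy-clustering constraint; running the loop for a fixed number of iterations stands in for iterating the objective function to its minimum.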
Step 106: respectively calculating the similarity vs between the feature matrix Vc of each cluster center and any element of the standard matrix set Dv, and the similarity us between the feature matrix Uc and any element of the standard matrix set Du.
Step 107: determining, according to the similarity calculation results vs and us, whether the picture to be detected contains the target person.
The standard matrix sets Dv and Du are used for characterizing reference person features. The standard matrix sets Dv and Du may be obtained by applying the processing of steps 101 to 105 to extracted reference person outlines. For example, in a canteen safety supervision application, the reference person outlines are extracted from images acquired while canteen staff correctly wear their work clothes; the reference person outlines may be captured at various angles and under different environmental conditions, so as to improve recognition accuracy.
Specifically, the similarity vs between Vc and any element Dvi of the standard matrix set Dv = {Dv1, ..., Dvw} is calculated, and the similarity us between Uc and the element Dui of the standard matrix set Du = {Du1, ..., Duw} corresponding to Dvi is calculated. As described above, Dvi and Dui should appear in pairs; each result is recorded as (vs, us), yielding a similarity set {(vs, us)}. Optionally, the similarities vs and us in step 106 are calculated as follows:
For any two n × m dimensional matrices D1, D2, each is flattened column by column into a one-dimensional vector (d1 and d2, respectively) of length n × m, and the cosine of the angle between the two vectors is taken as the similarity, according to the formula:
$$\cos\theta = \frac{d_{1}\cdot d_{2}}{\lVert d_{1}\rVert\,\lVert d_{2}\rVert} = \frac{\sum_{t} d_{1t}\, d_{2t}}{\sqrt{\sum_{t} d_{1t}^{2}}\,\sqrt{\sum_{t} d_{2t}^{2}}}$$
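Assuming the column-by-column flattening described above, the cosine similarity could be computed as in this sketch (matrices are plain lists of rows; not the patent's own implementation):

```python
import math

def cosine_similarity(m1, m2):
    """Flatten two equal-shape matrices column by column and return the
    cosine of the angle between the resulting vectors."""
    d1 = [m1[r][c] for c in range(len(m1[0])) for r in range(len(m1))]
    d2 = [m2[r][c] for c in range(len(m2[0])) for r in range(len(m2))]
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    return dot / (n1 * n2)
```

Identical matrices give a similarity of 1, and matrices with no overlapping nonzero entries give 0.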
Optionally, determining, according to the similarity calculation results, whether the picture to be detected contains the target person may include: if the similarity vs is less than a first similarity threshold and the similarity us is less than a second similarity threshold, determining that the picture to be detected contains the target person. Specifically, when both similarities in some pair of the similarity set exceed their respective thresholds, the person outline contained in the picture to be detected is judged to be a school canteen worker; otherwise, a target person, i.e. an abnormal person, is judged to be detected, who may be a canteen worker not wearing the work clothes correctly or a stranger. The first similarity threshold and the second similarity threshold are obtained by feature training on the target person in a given scene; they are empirical constants and change as the training data is updated. By way of example and not limitation, the first and second similarity thresholds may be calculated as follows: collect a large number of standard sets; calculate the similarities between workers in a given scene and workers in other scenes, and then the similarities between strangers and workers in that scene; obtain and compare the respective value ranges of the two kinds of similarity; the center of the intersection of the two ranges may then be taken as the similarity threshold.
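The decision rule and the threshold construction described above can be sketched as follows. The helper `threshold_from_ranges` is hypothetical: it reads the "center of the intersection" as the midpoint between the lowest worker similarity and the highest stranger similarity, assuming workers score high and strangers low.

```python
def is_target_person(vs, us, t1, t2):
    """Below both thresholds -> clothing does not match the reference
    -> the (abnormal) target person is detected."""
    return vs < t1 and us < t2

def threshold_from_ranges(worker_sims, stranger_sims):
    """Hypothetical helper: midpoint between the lowest worker similarity
    and the highest stranger similarity."""
    lo = min(worker_sims)
    hi = max(stranger_sims)
    return (lo + hi) / 2.0
```

With worker similarities of 0.86–0.97 and stranger similarities of 0.55–0.60, this midpoint rule would place the threshold at 0.73.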
Generally, each frame of the picture to be detected may contain a large number of person outline maps, for example more than 10. Clustering the feature matrices corresponding to all person outline maps in each frame yields, for each cluster center, an n × k feature matrix Vc and an m × k feature matrix Uc. Comparing the feature matrices Vc and Uc of each cluster center with the elements of the standard matrix sets then makes it convenient to set the respective thresholds for the similarities vs and us, so that the target person can be identified more accurately.
Optionally, in this embodiment, before classifying the RGB coordinates of the pixel points in the person outline region of each person outline map, the method may further include: performing brightness normalization processing on the person outline maps in the person outline map set. Further, the following formula can be used to perform the brightness normalization:
$$x' = \frac{x - \min}{\max - \min}$$
wherein x is the coordinate of any one of the RGB channels of any pixel point in the person outline map, and max and min are respectively the maximum and minimum values of the channel in which x lies.
Real-time changes in the brightness of ambient light cause the same color to image very differently; for example, white appears white under high brightness but gray under low brightness. Here max and min are the maximum and minimum color coordinates of each RGB channel within each person outline map, and may be obtained by methods known to those skilled in the art, for example by means of marked points, which is not described again here. By performing brightness normalization on the person outline map, this embodiment eliminates the influence of ambient light brightness on the recognition result and improves the accuracy of image recognition. Moreover, the normalization method is simple and efficient.
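A minimal sketch of the per-channel min-max normalization, assuming each RGB channel of a person outline is available as a flat list of values (the flat-channel guard is an added assumption to avoid division by zero):

```python
def normalize_channel(values):
    """Min-max normalize one RGB channel of a person outline to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:               # flat channel: avoid division by zero
        return [0.0] * len(values)
    return [(x - lo) / (hi - lo) for x in values]
```

Applying this to each of the three channels of a person outline removes the dependence of the color coordinates on the overall brightness of the scene.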
The accuracy of identifying a target person by an embodiment of the invention is illustrated by the following numerical example:
As shown in fig. 2 and 3, person outline maps were extracted and normalized for a stranger (a non-canteen worker) and for a canteen worker wearing the work clothes. The color classifier adopted in this embodiment has the color categories {white, black, other}, corresponding respectively to the work clothes, the apron, and other colors. The vector sets V and U each contain 6 elements. The feature matrices of the stranger and the canteen worker are shown in the tables below. The similarity results obtained by the image processing method of this embodiment are:
a stranger: [(0.546,0.501),(0.493,0.371),(0.493,0.311)],
school canteen workers: [(0.967,0.977),(0.862,0.737),(0.862,0.733)],
With the first and second similarity thresholds set to (0.85, 0.85), the embodiment of the invention accurately identifies the stranger by comparing the similarity calculation results against the respective thresholds.
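Plugging the published similarity pairs into the 0.85 thresholds reproduces the reported outcome; the sketch below assumes that a single cluster pair exceeding both thresholds suffices to accept a person as a worker (the patent's wording leaves this aggregation rule implicit):

```python
stranger = [(0.546, 0.501), (0.493, 0.371), (0.493, 0.311)]
worker = [(0.967, 0.977), (0.862, 0.737), (0.862, 0.733)]
T1 = T2 = 0.85

def classify(sim_pairs, t1, t2):
    """Accept as a worker if any (vs, us) pair exceeds both thresholds;
    otherwise an abnormal (target) person is detected."""
    if any(vs > t1 and us > t2 for vs, us in sim_pairs):
        return "worker"
    return "abnormal"
```

Every stranger pair falls below both thresholds, while the worker's first pair (0.967, 0.977) exceeds them, so the two cases are separated correctly.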
Stranger feature matrix Vc
0.0 0.5 0.5
0.1 0.2 0.7
0.1 0.15 0.75
0.2 0.2 0.6
0.0 0.3 0.7
0.0 0.0 1.0
Stranger feature matrix Uc
0.3 0.1 0.6
0.1 0.0 0.9
0.16 0.01 0.83
0.02 0.0 0.98
0.0 0.9 0.1
0.0 1.0 0.0
Canteen worker feature matrix Vc
0.42 0.0 0.58
0.3 0.7 0.0
0.15 0.79 0.06
0.08 0.8 0.12
0.13 0.84 0.03
0.56 0.0 0.44
Canteen worker feature matrix Uc
(The values of this matrix appear only as an image in the original publication.)
Compared with the prior art, the embodiment of the invention performs color classification on the person outline region, partitions the person outline region to obtain the color vector set V and the vector set U corresponding to the person outline map, converts the vector sets V and U into the feature matrices Vd and Ud respectively, clusters the feature matrices corresponding to all person outline maps contained in each frame of the picture to be detected to obtain the n × k feature matrix Vc and the m × k feature matrix Uc of each cluster center, and then respectively calculates the similarities between the feature matrices Vc, Uc and any element of the standard matrix sets Dv, Du, so as to determine from the similarity results whether the picture to be detected contains the target person. By analyzing and comparing the color categories of the person outline regions and the corresponding position regions, the embodiment of the invention can thus determine whether persons whose clothing does not match the standard clothing of canteen workers enter or leave the canteen, which not only remarkably improves the real-time performance and safety of canteen supervision but also reduces the workload of supervisors.
An embodiment of the present invention further provides an image processing apparatus 400, where the apparatus 400 may be configured in a server, and as shown in fig. 4, the apparatus 400 includes:
the extraction module 401 is configured to extract a person silhouette image from the to-be-detected picture by using a target detection model trained based on R-CNN, so as to obtain a person silhouette image set included in each frame of to-be-detected picture.
And the color classification module 402 is configured to classify the RGB coordinates of the pixel points in the character outline area in each character outline image, so as to obtain the color classification of the pixel points in the character outline area.
A vector conversion module 403, configured to divide the human outline area into n sub-areas along an x-axis direction of a spatial coordinate of the human outline image, so as to obtain a vector set V corresponding to the n sub-areas, and divide the human outline area into m sub-areas along a y-axis direction of the spatial coordinate of the human outline image, so as to obtain a vector set U corresponding to the m sub-areas; wherein, the elements of the vector set V are one-dimensional color vectors respectively composed of pixel points in each sub-region of the n sub-regions; the elements of the vector set U are one-dimensional color vectors respectively formed by pixel points in each sub-area of the m sub-areas; n and m are positive integers greater than 1.
A feature matrix conversion module 404, configured to convert the vector set V into an n × k dimensional feature matrix Vd and convert the vector set U into an m × k dimensional feature matrix Ud; wherein k is the number of the color categories.
The clustering module 405 is configured to cluster all feature matrices corresponding to the person outline maps contained in each frame of the picture to be detected to obtain N cluster centers; the feature matrices of each cluster center are an n × k feature matrix Vc and an m × k feature matrix Uc.
A similarity calculation module 406, configured to respectively calculate the similarity vs between the feature matrix Vc of each cluster center and any element of the standard matrix set Dv, and the similarity us between the feature matrix Uc and any element of the standard matrix set Du.
A determining module 407, configured to determine, according to the similarity calculation results vs and us, that the picture to be detected contains the target person; wherein the standard matrix sets Dv and Du are used for characterizing reference person features.
Alternatively, the clustering module 405 may obtain the cluster centers of each class by using the following clustering objective function:
$$J = \sum_{i=1}^{N}\sum_{j=1}^{w} u_{ij}\sum_{p}\sum_{m}\left(d_{pm}-n_{pm}\right)^{2}$$
wherein, when the clustering objective function is iterated to its minimum value, the cluster centers of each class are obtained; the feature matrix of the i-th cluster center is Ni, the membership degree of each feature matrix Dj (corresponding to a person outline map contained in each frame of the picture to be detected) to class i is uij, the number of feature matrices is w, and dpm, npm are the matrix elements at the (p, m)-th position of the feature matrix Dj and the cluster center matrix Ni, respectively;
$$\sum_{i=1}^{N} u_{ij} = 1,\qquad j = 1,\ldots,w$$
Optionally, the vector conversion module 403 is configured to divide the person outline region into n equal parts along the x-axis direction to obtain the n sub-regions, and to divide the person outline region into m equal parts along the y-axis direction to obtain the m sub-regions.
Optionally, the apparatus 400 may further include:
and the denoising module is used for removing noise pixel points in the figure outline region along the x-axis direction before the vector set V corresponding to the n sub-regions is obtained, and removing noise pixel points in the figure outline region along the y-axis direction before the vector set U corresponding to the m sub-regions is obtained.
Optionally, the denoising module is specifically configured to classify the pixels in the human figure contour region according to the x-axis coordinate value and the color category of the pixels to obtain the number of the pixels in each category, obtain a pixel height ratio of the number of the pixels in each category to the maximum number of the pixels in the y-axis of the human figure contour map, and delete the pixels in the category of which the pixel height ratio is smaller than the pixel height ratio threshold; and classifying the pixels in the figure outline region according to the y-axis coordinate value of the pixels and the color category to obtain the number of the pixels in each category, obtaining the pixel length ratio of the number of the pixels in each category to the maximum number of the pixels in the x axis of the figure outline image, and deleting the pixels in the categories of which the pixel length ratio is smaller than the pixel length ratio threshold.
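One possible reading of the x-direction denoising step, assuming pixels are given as (x, y, color) triples and "pixel height ratio" means a group's pixel count divided by the tallest pixel column of the outline (both assumptions; the y-direction case would be symmetric):

```python
from collections import Counter

def denoise_x(pixels, height_ratio_threshold):
    """Drop (x, color) groups whose pixel count is small relative to the
    tallest column of the person outline."""
    counts = Counter((x, c) for x, y, c in pixels)
    # tallest column: maximum number of pixels sharing one x coordinate
    max_height = max(Counter(x for x, y, c in pixels).values())
    keep = {g for g, n in counts.items() if n / max_height >= height_ratio_threshold}
    return [(x, y, c) for x, y, c in pixels if (x, c) in keep]
```

A lone off-color pixel in an otherwise tall column is removed, while the dominant color groups of the outline survive.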
Optionally, the determining module 407 is configured to determine that the picture to be detected contains the target person if the similarity vs is less than a first similarity threshold and the similarity us is less than a second similarity threshold.
Optionally, the feature matrix conversion module 404 is configured to form, for each vector vi in the vector set V, a new vector vi=(pi1,···,pik) from its color proportions and arrange all vectors vi in subscript order to form the n × k dimensional feature matrix Vd; and to form, for each vector ui in the vector set U, a new vector ui=(qi1,···,qik) from its color proportions and arrange all vectors ui in subscript order to form the m × k dimensional feature matrix Ud.
Optionally, the apparatus 400 may further include:
A normalization module, configured to perform brightness normalization processing on the person outline maps in the person outline map set before the RGB coordinates of the pixel points in the person outline region are classified. Further, the normalization module is specifically configured to perform the brightness normalization using the following formula:
$$x' = \frac{x - \min}{\max - \min}$$
wherein x is the coordinate of any one of the RGB channels of any pixel point in the person outline map, and max and min are respectively the maximum and minimum values of the channel in which x lies.
Compared with the prior art, the image processing apparatus of the embodiment of the invention performs color classification on the person outline region, partitions the person outline region to obtain the color vector set V and the vector set U corresponding to the person outline map, converts the vector sets V and U into the feature matrices Vd and Ud respectively, clusters the feature matrices corresponding to all person outline maps contained in each frame of the picture to be detected to obtain the n × k feature matrix Vc and the m × k feature matrix Uc of each cluster center, and then respectively calculates the similarities between the feature matrices Vc, Uc and any element of the standard matrix sets Dv, Du, so as to determine from the similarity results whether the picture to be detected contains the target person. By analyzing and comparing the color categories of the person outline regions and the corresponding position regions, the embodiment of the invention can thus determine whether persons whose clothing does not match the standard clothing of canteen workers enter or leave the canteen, which not only remarkably improves the real-time performance and safety of canteen supervision but also reduces the workload of supervisors.
An embodiment of the invention further provides a computer device. As shown in fig. 5, the device includes: a memory 502 and a processor 501;
the memory 502 stores instructions executable by the at least one processor 501, the instructions being executable by the at least one processor 501 to implement the image processing method of the foregoing embodiments.
The computer device includes one or more processors 501 and a memory 502, one processor 501 being taken as an example in fig. 5. The processor 501 and the memory 502 may be connected by a bus or other means, and fig. 5 illustrates the connection by the bus as an example. The memory 502, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 501 executes various functional applications of the apparatus and data processing, i.e., implements the above-described image processing method, by executing nonvolatile software programs, instructions, and modules stored in the memory 502.
The memory 502 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
One or more modules are stored in the memory 502, and when executed by the one or more processors 501, perform the image processing method of any of the method embodiments described above.
The computer device of this embodiment performs color classification on the person outline region, partitions the person outline region to obtain the color vector set V and the vector set U corresponding to the person outline map, then converts the vector sets V and U into the feature matrices Vd and Ud respectively, clusters the feature matrices corresponding to all person outline maps contained in each frame of the picture to be detected to obtain the n × k feature matrix Vc and the m × k feature matrix Uc of each cluster center, and then respectively calculates the similarities between the feature matrices Vc, Uc and any element of the standard matrix sets Dv, Du, so as to determine from the similarity results whether the picture to be detected contains the target person. By analyzing and comparing the color categories of the person outline regions and the corresponding position regions, the embodiment of the invention can thus determine whether persons whose clothing does not match the standard clothing of canteen workers enter or leave the canteen, which not only remarkably improves the real-time performance and safety of canteen supervision but also reduces the workload of supervisors.
The above-mentioned device can execute the method provided by the embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the execution method, and reference may be made to the method provided by the embodiment of the present invention for technical details that are not described in detail in the embodiment.
An embodiment of the present application further provides a non-volatile storage medium for storing a computer-readable program, where the computer-readable program is used for a computer to execute some or all of the above method embodiments.
That is, those skilled in the art can understand that all or part of the steps in the method according to the above embodiments may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, etc.) or a processor (processor) to execute all or part of the steps in the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. An image processing method, comprising:
extracting a figure outline image from the picture to be detected by using a target detection model trained based on R-CNN to obtain a figure outline image set contained in each frame of picture to be detected;
classifying the RGB coordinates of the pixel points in the figure outline area in each figure outline image to obtain the color category of the pixel points in the figure outline area;
dividing the figure outline area into n sub-areas along the x-axis direction of the space coordinate of the figure outline image to obtain a vector set V corresponding to the n sub-areas, and dividing the figure outline area into m sub-areas along the y-axis direction of the space coordinate of the figure outline image to obtain a vector set U corresponding to the m sub-areas; wherein, the elements of the vector set V are one-dimensional color vectors respectively composed of pixel points in each sub-region of the n sub-regions; the elements of the vector set U are one-dimensional color vectors respectively formed by pixel points in each sub-area of the m sub-areas; n and m are positive integers greater than 1;
converting the vector set V into an n × k dimensional feature matrix Vd, and converting the vector set U into an m × k dimensional feature matrix Ud; wherein k is the number of the color categories;
clustering all feature matrices corresponding to the person outline maps contained in each frame of picture to be detected to obtain N cluster centers; the feature matrices of each cluster center being an n × k feature matrix Vc and an m × k feature matrix Uc;
respectively calculating a similarity vs between the feature matrix Vc of each cluster center and any element of a standard matrix set Dv, and a similarity us between the feature matrix Uc and any element of a standard matrix set Du, and determining, according to the similarity calculation results vs and us, that the picture to be detected contains a target person; wherein the standard matrix sets Dv and Du are used for characterizing reference person features.
2. The image processing method according to claim 1, wherein the following clustering objective function is used to obtain the cluster centers of each class:
$$J = \sum_{i=1}^{N}\sum_{j=1}^{w} u_{ij}\sum_{p}\sum_{m}\left(d_{pm}-n_{pm}\right)^{2}$$
when the clustering objective function is iterated to its minimum value, the cluster centers of each class are obtained; the feature matrix of the i-th cluster center is Ni, the membership degree of each feature matrix Dj (corresponding to a person outline map contained in each frame of picture to be detected) to class i is uij, the number of feature matrices is w, and dpm, npm are the matrix elements at the (p, m)-th position of the feature matrix Dj and the cluster center matrix Ni, respectively;
$$\sum_{i=1}^{N} u_{ij} = 1,\qquad j = 1,\ldots,w$$
3. the image processing method as claimed in claim 1, wherein said dividing the human figure outline region into n sub-regions along the x-axis direction of the spatial coordinates of the human figure outline image comprises:
dividing the figure outline area into n parts along the x-axis direction in an equidistant mode to obtain n sub-areas;
the dividing the figure outline area into m sub-areas along the y-axis direction of the space coordinate of the figure outline image comprises the following steps:
dividing the figure outline area into m parts along the y-axis direction in an equidistant manner to obtain the m sub-areas.
4. The image processing method according to claim 3, further comprising, before the obtaining the vector set V corresponding to the n sub-regions:
removing noise pixel points in the figure outline region along the x-axis direction;
before the obtaining of the vector set U corresponding to the m sub-regions, the method further includes:
and removing noise pixel points in the figure outline region along the y-axis direction.
5. The image processing method of claim 4, wherein the removing of the noise pixel points in the human figure outline region along the x-axis direction comprises:
classifying the pixels in the figure outline region according to the x-axis coordinate value of the pixels and the color category to obtain the number of the pixels in each category, obtaining the pixel height ratio of the number of the pixels in each category to the maximum number of the pixels in the y axis of the figure outline image, and deleting the pixels in the categories of which the pixel height ratio is smaller than the pixel height ratio threshold;
the removing of the noise pixel points in the figure outline region along the y-axis direction includes:
classifying the pixels in the figure outline region according to the y-axis coordinate value of the pixels and the color category to obtain the number of the pixels in each category, obtaining the pixel length ratio of the number of the pixels in each category to the maximum number of the pixels in the x axis of the figure outline image, and deleting the pixels in the categories of which the pixel length ratio is smaller than the pixel length ratio threshold.
6. The image processing method according to claim 1, wherein determining, according to the similarity calculation results vs and us, whether the picture to be detected contains the target person comprises:
if the similarity vs is less than a first similarity threshold and the similarity us is less than a second similarity threshold, determining that the picture to be detected contains the target person.
7. The image processing method according to claim 1, wherein the converting the vector set V into an n × k dimensional feature matrix Vd comprises:
forming, for each vector vi in the vector set V, a new vector vi=(pi1,···,pik) from its color proportions, and arranging all vectors vi in subscript order to form the n × k dimensional feature matrix Vd;
the converting the vector set U into an m × k dimensional feature matrix Ud comprises:
forming, for each vector ui in the vector set U, a new vector ui=(qi1,···,qik) from its color proportions, and arranging all vectors ui in subscript order to form the m × k dimensional feature matrix Ud.
8. The image processing method as claimed in claim 1, further comprising, before said classifying RGB coordinates of pixel points in the human figure outline region in each of the human figure outline maps:
and performing brightness normalization processing on the figure outline in the figure outline set by adopting the following formula:
$$x' = \frac{x - \min}{\max - \min}$$
wherein, x is the coordinate of any channel in the RGB channel of any pixel point in the figure outline image, and max and min are the maximum value and the minimum value of the channel where x is located respectively.
9. An image processing apparatus characterized by comprising:
the extraction module is used for extracting a figure outline image from the picture to be detected by using a target detection model trained based on R-CNN to obtain a figure outline image set contained in each frame of picture to be detected;
the color classification module is used for classifying the RGB coordinates of the pixel points in the figure outline area in each figure outline image to obtain the color category of the pixel points in the figure outline area;
the vector conversion module is used for dividing the figure outline region into n sub-regions along the x-axis direction of the space coordinate of the figure outline image to obtain a vector set V corresponding to the n sub-regions, and dividing the figure outline region into m sub-regions along the y-axis direction of the space coordinate of the figure outline image to obtain a vector set U corresponding to the m sub-regions; wherein, the elements of the vector set V are one-dimensional color vectors respectively composed of pixel points in each sub-region of the n sub-regions; the elements of the vector set U are one-dimensional color vectors respectively formed by pixel points in each sub-area of the m sub-areas; n and m are positive integers greater than 1;
a feature matrix conversion module, configured to convert the vector set V into an n × k dimensional feature matrix Vd and convert the vector set U into an m × k dimensional feature matrix Ud; wherein k is the number of the color categories;
a clustering module, configured to cluster all feature matrices corresponding to the person outline maps contained in each frame of picture to be detected to obtain N cluster centers; the feature matrices of each cluster center being an n × k feature matrix Vc and an m × k feature matrix Uc;
a similarity calculation module, configured to respectively calculate a similarity vs between the feature matrix Vc of each cluster center and any element of a standard matrix set Dv, and a similarity us between the feature matrix Uc and any element of a standard matrix set Du;
a determining module, configured to determine, according to the similarity calculation results vs and us, that the picture to be detected contains a target person; wherein the standard matrix sets Dv and Du are used for characterizing reference person features.
10. A computer device, comprising: a memory storing a computer program and a processor running the computer program to implement the image processing method of any one of claims 1 to 8.
CN202011421618.7A 2020-12-08 2020-12-08 Image processing method, device, computer equipment and medium Active CN112464850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011421618.7A CN112464850B (en) 2020-12-08 2020-12-08 Image processing method, device, computer equipment and medium


Publications (2)

Publication Number Publication Date
CN112464850A true CN112464850A (en) 2021-03-09
CN112464850B CN112464850B (en) 2024-02-09

Family

ID=74800790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011421618.7A Active CN112464850B (en) 2020-12-08 2020-12-08 Image processing method, device, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN112464850B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011248702A (en) * 2010-05-28 2011-12-08 Sharp Corp Image processing device, image processing method, image processing program, and program storage medium
US20170351708A1 (en) * 2016-06-06 2017-12-07 Think-Cell Software Gmbh Automated data extraction from scatter plot images
CN110610453A (en) * 2019-09-02 2019-12-24 腾讯科技(深圳)有限公司 Image processing method and device and computer readable storage medium
CN110991465A (en) * 2019-11-15 2020-04-10 泰康保险集团股份有限公司 Object identification method and device, computing equipment and storage medium
CN111709483A (en) * 2020-06-18 2020-09-25 山东财经大学 Multi-feature-based super-pixel clustering method and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUN-RU DONG, ET AL.: "An improved differential evolution and its application to determining feature weights in similarity based clustering", 2013 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS *
DENG, QIUJUN: "Research on Object Classification and Recognition Algorithm Based on Contour Corner Features", MODERN COMPUTER, no. 22 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807229A (en) * 2021-09-13 2021-12-17 深圳市巨龙创视科技有限公司 Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom
CN115047442A (en) * 2022-03-21 2022-09-13 珠海格力电器股份有限公司 Point cloud data processing method and device, electronic equipment and storage medium
CN114998614A (en) * 2022-08-08 2022-09-02 浪潮电子信息产业股份有限公司 Image processing method, device and equipment and readable storage medium
CN114998614B (en) * 2022-08-08 2023-01-24 浪潮电子信息产业股份有限公司 Image processing method, device and equipment and readable storage medium
CN117079058A (en) * 2023-10-11 2023-11-17 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic equipment
CN117079058B (en) * 2023-10-11 2024-01-09 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN112464850B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
Yousif et al. Fast human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification
CN112464850B (en) Image processing method, device, computer equipment and medium
Rachmadi et al. Vehicle color recognition using convolutional neural network
CN104166841B (en) The quick detection recognition methods of pedestrian or vehicle is specified in a kind of video surveillance network
CN109145742B (en) Pedestrian identification method and system
JP6192271B2 (en) Image processing apparatus, image processing method, and program
CN106933816A (en) Across camera lens object retrieval system and method based on global characteristics and local feature
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
Seong et al. Vision-based safety vest detection in a construction scene
CN111353338B (en) Energy efficiency improvement method based on business hall video monitoring
Klare et al. Background subtraction in varying illuminations using an ensemble based on an enlarged feature set
Bhuiyan et al. Person re-identification by discriminatively selecting parts and features
Masmoudi et al. Vision based system for vacant parking lot detection: Vpld
CN114863464A (en) Second-order identification method for PID drawing picture information
Yang et al. The system of detecting safety helmets based on YOLOv5
Rangdal et al. Animal detection using histogram oriented gradient
Soltani et al. Euclidean distance versus Manhattan distance for skin detection using the SFA database
CN107341456B (en) Weather sunny and cloudy classification method based on single outdoor color image
Padmashini et al. Vision based algorithm for people counting using deep learning
Pucci et al. WhoAmI: An automatic tool for visual recognition of tiger and leopard individuals in the wild
Patravali et al. Skin segmentation using YCBCR and RGB color models
Chung et al. Face detection and posture recognition in a real time tracking system
Shehnaz et al. An object recognition algorithm with structure-guided saliency detection and SVM classifier
CN115719469A (en) Target identification method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant