CN113158732A - Image processing method and related device

Image processing method and related device

Info

Publication number
CN113158732A
Authority
CN
China
Prior art keywords
image, feature data, sub-images, target
Prior art date
Legal status
Pending
Application number
CN202011642494.5A
Other languages
Chinese (zh)
Inventor
余世杰
陈浩彬
刘凯鉴
陈大鹏
赵瑞
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN202011642494.5A
Publication of CN113158732A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 - Local feature extraction by matching or filtering
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Multimedia
  • Human Computer Interaction
  • Geometry
  • Image Analysis

Abstract

The embodiment of the application provides an image processing method and a related device, wherein the method comprises the following steps: performing feature extraction on an image to be detected to obtain a pieces of first sub-feature data, wherein the first sub-feature data are feature data of unoccluded human body images; determining t first images from a first database according to the a pieces of first sub-feature data, wherein the first images comprise feature data matched with the a pieces of first sub-feature data; performing processing at least according to the t first images to obtain a target image corresponding to the image to be detected; and matching in a second database according to the target image to obtain a matching image corresponding to the target image, so that the accuracy in image matching can be improved.

Description

Image processing method and related device
Technical Field
The present application relates to the field of image data processing technologies, and in particular, to an image processing method and a related apparatus.
Background
With people's increasing demand for safety, city security has become very important. Pedestrian re-identification aims to, given a picture of a certain pedestrian, accurately find other pictures of that pedestrian in a picture database, so as to identify the pedestrian. With the rise of deep learning and convolutional neural networks in recent years, the field of pedestrian re-identification has developed vigorously and has achieved remarkable results in conventional pedestrian retrieval. However, some problems remain. One of them is that pedestrians are easily blocked by obstacles in real scenes; because of the obstacles, the features finally extracted by the neural network are corrupted, and the accuracy in image matching is low.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related device, which can improve the accuracy in image matching.
A first aspect of embodiments of the present application provides an image processing method, where the method includes:
performing feature extraction on an image to be detected to obtain a pieces of first sub-feature data, wherein the first sub-feature data are feature data of unoccluded human body images;
determining t first images from a first database according to the a pieces of first sub-feature data, wherein the first images comprise feature data matched with the a pieces of first sub-feature data;
processing at least according to the t first images to obtain a target image corresponding to the image to be detected;
and matching in a second database according to the target image to obtain a matched image corresponding to the target image.
In this example, feature extraction is performed on an image to be detected to obtain a pieces of first sub-feature data, where the first sub-feature data are feature data of unoccluded human body images; t first images are determined from a first database according to the a pieces of first sub-feature data, where the first images include feature data matched with the a pieces of first sub-feature data; a target image corresponding to the image to be detected is obtained by processing at least the t first images; and a matching image corresponding to the target image is obtained by matching in a second database according to the target image. Compared with existing schemes, whose accuracy is low when matching an occluded image to be detected, the t first images are obtained through the feature data of the unoccluded human body images in the image to be detected, and the target image corresponding to the image to be detected is obtained through the t first images. The target image thus carries the information of the unoccluded human body images in the image to be detected, and matching according to the target image to obtain the matching image improves the accuracy of obtaining the matching image.
With reference to the first aspect, in a possible implementation manner, the determining, according to the a first sub-feature data, t first images from a first database includes:
acquiring K images corresponding to each first feature data in the a first feature data in a first database to obtain a first image sets;
and acquiring the t first images, wherein the first images are images present in all of the a first image sets.
In this example, the a first image sets corresponding to the a first feature data are obtained from the first database, and the t first images are obtained from the a first image sets, where the first images are images present in all of the a first image sets, so that the accuracy in obtaining the t first images can be improved.
With reference to the first aspect, in a possible implementation manner, the processing at least according to the t first images to obtain a target image corresponding to the image to be detected includes:
performing feature extraction on the t first images to obtain t reference feature data;
determining target characteristic data according to the t reference characteristic data;
performing feature extraction on the image to be detected to obtain b second sub-feature data, wherein the second sub-feature data are feature data of a shielded human body image;
acquiring feature data corresponding to the b second sub-feature data in the target feature data to obtain b third sub-feature data;
and processing the image to be detected according to the b third sub-characteristic data to obtain the target image.
In this example, the target image is obtained by processing b third sub-feature data corresponding to b second sub-feature data in the image to be detected, where the b third sub-feature data is determined from the target feature data determined by the t reference feature data, and may carry information of the t reference feature data, so as to improve accuracy in determining the target image.
With reference to the first aspect, in a possible implementation manner, the processing the image to be detected according to the b third sub-feature data to obtain the target image includes:
replacing the b second sub-feature data in the feature data of the image to be detected with the b third sub-feature data to obtain the feature data of the image to be detected after replacement;
and determining the image corresponding to the replaced characteristic data as the target image.
In this example, the b second sub-feature data in the image to be detected are replaced with the b third sub-feature data, so that the feature data of the occluded human body images in the image to be detected are replaced; the target image is thus an unoccluded human body image, which improves the accuracy in subsequent image matching.
With reference to the first aspect, in a possible implementation manner, the determining target feature data according to the t reference feature data includes:
and determining the mean value of the t reference characteristic data as the target characteristic data.
With reference to the first aspect, in a possible implementation manner, the processing at least according to the t first images to obtain a target image corresponding to the image to be detected includes:
performing feature extraction on the t first images to obtain t reference feature data;
determining target characteristic data according to the t reference characteristic data;
and determining the image corresponding to the target characteristic data as the target image.
In this example, the image corresponding to the target feature data determined from the t reference feature data is determined as the target image; since the t first images are unoccluded images, the target image is also an unoccluded image, which improves the accuracy in subsequent image matching.
With reference to the first aspect, in a possible implementation manner, the performing feature extraction on the image to be detected to obtain a pieces of first sub-feature data includes:
extracting the characteristics of the image to be detected to obtain n local characteristic data;
determining human body region information in the image to be detected according to a human body semantic segmentation method;
determining sub-human body region information corresponding to each local characteristic data according to the human body region information;
and determining the a first sub-characteristic data from the n local characteristic data according to the sub-human body region information corresponding to each local characteristic data.
In the example, n local feature data are obtained by performing feature extraction on the image to be detected, and a first sub-feature data are determined according to the sub-human body region information, so that the accuracy of acquiring the first sub-feature data is improved.
With reference to the first aspect, in a possible implementation manner, the determining, according to the sub-human body region information corresponding to each piece of local feature data, the a pieces of first sub-feature data from the n pieces of local feature data includes:
acquiring the proportion value of the area corresponding to the sub-human-body region information to the area of the region corresponding to the corresponding local feature data, to obtain n human body area proportion values;
acquiring, from the n human body area proportion values, the a human body area proportion values that are higher than a preset area proportion value;
and determining the local feature data corresponding to the a human body area proportion values as the a first sub-feature data.
In this example, the local feature data corresponding to human body area proportion values higher than the preset area proportion value are determined as the a first sub-feature data; a human body area proportion value higher than the preset area proportion value indicates that the human body in that region is not occluded, which improves the accuracy in determining the first sub-feature data.
With reference to the first aspect, in one possible implementation manner, the method further includes:
and determining the b second sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data, wherein the sum of a and b is n.
With reference to the first aspect, in one possible implementation manner, the method further includes:
and carrying out target identification according to the matched image to obtain a target identification result.
In this example, the target recognition is performed according to the matching image to obtain the target recognition result, and since the accuracy of the matching image is improved, the accuracy of the target recognition result is higher.
A second aspect of an embodiment of the present application provides an image processing apparatus, including:
an extraction unit, configured to perform feature extraction on an image to be detected to obtain a pieces of first sub-feature data, wherein the first sub-feature data are feature data of unoccluded human body images;
a determining unit, configured to determine t first images from a first database according to the a pieces of first sub-feature data, where the first images include feature data matched with the a pieces of first sub-feature data;
the processing unit is used for processing at least according to the t first images to obtain a target image corresponding to the image to be detected;
and the matching unit is used for matching in a second database according to the target image to obtain a matching image corresponding to the target image.
With reference to the second aspect, in one possible implementation manner, the determining unit is configured to:
acquiring K images corresponding to each first feature data in the a first feature data in a first database to obtain a first image sets;
and acquiring the t first images, wherein the first images are images present in all of the a first image sets.
With reference to the second aspect, in one possible implementation manner, the processing unit is configured to:
performing feature extraction on the t first images to obtain t reference feature data;
determining target characteristic data according to the t reference characteristic data;
performing feature extraction on the image to be detected to obtain b second sub-feature data, wherein the second sub-feature data are feature data of a shielded human body image;
acquiring feature data corresponding to the b second sub-feature data in the target feature data to obtain b third sub-feature data;
and processing the image to be detected according to the b third sub-characteristic data to obtain the target image.
With reference to the second aspect, in a possible implementation manner, in terms of processing the image to be detected according to the b third sub feature data to obtain the target image, the processing unit is configured to:
replacing the b second sub-feature data in the feature data of the image to be detected with the b third sub-feature data to obtain the feature data of the image to be detected after replacement;
and determining the image corresponding to the replaced characteristic data as the target image.
With reference to the second aspect, in a possible implementation manner, in the determining target feature data according to the t reference feature data, the processing unit is configured to:
and determining the mean value of the t reference characteristic data as the target characteristic data.
With reference to the second aspect, in a possible implementation manner, in terms of processing at least according to the t first images to obtain a target image corresponding to the image to be detected, the processing unit is configured to:
performing feature extraction on the t first images to obtain t reference feature data;
determining target characteristic data according to the t reference characteristic data;
and determining the image corresponding to the target characteristic data as the target image.
With reference to the second aspect, in one possible implementation manner, the extracting unit is configured to:
extracting the characteristics of the image to be detected to obtain n local characteristic data;
determining human body region information in the image to be detected according to a human body semantic segmentation method;
determining sub-human body region information corresponding to each local characteristic data according to the human body region information;
and determining the a first sub-characteristic data from the n local characteristic data according to the sub-human body region information corresponding to each local characteristic data.
With reference to the second aspect, in a possible implementation manner, in the aspect that the a first sub-feature data are determined from the n local feature data according to the sub-human body region information corresponding to each local feature data, the extraction unit is configured to:
acquiring the proportion value of the area corresponding to the sub-human-body region information to the area of the region corresponding to the corresponding local feature data, to obtain n human body area proportion values;
acquiring, from the n human body area proportion values, the a human body area proportion values that are higher than a preset area proportion value;
and determining the local feature data corresponding to the a human body area proportion values as the a first sub-feature data.
With reference to the second aspect, in one possible implementation manner, the apparatus is further configured to:
and determining the b second sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data, wherein the sum of a and b is n.
With reference to the second aspect, in one possible implementation manner, the apparatus is further configured to:
and carrying out target identification according to the matched image to obtain a target identification result.
A third aspect of the embodiments of the present application provides a terminal, including a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the step instructions in the first aspect of the embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and wherein the computer program causes a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without any creative work.
Fig. 1A is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 1B is a schematic diagram of image segmentation provided in an embodiment of the present application;
FIG. 1C is a diagram illustrating an effect of an image replacement process according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating another image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the image processing method provided in the embodiments of the present application, a brief description is first given of a scene to which the method is applied. The image processing method is applied to target re-identification scenes. For example, in residential community security, a camera captures a picture of a certain pedestrian, and other pictures corresponding to that pedestrian, such as pictures of the pedestrian captured by other cameras in the community, are obtained by matching in a database according to the picture, so as to identify the pedestrian. However, when the camera captures a picture of a pedestrian, the pedestrian may be blocked by other objects, for example by leaves, a dustbin, a vehicle bumper and the like, so that the captured picture contains only a partial human body image of the pedestrian. If such images with occluded human bodies are used for matching, the matching effect is poor; in more serious cases, no corresponding image can be matched at all, so that the matching accuracy drops sharply. To solve this problem of low matching accuracy, occlusion completion processing is performed on the occluded picture of the human body, that is, the occluded human body parts are replaced with similar unoccluded human body images, thereby improving the matching accuracy.
Referring to fig. 1A, fig. 1A is a schematic flow chart of an image processing method according to an embodiment of the present disclosure. The image processing method may be executed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the image processing method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the image processing method may be performed by a server. As shown in fig. 1A, the image processing method may include:
101. performing feature extraction on the image to be detected to obtain a pieces of first sub-feature data, where the first sub-feature data are feature data of unoccluded human body images.
The image to be detected is an image used for re-identifying the target user, and may be, for example, an image of a user walking in a residential community. The target user may be any user who is photographed, for example a resident of the community, a visitor to the community, or another person.
The method for performing feature extraction on the image to be detected to obtain the a pieces of first sub-feature data may be: performing feature extraction on the image to be detected to obtain a plurality of local feature data, and determining the a pieces of first sub-feature data from the local feature data.
The local feature data can be understood as the feature data corresponding to the sub-images obtained after segmentation processing is performed on the image to be detected. The method for segmenting the image to be detected may be uniform segmentation, for example, equally dividing the image to be detected into 2 sub-images, 4 sub-images, 8 sub-images, and the like. One possible segmentation scheme is shown in fig. 1B, which illustrates segmenting the image to be detected into 2 sub-images and into 4 sub-images.
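As a concrete illustration of the uniform segmentation described above, the following is a minimal Python sketch that splits an image into n equal horizontal sub-images. The function name, the use of NumPy arrays, and the horizontal cut direction are assumptions for illustration; the patent does not prescribe an implementation.

```python
import numpy as np

def split_into_sub_images(image: np.ndarray, n: int) -> list:
    """Uniformly segment an H x W x C image into n equal horizontal
    sub-images, as in the 2/4/8 sub-image examples above."""
    h = image.shape[0]
    edges = np.linspace(0, h, n + 1, dtype=int)
    return [image[edges[i]:edges[i + 1]] for i in range(n)]

# Example: a dummy 256 x 128 RGB image split into 4 sub-images.
sub_images = split_into_sub_images(np.zeros((256, 128, 3), dtype=np.uint8), 4)
print([s.shape for s in sub_images])  # 4 strips of shape (64, 128, 3)
```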
102. And determining t first images from a first database according to the a first sub-feature data, wherein the first images comprise feature data matched with the a first sub-feature data.
A plurality of images may be matched from the first database according to each of the first sub-feature data, and the t first images may be obtained from the plurality of images. Matching a plurality of images from the first database for one piece of first sub-feature data may be: taking a fixed number of images in descending order of similarity with the first sub-feature data to obtain the plurality of images. Specifically, for example, the first 10 images in the first database with the highest similarity to the first sub-feature data are taken to obtain the plurality of images. Of course, another number of images is also possible; this is merely an example and not a specific limitation.
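This fixed-number, similarity-ranked retrieval can be sketched as follows. Cosine similarity and all names here are assumptions: the text only requires taking images in descending order of similarity without naming a measure.

```python
import numpy as np

def top_k_images(query_feature: np.ndarray, gallery_features: np.ndarray,
                 k: int = 10) -> np.ndarray:
    """Return the indices of the k gallery entries most similar to the
    query, in descending order of (cosine) similarity."""
    q = query_feature / np.linalg.norm(query_feature)
    g = gallery_features / np.linalg.norm(gallery_features, axis=1, keepdims=True)
    similarities = g @ q  # one cosine similarity per gallery feature
    return np.argsort(-similarities)[:k]

# Example: 100 gallery features of dimension 64, take the top 10.
rng = np.random.default_rng(0)
print(top_k_images(rng.normal(size=64), rng.normal(size=(100, 64)), k=10))
```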
103. And processing at least according to the t first images to obtain a target image corresponding to the image to be detected.
The occluded human body image in the image to be detected may be replaced by means of the t first images to obtain the target image; alternatively, the image corresponding to the mean value of the feature data of the t first images may be determined as the target image.
Specifically, for example, 5 images matching the image of the user walking in the community are found in the first database, and the 5 images are associated with that image. The 5 images are complete images; specifically, the portrait in each of the 5 images is a complete, unoccluded portrait. A specific replacement may be: if the user's legs are blocked by a poster column in the image of the user walking in the community, so that only the user's upper body is visible in the image, the 5 matched images are used to perform occlusion replacement on the image, the occluded legs are replaced, and the replaced target image is obtained. The replaced target image can be regarded as an unoccluded image and can be used for subsequent image retrieval, matching and the like.
104. And matching in a second database according to the target image to obtain a matching image corresponding to the target image.
Matching is performed in the second database according to the target image, and the image with the highest matching degree with the target image is determined as the matching image corresponding to the target image. The matching degree here may be measured by similarity: the higher the similarity, the higher the matching degree, and the lower the similarity, the lower the matching degree.
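A minimal sketch of this highest-matching-degree selection follows; the similarity values and image identifiers are invented placeholders, and the similarities would be computed as in the retrieval sketch above.

```python
import numpy as np

# Assumed example values: similarities between the target image and the
# images of the second database.
similarities = np.array([0.62, 0.91, 0.47])
gallery_ids = ["img_001", "img_002", "img_003"]  # hypothetical identifiers
matching_image = gallery_ids[int(np.argmax(similarities))]
print(matching_image)  # img_002: highest similarity = highest matching degree
```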
The first database and the second database may be the same database or different databases. If they are different databases, the capacity of the first database may be smaller than that of the second database, so that rapid retrieval can be achieved when performing image completion replacement on the occluded image to be detected; after completion replacement, retrieval is performed in the larger second database to obtain the corresponding matching image.
In this example, feature extraction is performed on an image to be detected to obtain a pieces of first sub-feature data, where the first sub-feature data are feature data of unoccluded human body images; t first images are determined from a first database according to the a pieces of first sub-feature data, where the first images include feature data matched with the a pieces of first sub-feature data; a target image corresponding to the image to be detected is obtained by processing at least the t first images; and a matching image corresponding to the target image is obtained by matching in a second database according to the target image. Compared with existing schemes, whose accuracy is low when matching an occluded image to be detected, the t first images are obtained through the feature data of the unoccluded human body images in the image to be detected, and the target image corresponding to the image to be detected is obtained through the t first images. The target image thus carries the information of the unoccluded human body images in the image to be detected, and matching according to the target image to obtain the matching image improves the accuracy of obtaining the matching image.
In a possible implementation manner, a possible method for performing feature extraction on the image to be detected to obtain the a pieces of first sub-feature data includes:
a1, performing feature extraction on the image to be detected to obtain n local feature data;
a2, determining human body region information in the image to be detected according to a human body semantic segmentation method;
a3, determining sub-human body area information corresponding to each local characteristic data according to the human body area information;
a4, determining the a first sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data.
Feature extraction may be performed on the image to be detected through a feature extraction network to obtain the n local feature data. The feature extraction network may be a pre-trained network for feature extraction. The feature extraction network may perform segmentation processing on the image to be detected to obtain a plurality of segmented sub-images, and perform feature extraction on each sub-image to obtain the local feature data corresponding to each sub-image.
The human body semantic segmentation method can determine the human body region information in the image to be detected; a binary image is obtained after segmentation by the human body semantic segmentation method, in which the gray value of human body parts is 255 and the gray value of non-human-body parts is 0. For example, if the head of the user in the image to be detected is blocked, the gray value of the user's head is 0 and the gray values of the remaining body parts are 255. For another example, if the user's legs are blocked, the gray value of the user's legs is 0 and the gray values of the remaining body parts are 255.
The intersection of the human body region and the region corresponding to each piece of local feature data is determined as the sub-human-body region corresponding to that local feature data, thereby obtaining the sub-human-body region information corresponding to the local feature data.
The method for determining the a pieces of first sub-feature data according to the sub-human-body region information corresponding to the local feature data may be: determining the first sub-feature data according to the area proportion of the human body region corresponding to the sub-human-body region information within the region corresponding to the local feature data.
In the example, n local feature data are obtained by performing feature extraction on the image to be detected, and a first sub-feature data are determined according to the sub-human body region information, so that the accuracy of acquiring the first sub-feature data is improved.
In a possible implementation manner, a possible method for determining the a first sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data includes:
b1, acquiring the proportion value of the area corresponding to the sub-human-body region information to the area of the region corresponding to the corresponding local feature data, to obtain n human body area proportion values;
b2, acquiring, from the n human body area proportion values, the a human body area proportion values that are higher than a preset area proportion value;
b3, determining the local feature data corresponding to the a human body area proportion values as the a first sub-feature data.
The preset area proportion value may be set by an empirical value or historical data, and may be, for example, 0.3. Specifically, for example, the image to be detected is divided into 4 sub-images, which may be recorded as a first sub-image, a second sub-image, a third sub-image and a fourth sub-image, ordered from top to bottom; for example, the user's head is in the first sub-image and the legs are in the fourth sub-image. Suppose the user's legs in the image to be detected are occluded, and the leg data is in the local feature data corresponding to the fourth sub-image. Then, whether the local feature data corresponding to the fourth sub-image is first sub-feature data can be determined according to the area proportion value of the leg image in the fourth sub-image. For example, when the preset area proportion value is 0.3 and the area proportion value is smaller than 0.3, the local feature data corresponding to the fourth sub-image is not first sub-feature data (it is second sub-feature data).
The area corresponding to the sub-human-body region information is compared with the region corresponding to the local feature data; specifically, the ratio of the area of the sub-human-body region lying within the region corresponding to the local feature data to the area of that region is determined as the human body area proportion value. The human body area proportion value is thus the proportion of human body area within the whole region corresponding to the local feature data.
In this example, the local feature data corresponding to human body area proportion values higher than the preset area proportion value are determined as the a first sub-feature data; a proportion value higher than the preset value indicates that the human body in that region is not occluded, which improves the accuracy in determining the first sub-feature data.
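Steps B1-B3 can be sketched as follows, assuming the binary mask encoding given earlier (gray value 255 for human body, 0 otherwise), equal horizontal strips, and the example threshold of 0.3; the function and variable names are assumptions.

```python
import numpy as np

def first_sub_feature_indices(body_mask: np.ndarray, n: int,
                              threshold: float = 0.3) -> list:
    """Steps B1-B3: per horizontal strip, compute the proportion of
    human-body pixels (gray value 255 in the segmentation mask) and keep
    the strips whose proportion exceeds the preset value."""
    h = body_mask.shape[0]
    edges = np.linspace(0, h, n + 1, dtype=int)
    kept = []
    for i in range(n):
        strip = body_mask[edges[i]:edges[i + 1]]
        ratio = float((strip == 255).mean())  # human body area proportion value
        if ratio > threshold:
            kept.append(i)
    return kept  # indices whose local feature data are first sub-feature data

# Example: a mask whose bottom quarter (the legs) is occluded (all zeros).
mask = np.full((256, 128), 255, dtype=np.uint8)
mask[192:, :] = 0
print(first_sub_feature_indices(mask, n=4))  # [0, 1, 2]; strip 3 is occluded
```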
In one possible implementation, a possible method for determining the second sub-feature data includes:
and determining the b second sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data, wherein the sum of a and b is n.
The local feature data for which the ratio of the area corresponding to the sub-human-body region information to the area of the region corresponding to the local feature data is smaller than the preset area proportion value are determined as the second sub-feature data.
Of course, the local feature data other than the a first sub-feature data among the n local feature data may also be determined as the second sub-feature data.
In one possible implementation manner, a possible method for determining t first images from a first database according to the a first sub-feature data includes:
c1, acquiring K images corresponding to each first feature data in the a first feature data in a first database to obtain a first image sets;
and C2, acquiring t first images, wherein the first images are images existing in the a first image sets.
The method for acquiring the first image set corresponding to the first feature data in the first database may be:
the images in the first database may be segmented to obtain segmented images corresponding to the first feature data. For example, if the first feature data is feature data obtained by dividing the image to be detected into the first partial image of 2 parts, the divided image corresponding to the first feature data is an image obtained by dividing the image in the database into the first partial image of 2 parts. The method of segmenting the image in the first database is exactly the same as the method of segmenting the image to be detected, e.g. by segmenting through a segmentation network.
The image corresponding to the first feature data is compared with the segmented images to obtain the corresponding similarities, and the complete images corresponding to the K segmented images with the highest similarity are determined as the K images, so as to obtain one first image set.
The images in the intersection of the a first image sets may be determined as the t first images. Of course, the images in a subset of the intersection of the a first image sets may also be determined as the t first images.
In this example, the a first image sets corresponding to the a first feature data are obtained from the first database, and the t first images are obtained from the a first image sets, where the first images are images present in all of the a first image sets, so that the accuracy in obtaining the t first images can be improved.
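A minimal sketch of steps C1-C2 follows: taking the intersection of the a first image sets yields candidates present in every set. The image identifiers and the cap at t are illustrative assumptions; the patent only requires that the t first images lie in all of the sets.

```python
def select_first_images(image_sets: list, t: int) -> list:
    """Steps C1-C2: the t first images are images present in every one of
    the a first image sets, i.e. in their intersection."""
    common = set.intersection(*map(set, image_sets))
    return sorted(common)[:t]  # cap at t; any subset of the intersection works

# Example with a = 3 hypothetical image sets of K = 4 identifiers each.
sets = [{"p1", "p2", "p3", "p7"}, {"p2", "p3", "p5", "p7"}, {"p2", "p3", "p7", "p9"}]
print(select_first_images(sets, t=2))  # ['p2', 'p3']
```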
In a possible implementation manner, a possible method for processing at least according to the t first images to obtain a target image corresponding to the image to be detected includes:
d1, performing feature extraction on the t first images to obtain t reference feature data;
d2, determining target characteristic data according to the t reference characteristic data;
d3, performing feature extraction on the image to be detected to obtain b second sub-feature data, wherein the second sub-feature data are feature data of the shielded human body image;
d4, acquiring feature data corresponding to the b second sub-feature data in the target feature data to obtain b third sub-feature data;
d5, processing the image to be detected according to the b third sub-feature data to obtain the target image.
The method for performing feature extraction on the t first images may be to extract features from the t first images by using a general feature extraction algorithm to obtain the t reference feature data. Alternatively, the feature extraction network in the foregoing embodiment may be adopted to perform feature extraction to obtain the t reference feature data. The feature extraction network may extract local feature data or global feature data, where global feature data can be understood as feature data of the entire image.
The mean of the t reference feature data may be determined as the target feature data.
The method for extracting the features of the image to be detected to obtain the b second sub-feature data may refer to the method shown in the foregoing embodiment, and details are not described here.
The third sub-feature data and the second sub-feature data are in one-to-one correspondence, and the correspondence is by position; for the specific correspondence method, refer to the correspondence between the first sub-feature data and the segmented images in the foregoing embodiment.
The feature data in the image to be detected corresponding to the third sub-feature data may be replaced with the third sub-feature data, thereby obtaining the target image.
In this example, the target image is obtained by processing b third sub-feature data corresponding to b second sub-feature data in the image to be detected, where the b third sub-feature data is determined from the target feature data determined by the t reference feature data, and may carry information of the t reference feature data, so as to improve accuracy in determining the target image.
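Steps D1-D5, together with the replacement of step E1, can be sketched as follows. It is a minimal sketch assuming local features are stored as an (n, d) array so that "corresponding" means the same strip index, and assuming the mean of the t reference feature data is used as the target feature data (one of the options described above); all names are invented.

```python
import numpy as np

def complete_query_features(query_features: np.ndarray,
                            occluded_indices: list,
                            reference_features: np.ndarray) -> np.ndarray:
    """Steps D1-D5 and E1: average the t reference feature data into the
    target feature data, then overwrite the b second sub-feature data of
    the query with the corresponding b third sub-feature data."""
    target_feature_data = reference_features.mean(axis=0)  # step D2
    completed = query_features.copy()
    # Third sub-feature data correspond to second sub-feature data by
    # position (strip index), as described above.
    completed[occluded_indices] = target_feature_data[occluded_indices]
    return completed

# Example: n = 4 local features of dimension 8, strip 3 occluded, t = 5.
rng = np.random.default_rng(1)
query = rng.normal(size=(4, 8))
refs = rng.normal(size=(5, 4, 8))  # t = 5 reference feature data
print(complete_query_features(query, [3], refs).shape)  # (4, 8)
```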
In a possible implementation manner, a possible processing of the image to be detected according to the b third sub-feature data to obtain the target image includes:
e1, replacing the b second sub-feature data in the feature data of the image to be detected with the b corresponding third sub-feature data to obtain the replaced feature data of the image to be detected;
e2, determining the image corresponding to the replaced characteristic data as the target image.
Specifically, fig. 1C shows an effect diagram of an image replacement process. In fig. 1C, the lower part of the legs in the image to be detected is occluded; the occluded part is replaced, and the leg features are completed in the replaced image, so that a complete human body image is obtained.
When the feature data are replaced, they are replaced entirely according to the format of the original feature data. Since the third sub-feature data are feature data obtained from unoccluded human body images, no occluded human body image exists in the target image obtained after the replacement.
In one possible implementation manner, a possible determination of target feature data according to the t reference feature data includes:
and determining the mean value of the t reference characteristic data as the target characteristic data.
The mean value of the reference characteristic data is determined as the target characteristic data, so that the accuracy of determining the target characteristic data can be improved.
In a possible implementation manner, another possible processing is performed at least according to the t first images to obtain a target image corresponding to the image to be detected, including:
f1, performing feature extraction on the t first images to obtain t reference feature data;
f2, determining target characteristic data according to the t reference characteristic data;
and F3, determining the image corresponding to the target characteristic data as the target image.
The above steps F1 and F2 may refer to the implementation manners of the steps D1 and D2 in the foregoing embodiments, and are not described herein again.
In this example, if the image corresponding to the target feature data is determined as the target image, the continuity of the target image can be improved, so that the reliability in matching can be improved.
In a possible implementation manner, the embodiment of the present application may further perform target identification according to the matching image, which specifically includes:
and carrying out target identification according to the matched image to obtain a target identification result.
Target recognition based on the matching image may be performed by acquiring the attribute information corresponding to the matching image and determining the attribute information as the target recognition result. The attribute information may include identity information of the user in the matching image, occupation category, residential address information, and the like.
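A minimal sketch of this attribute lookup; the store layout, keys and all values are invented placeholders, not data from the patent.

```python
# Hypothetical attribute store: the target recognition result is the
# attribute information kept alongside each database image.
attributes = {
    "img_002": {"identity": "resident_17", "occupation": "teacher",
                "address": "building B, unit 3"},
}

def recognize_target(matching_image_id: str) -> dict:
    """Return the attribute information of the matching image as the
    target recognition result (empty if none is stored)."""
    return attributes.get(matching_image_id, {})

print(recognize_target("img_002"))
```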
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another image processing method according to an embodiment of the present disclosure. As shown in fig. 2, the method includes:
201. performing feature extraction on an image to be detected to obtain a pieces of first sub-feature data, wherein the first sub-feature data are feature data of unoccluded human body images;
202. determining t first images from a first database according to the a first sub-feature data, wherein the first images comprise feature data matched with the a first sub-feature data;
203. performing feature extraction on the t first images to obtain t reference feature data;
204. determining target characteristic data according to the t reference characteristic data;
205. performing feature extraction on the image to be detected to obtain b second sub-feature data, wherein the second sub-feature data are feature data of a shielded human body image;
206. acquiring feature data corresponding to the b second sub-feature data in the target feature data to obtain b third sub-feature data;
207. processing the image to be detected according to the b third sub-feature data to obtain the target image;
208. and matching in a second database according to the target image to obtain a matching image corresponding to the target image.
In this example, the target image is obtained according to the target feature data determined from the t reference feature data; since the t first images are unoccluded images, the target image is also an unoccluded image, which improves the accuracy in subsequent image matching.
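Putting steps 201-208 together on precomputed local features, a minimal end-to-end sketch might look as follows. It reuses the hypothetical helpers sketched earlier (first_sub_feature_indices, top_k_images, select_first_images and complete_query_features); none of these names, the inner-product matching in step 208, or the array layouts come from the patent.

```python
import numpy as np

def reidentify(query_feats, body_mask, db1_feats, db2_feats, db2_ids,
               n=4, t=5):
    """End-to-end sketch of steps 201-208 on precomputed local features.

    query_feats: (n, d) local feature data of the image to be detected.
    db1_feats:   (M, n, d) local feature data of the first database images.
    db2_feats:   (N, n, d) local feature data of the second database images.
    """
    kept = first_sub_feature_indices(body_mask, n)     # 201: the a unoccluded strips
    occluded = [i for i in range(n) if i not in kept]  # 205: the b occluded strips
    # 202: one first image set per first sub-feature datum, then intersect.
    sets = [top_k_images(query_feats[i], db1_feats[:, i, :]).tolist()
            for i in kept]
    first_idx = select_first_images(sets, t)           # the t first images
    # 203-207: mean the t reference feature data, complete the occluded strips.
    completed = complete_query_features(query_feats, occluded,
                                        db1_feats[list(first_idx)])
    # 208: inner-product similarity of the completed (target) features
    # against the second database; the best match is returned.
    sims = db2_feats.reshape(len(db2_ids), -1) @ completed.reshape(-1)
    return db2_ids[int(np.argmax(sims))]
```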
In accordance with the foregoing embodiments, please refer to fig. 3, fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application, and as shown in the drawing, the terminal includes a processor, an input device, an output device, and a memory, and the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, the computer program includes program instructions, the processor is configured to call the program instructions, and the program includes instructions for performing the following steps;
performing feature extraction on an image to be detected to obtain a pieces of first sub-feature data, wherein the first sub-feature data are feature data of unoccluded human body images;
determining t first images from a first database according to the a pieces of first sub-feature data, wherein the first images comprise feature data matched with the a pieces of first sub-feature data;
processing at least according to the t first images to obtain a target image corresponding to the image to be detected;
and matching in a second database according to the target image to obtain a matched image corresponding to the target image.
In one possible implementation manner, the determining t first images from the first database according to the a first sub-feature data includes:
acquiring K images corresponding to each first feature data in the a first feature data in a first database to obtain a first image sets;
and acquiring the t first images, wherein the first images are images present in all of the a first image sets.
In a possible implementation manner, the processing at least according to the t first images to obtain a target image corresponding to the image to be detected includes:
performing feature extraction on the t first images to obtain t reference feature data;
determining target characteristic data according to the t reference characteristic data;
performing feature extraction on the image to be detected to obtain b second sub-feature data, wherein the second sub-feature data are feature data of a shielded human body image;
acquiring feature data corresponding to the b second sub-feature data in the target feature data to obtain b third sub-feature data;
and processing the image to be detected according to the b third sub-characteristic data to obtain the target image.
In a possible implementation manner, the processing the image to be detected according to the b third sub-feature data to obtain the target image includes:
replacing the b second sub-feature data in the feature data of the image to be detected with the b third sub-feature data to obtain the feature data of the image to be detected after replacement;
and determining the image corresponding to the replaced characteristic data as the target image.
In one possible implementation manner, the determining target feature data according to the t reference feature data includes:
and determining the mean value of the t reference characteristic data as the target characteristic data.
In a possible implementation manner, the processing at least according to the t first images to obtain a target image corresponding to the image to be detected includes:
performing feature extraction on the t first images to obtain t reference feature data;
determining target characteristic data according to the t reference characteristic data;
and determining the image corresponding to the target characteristic data as the target image.
In a possible implementation manner, the performing feature extraction on the image to be detected to obtain a pieces of first sub-feature data includes:
extracting the characteristics of the image to be detected to obtain n local characteristic data;
determining human body region information in the image to be detected according to a human body semantic segmentation method;
determining sub-human body region information corresponding to each local characteristic data according to the human body region information;
and determining the a first sub-characteristic data from the n local characteristic data according to the sub-human body region information corresponding to each local characteristic data.
In a possible implementation manner, the determining, according to the sub-human body region information corresponding to each local feature data, the a first sub-feature data from the n local feature data includes:
acquiring the proportion value of the area corresponding to the sub-human-body region information to the area of the region corresponding to the corresponding local feature data, to obtain n human body area proportion values;
acquiring, from the n human body area proportion values, the a human body area proportion values that are higher than a preset area proportion value;
and determining the local feature data corresponding to the a human body area proportion values as the a first sub-feature data.
In one possible implementation, the method further includes:
and determining the b second sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data, wherein the sum of a and b is n.
In one possible implementation, the method further includes:
and carrying out target identification according to the matched image to obtain a target identification result.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to implement the above functions, the terminal includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the terminal may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In accordance with the above, please refer to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus includes:
the extraction unit 401 is configured to perform feature extraction on an image to be detected to obtain a pieces of first sub-feature data, where the first sub-feature data are feature data of unoccluded human body images;
a determining unit 402, configured to determine t first images from a first database according to the a first sub-feature data, where the first images include feature data matching the a first sub-feature data;
a processing unit 403, configured to perform processing at least according to the t first images to obtain a target image corresponding to the image to be detected;
a matching unit 404, configured to perform matching in a second database according to the target image, so as to obtain a matching image corresponding to the target image.
In one possible implementation manner, the determining unit 402 is configured to:
acquiring K images corresponding to each first feature data in the a first feature data in a first database to obtain a first image sets;
and acquiring the t first images, wherein the first images are images present in all of the a first image sets.
In one possible implementation manner, the processing unit 403 is configured to:
performing feature extraction on the t first images to obtain t reference feature data;
determining target characteristic data according to the t reference characteristic data;
performing feature extraction on the image to be detected to obtain b second sub-feature data, wherein the second sub-feature data are feature data of a shielded human body image;
acquiring feature data corresponding to the b second sub-feature data in the target feature data to obtain b third sub-feature data;
and processing the image to be detected according to the b third sub-characteristic data to obtain the target image.
In a possible implementation manner, in the aspect of processing the image to be detected according to the b third sub-feature data to obtain the target image, the processing unit 403 is configured to:
replace the b second sub-feature data in the feature data of the image to be detected with the b third sub-feature data, to obtain replaced feature data of the image to be detected;
and determine the image corresponding to the replaced feature data as the target image.
In a possible implementation manner, in the aspect of determining the target feature data according to the t reference feature data, the processing unit 403 is configured to:
determine the mean value of the t reference feature data as the target feature data.
In a possible implementation manner, in the aspect of processing at least according to the t first images to obtain the target image corresponding to the image to be detected, the processing unit 403 is configured to:
perform feature extraction on the t first images to obtain t reference feature data;
determine target feature data according to the t reference feature data;
and determine the image corresponding to the target feature data as the target image.
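In this alternative reading, the target image is derived directly from the target feature data rather than by patching the probe. A hedged sketch follows, assuming the target image is taken to be the stored image whose features lie closest to the target feature data; the embodiment leaves this mapping open.

    import numpy as np

    def image_for_target_features(target_feats, db_feats, db_images):
        # target_feats: (n, d); db_feats: (m, n*d) flattened features of m stored images
        diffs = np.linalg.norm(db_feats - target_feats.ravel(), axis=1)
        return db_images[int(np.argmin(diffs))]  # image corresponding to the target feature data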
In one possible implementation manner, the extraction unit 401 is configured to:
perform feature extraction on the image to be detected to obtain n local feature data;
determine human body region information in the image to be detected according to a human body semantic segmentation method;
determine sub-human body region information corresponding to each local feature data according to the human body region information;
and determine the a first sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data.
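A minimal sketch of this step follows, under two assumptions the embodiment does not fix: the n local feature data correspond to n horizontal stripes of the image, and the human semantic segmentation produces a binary person mask.

    import numpy as np

    def sub_region_info(person_mask, n):
        # person_mask: (H, W) binary mask from human semantic segmentation
        # Each of the n local feature data is assumed to cover one horizontal stripe,
        # so the stripe of the mask serves as its sub-human body region information.
        return np.array_split(person_mask, n, axis=0)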
In a possible implementation manner, in the aspect of determining the a first sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data, the extraction unit 401 is configured to:
acquire the proportion value of the area corresponding to the sub-human body region information to the area corresponding to the respective local feature data, to obtain n human body area proportion values;
acquire, from the n human body area proportion values, the a human body area proportion values that are higher than a preset area proportion value;
and determine the local feature data corresponding to the a human body area proportion values as the a first sub-feature data.
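Continuing the stripe assumption above, the proportion test might look like the sketch below; min_ratio stands in for the preset area proportion value and is a hypothetical parameter.

    import numpy as np

    def select_sub_features(local_feats, regions, min_ratio=0.3):
        # local_feats: (n, d) local feature data; regions: list of n stripe masks
        ratios = np.array([r.mean() for r in regions])  # n human body area proportion values
        keep = ratios > min_ratio                       # above the preset area proportion value
        first_sub = local_feats[keep]                   # the a first sub-feature data
        second_sub = local_feats[~keep]                 # the b second sub-feature data
        return first_sub, second_sub                    # a + b = n by construction

The complementary set directly yields the b second sub-feature data mentioned below, so the relation a + b = n falls out of the split.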
In one possible implementation, the apparatus is further configured to:
and determining the b second sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data, wherein the sum of a and b is n.
In one possible implementation, the apparatus is further configured to:
and carrying out target identification according to the matched image to obtain a target identification result.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any one of the image processing methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program causes a computer to execute part or all of the steps of any one of the image processing methods as described in the above method embodiments.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations; however, those skilled in the art will recognize that the present application is not limited by the order of the actions described, because some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a logical function division, and another division manner may be used in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include a flash disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like.
The embodiments of the present application have been described in detail above, and specific examples have been used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for a person of ordinary skill in the art, there may be changes in the specific implementation and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (13)

1. An image processing method, characterized in that the method comprises:
performing feature extraction on an image to be detected to obtain a first sub-feature data, wherein the first sub-feature data are feature data of an unoccluded human body image;
determining t first images from a first database according to the a first sub-feature data, wherein the first images comprise feature data matched with the a first sub-feature data;
processing at least according to the t first images to obtain a target image corresponding to the image to be detected;
and matching in a second database according to the target image to obtain a matched image corresponding to the target image.
2. The method according to claim 1, wherein said determining t first images from a first database according to the a first sub-feature data comprises:
acquiring K images corresponding to each first sub-feature data in the a first sub-feature data in the first database, to obtain a first image sets;
and acquiring the t first images, wherein the first images are the images present in every one of the a first image sets.
3. The method according to claim 1 or 2, wherein said processing at least according to said t first images to obtain a target image corresponding to said image to be detected comprises:
performing feature extraction on the t first images to obtain t reference feature data;
determining target feature data according to the t reference feature data;
performing feature extraction on the image to be detected to obtain b second sub-feature data, wherein the second sub-feature data are feature data of an occluded human body image;
acquiring feature data corresponding to the b second sub-feature data in the target feature data to obtain b third sub-feature data;
and processing the image to be detected according to the b third sub-feature data to obtain the target image.
4. The method according to claim 3, wherein the processing the image to be detected according to the b third sub-feature data to obtain the target image comprises:
replacing the b second sub-feature data in the feature data of the image to be detected with the b corresponding third sub-feature data, to obtain replaced feature data of the image to be detected;
and determining the image corresponding to the replaced feature data as the target image.
5. The method according to claim 3 or 4, wherein determining target feature data from the t reference feature data comprises:
and determining the mean value of the t reference feature data as the target feature data.
6. The method according to claim 1 or 2, wherein said processing at least according to said t first images to obtain a target image corresponding to said image to be detected comprises:
performing feature extraction on the t first images to obtain t reference feature data;
determining target feature data according to the t reference feature data;
and determining the image corresponding to the target feature data as the target image.
7. The method according to any one of claims 1 to 6, wherein the extracting features of the image to be detected to obtain a first sub-feature data comprises:
performing feature extraction on the image to be detected to obtain n local feature data;
determining human body region information in the image to be detected according to a human body semantic segmentation method;
determining sub-human body region information corresponding to each local feature data according to the human body region information;
and determining the a first sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data.
8. The method according to claim 7, wherein the determining the a first sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data comprises:
acquiring a proportion value of the area corresponding to the sub-human body region information to the area corresponding to the respective local feature data, to obtain n human body area proportion values;
acquiring, from the n human body area proportion values, the a human body area proportion values that are higher than a preset area proportion value;
and determining the local feature data corresponding to the a human body area proportion values as the a first sub-feature data.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
and determining the b second sub-feature data from the n local feature data according to the sub-human body region information corresponding to each local feature data, wherein the sum of a and b is n.
10. The method according to any one of claims 1-8, further comprising:
and carrying out target identification according to the matched image to obtain a target identification result.
11. An image processing apparatus, characterized in that the apparatus comprises:
the device comprises an extraction unit, a detection unit and a comparison unit, wherein the extraction unit is used for extracting the characteristics of an image to be detected to obtain a first sub-characteristic data, and the first sub-characteristic data is the characteristic data of an unoccluded human body image;
a determining unit, configured to determine t first images from a first database according to the a first sub-feature data, where the first images include feature data matched with the a first sub-feature data;
the processing unit is used for processing at least according to the t first images to obtain a target image corresponding to the image to be detected;
and the matching unit is used for matching in a second database according to the target image to obtain a matching image corresponding to the target image.
12. A terminal, comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-10.
CN202011642494.5A 2020-12-31 2020-12-31 Image processing method and related device Pending CN113158732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011642494.5A CN113158732A (en) 2020-12-31 2020-12-31 Image processing method and related device


Publications (1)

Publication Number Publication Date
CN113158732A true CN113158732A (en) 2021-07-23

Family

ID=76878260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011642494.5A Pending CN113158732A (en) 2020-12-31 2020-12-31 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN113158732A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
CN109740541A (en) * 2019-01-04 2019-05-10 重庆大学 A kind of pedestrian weight identifying system and method
WO2020215552A1 (en) * 2019-04-26 2020-10-29 平安科技(深圳)有限公司 Multi-target tracking method, apparatus, computer device, and storage medium
CN110110681A (en) * 2019-05-14 2019-08-09 哈尔滨理工大学 It is a kind of for there is the face identification method blocked
CN110569731A (en) * 2019-08-07 2019-12-13 北京旷视科技有限公司 face recognition method and device and electronic equipment
CN112115886A (en) * 2020-09-22 2020-12-22 北京市商汤科技开发有限公司 Image detection method and related device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEI-SHI ZHENG et al.: "Partial Person Re-Identification", 2015 IEEE International Conference on Computer Vision (ICCV), vol. 2016, 18 February 2016 (2016-02-18) *

Similar Documents

Publication Publication Date Title
CN109255352B (en) Target detection method, device and system
CN108960266B (en) Image target detection method and device
CN109697416B (en) Video data processing method and related device
JP5775225B2 (en) Text detection using multi-layer connected components with histograms
CN108256404B (en) Pedestrian detection method and device
EP3493101A1 (en) Image recognition method, terminal, and nonvolatile storage medium
US9489566B2 (en) Image recognition apparatus and image recognition method for identifying object
CN111598067B (en) Re-recognition training method, re-recognition method and storage device in video
WO2022217876A1 (en) Instance segmentation method and apparatus, and electronic device and storage medium
CN103198311A (en) Method and apparatus for recognizing a character based on a photographed image
CN112200115B (en) Face recognition training method, recognition method, device, equipment and storage medium
CN111444976A (en) Target detection method and device, electronic equipment and readable storage medium
CN110765903A (en) Pedestrian re-identification method and device and storage medium
CN107578011A (en) The decision method and device of key frame of video
CN115062186B (en) Video content retrieval method, device, equipment and storage medium
CN111079648A (en) Data set cleaning method and device and electronic system
CN115909176A (en) Video semantic segmentation method and device, electronic equipment and storage medium
CN115223022A (en) Image processing method, device, storage medium and equipment
CN113269010A (en) Training method and related device for human face living body detection model
Gudavalli et al. SeeTheSeams: Localized detection of seam carving based image forgery in satellite imagery
CN104252618B (en) method and system for improving photo return speed
CN113221922B (en) Image processing method and related device
CN106295693B (en) A kind of image-recognizing method and device
CN113158732A (en) Image processing method and related device
Groeneweg et al. A fast offline building recognition application on a mobile telephone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination