CN114155576A - Image processing method and device

Info

Publication number
CN114155576A
Authority
CN
China
Prior art keywords
quality
sub
identity information
file
information
Legal status
Pending
Application number
CN202111345656.3A
Other languages
Chinese (zh)
Inventor
张宏
周明伟
陈立力
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202111345656.3A
Publication of CN114155576A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/23 Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image processing method and device, which reasonably utilize low-quality pictures to make archives more complete. The method comprises the following steps: dividing a plurality of images into low-quality images and high-quality images; clustering the high-quality images based on identity information to obtain a high-quality sub-archive corresponding to each piece of identity information; clustering the low-quality images based on vehicle identifiers to obtain a low-quality sub-archive corresponding to each vehicle identifier; labeling the low-quality sub-archive corresponding to the vehicle identifier with identity information; and merging the low-quality sub-archive and the high-quality sub-archive that have the same identity information to obtain the archive corresponding to the identity information. Because the low-quality pictures are reasonably utilized, the archives are more complete.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
With the continuous upgrading of nationwide security prevention and control, more and more scenes are covered by intelligent capture devices. Because the capture scene is not controllable, low-quality pictures are often captured. For example, weather can cause the brightness of a captured picture to vary, or even make it under-exposed or over-exposed; severely insufficient illumination at night can produce heavy image noise; and when a person is moving, the captured person may be severely blurred.
In portrait clustering schemes, low-quality pictures are usually discarded directly to reduce the amount of stored data and the computational overhead. However, an archive that lacks the data related to those low-quality pictures is incomplete, which may affect subsequent applications. For example, discarding low-quality pictures taken by a traffic capture device may leave the vehicle trajectory incomplete.
Therefore, how to reasonably utilize the low-quality pictures and make the archives more complete is a technical problem to be solved.
Disclosure of Invention
The invention provides an image processing method and device, which reasonably utilize low-quality pictures to make archives more complete.
In order to achieve the above object, an embodiment of the present invention discloses an image processing method, including:
acquiring a plurality of images;
dividing the plurality of images into a first quality image and a second quality image, the second quality being better than the first quality;
clustering the second quality images based on identity information to obtain a second quality sub-archive corresponding to each piece of identity information;
clustering the first quality images based on vehicle identifiers to obtain a first quality sub-archive corresponding to each vehicle identifier; labeling the first quality sub-archive corresponding to the vehicle identifier with identity information;
and merging the first quality sub-archive and the second quality sub-archive that have the same identity information to obtain an archive corresponding to the identity information.
In one example, the method further comprises: acquiring related information of an image, wherein the related information of the image comprises one or more of the following items: a checkpoint identifier, time information, longitude and latitude information, scene information, coordinate information, a quality score, sharpness, a pitch angle, a width, and a confidence.
In one example, dividing the plurality of images into a first quality image and a second quality image includes: dividing the plurality of images into a first quality image and a second quality image based on part of the related information of the images and a threshold corresponding to that part of the related information; the part of the related information includes one or more of: quality score, sharpness, pitch angle, width, and confidence.
In one example, before labeling the first quality sub-archive corresponding to the vehicle identifier with identity information, the method further comprises: for any vehicle identifier, determining a first quality trajectory corresponding to the vehicle identifier according to the first quality sub-archive corresponding to the vehicle identifier; and comparing the first quality trajectory corresponding to the vehicle identifier with an accurate trajectory corresponding to the vehicle identifier, and determining that the similarity is greater than or equal to a set threshold.
In one example, clustering the second quality images based on identity information to obtain a second quality sub-archive corresponding to each piece of identity information comprises: clustering the second quality images based on face information to obtain a second quality face sub-archive corresponding to each piece of face information; clustering the second quality images based on human body information to obtain a second quality human body sub-archive corresponding to each piece of human body information, and labeling the second quality human body sub-archive with face information; merging the second quality face sub-archive and the second quality human body sub-archive that have the same face information to obtain a second quality sub-archive corresponding to the face information; and searching for identity information associated with the face information, and associating the identity information with the second quality sub-archive corresponding to the face information to obtain a second quality sub-archive corresponding to each piece of identity information.
In one example, before determining the first quality trajectory corresponding to the vehicle identifier according to the first quality sub-archive corresponding to the vehicle identifier, the method further includes: determining an overlapping vehicle identifier that corresponds to both a first quality sub-archive and a second quality image, and filtering out the first quality sub-archive corresponding to the overlapping vehicle identifier from the first quality sub-archives.
In one example, labeling the first quality sub-archive corresponding to the vehicle identifier with identity information comprises: querying identity information associated with the vehicle identifier; if one piece of identity information is found, labeling the first quality sub-archive corresponding to the vehicle identifier with the found identity information; and if a plurality of pieces of identity information are found, comparing the first quality trajectory corresponding to the vehicle identifier with second quality trajectories corresponding to the plurality of pieces of identity information respectively, determining the target identity information whose trajectory similarity is highest, and labeling the first quality sub-archive corresponding to the vehicle identifier with the target identity information, wherein a second quality trajectory is determined according to a second quality archive.
The present application provides an image processing apparatus, the apparatus including:
the acquisition module is used for acquiring a plurality of images;
a processing module for dividing the plurality of images into a first quality image and a second quality image, the second quality being better than the first quality; clustering the second quality images based on identity information to obtain a second quality sub-archive corresponding to each piece of identity information; clustering the first quality images based on vehicle identifiers to obtain a first quality sub-archive corresponding to each vehicle identifier; and labeling the first quality sub-archive corresponding to the vehicle identifier with identity information;
and the merging module is used for merging the first quality sub-archive and the second quality sub-archive that have the same identity information to obtain an archive corresponding to the identity information.
In one example, the obtaining module is further configured to obtain related information of an image, where the related information of the image includes one or more of the following: a checkpoint identifier, time information, longitude and latitude information, scene information, coordinate information, sharpness, a pitch angle, a width, and a confidence.
In one example, when dividing the plurality of images into a first quality image and a second quality image, the processing module is specifically configured to: divide the plurality of images into a first quality image and a second quality image based on part of the related information of the images and a threshold corresponding to that part of the related information; the part of the related information includes one or more of: sharpness, pitch angle, width, and confidence.
In one example, the processing module is further configured to: for any vehicle identifier, determine a first quality trajectory corresponding to the vehicle identifier according to the first quality sub-archive corresponding to the vehicle identifier; and compare the first quality trajectory corresponding to the vehicle identifier with an accurate trajectory corresponding to the vehicle identifier, and determine that the similarity is greater than or equal to a set threshold.
In one example, when clustering the second quality images based on identity information to obtain a second quality sub-archive corresponding to each piece of identity information, the processing module is specifically configured to: cluster the second quality images based on face information to obtain a second quality face sub-archive corresponding to each piece of face information; cluster the second quality images based on human body information to obtain a second quality human body sub-archive corresponding to each piece of human body information, and label the second quality human body sub-archive with face information; merge the second quality face sub-archive and the second quality human body sub-archive that have the same face information to obtain a second quality sub-archive corresponding to the face information; and search for identity information associated with the face information, and associate the identity information with the second quality sub-archive corresponding to the face information to obtain a second quality sub-archive corresponding to each piece of identity information.
In one example, the apparatus further comprises: a filtering module for determining an overlapping vehicle identifier that corresponds to both a first quality sub-archive and a second quality image, and filtering out the first quality sub-archive corresponding to the overlapping vehicle identifier from the first quality sub-archives.
In one example, when labeling the first quality sub-archive corresponding to the vehicle identifier with identity information, the processing module is specifically configured to:
query identity information associated with the vehicle identifier; if one piece of identity information is found, label the first quality sub-archive corresponding to the vehicle identifier with the found identity information; and if a plurality of pieces of identity information are found, compare the first quality trajectory corresponding to the vehicle identifier with second quality trajectories corresponding to the plurality of pieces of identity information respectively, determine the target identity information whose trajectory similarity is highest, and label the first quality sub-archive corresponding to the vehicle identifier with the target identity information, wherein a second quality trajectory is determined according to a second quality archive.
The application provides an image processing apparatus, comprising a processor and a memory;
the memory for storing computer programs or instructions;
the processor is configured to execute part or all of the computer program or instructions in the memory, and when the part or all of the computer program or instructions are executed, the processor is configured to implement the image processing method.
The present application provides a computer-readable storage medium for storing a computer program comprising instructions for implementing the above-described method.
The present application provides a computer program product, the computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the above-described method.
Although the low-quality images in the present application do not participate in portrait clustering (that is, clustering based on identity information), they are not discarded. The low-quality images are clustered according to vehicle identifiers to obtain low-quality sub-archives, and the low-quality sub-archives are labeled with identity information, so that the low-quality sub-archives obtained by vehicle-identifier clustering can be associated, through the identity information, with the high-quality sub-archives obtained by portrait clustering. In this way, the images that would otherwise be discarded by portrait clustering (namely the low-quality images) are reasonably archived; they do not participate in portrait clustering and therefore do not affect the centroid of the archive (a reference or representative image of the archive).
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of an image processing process provided herein;
FIG. 2 is a schematic diagram of an image processing process provided herein;
FIG. 3 is a schematic diagram of an image processing process provided herein;
FIG. 4 is a schematic diagram of an image processing process provided herein;
fig. 5 is a structural diagram of an image processing apparatus provided in the present application;
fig. 6 is a structural diagram of an image processing apparatus according to the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an image processing process provided in the present application; the process may be executed by any electronic device. The process comprises the following steps:
step 11: a plurality of images are acquired.
This application mainly focuses on images of a person driving on a road; the acquired images represent the person driving on the road and may also be called vehicle-window images. The plurality of images include, but are not limited to, one or more of the following: a face, a human body, and a vehicle identifier. A vehicle identifier is, for example, a license plate number, a vehicle color, or a vehicle brand. Capture devices or video recording devices can be deployed at traffic intersections, checkpoints, roads, schools, hospitals, and the like, and are used to acquire the plurality of images or to select multiple frames of images from a video.
Optionally, related information of the images may also be acquired. The related information of an image includes one or more of the following items: a checkpoint identifier, an acquisition time, longitude and latitude information, scene information, coordinate information (such as the coordinates of the face/human body/license plate in the image), a quality score (such as a weighted value of the confidence), image sharpness, an image pitch angle, an image width, an image confidence, and an identifier of the device that acquired the image. The image sharpness, image pitch angle, image width, image coordinates, and the like may be called attribute information of the image. The checkpoint identifier, the spatio-temporal information (such as time, longitude and latitude), the scene information (such as school, village, hospital, bank), and the like can be used to determine a trajectory. As a person or vehicle moves (for example, turns or looks sideways), the orientation of the face, human body, or license plate changes; the pitch angle refers to this orientation.
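As a non-limiting illustration, the related information of one image could be organized as a simple record. The sketch below (in Python) uses assumed field names and types; they are not prescribed by this application.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class ImageRecord:
        """Related information of one captured image (field names are illustrative)."""
        image_id: str
        checkpoint_id: str                              # identifier of the capture checkpoint
        capture_time: float                             # acquisition time, e.g. a UNIX timestamp
        longitude: float
        latitude: float
        scene: str                                      # e.g. "school", "hospital", "bank"
        face_box: Optional[Tuple[int, int, int, int]]   # (x, y, w, h) of the face, if any
        plate_number: Optional[str]                     # vehicle identifier, if any
        quality_score: float                            # e.g. a weighted value of the confidence
        sharpness: float
        pitch_angle: float                              # orientation of the face/body/plate
        width: int
        confidence: float
        device_id: str                                  # identifier of the acquiring device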
The process of step 11 will be described in detail later.
Step 12: the plurality of images are divided into a first (e.g., low) quality image and a second (e.g., high) quality image.
It should be noted that the first quality and the second quality mentioned in this application are used to compare image quality, the second quality being better than the first quality. The terms "first" and "second" may be replaced with other words that describe image quality; for example, "first" may be replaced with "low" or "medium-low", and "second" may be replaced with "high" or "medium-high". For ease of understanding, the following description takes the case where the first quality is low quality and the second quality is high quality as an example.
When distinguishing images of different qualities, the plurality of images may be divided into low-quality images and high-quality images based on part of the related information of the images and the thresholds corresponding to that information. The part of the related information includes one or more of: quality score, sharpness, pitch angle, picture width, and confidence.
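As one possible sketch of this division (the threshold values below are assumptions chosen for illustration and are not disclosed by this application):

    # Illustrative thresholds; the actual values are configuration choices.
    THRESHOLDS = {
        "quality_score": 0.6,
        "sharpness": 0.5,
        "pitch_angle": 30.0,   # degrees; larger angles count as lower quality
        "width": 64,           # minimum picture width in pixels
        "confidence": 0.7,
    }

    def is_high_quality(rec: dict) -> bool:
        """Return True if the image meets every threshold for which it has a value."""
        return all([
            rec.get("quality_score", 1.0) >= THRESHOLDS["quality_score"],
            rec.get("sharpness", 1.0) >= THRESHOLDS["sharpness"],
            abs(rec.get("pitch_angle", 0.0)) <= THRESHOLDS["pitch_angle"],
            rec.get("width", THRESHOLDS["width"]) >= THRESHOLDS["width"],
            rec.get("confidence", 1.0) >= THRESHOLDS["confidence"],
        ])

    def split_by_quality(records):
        """Divide image records into (low_quality, high_quality) lists."""
        low, high = [], []
        for rec in records:
            (high if is_high_quality(rec) else low).append(rec)
        return low, high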
Step 13: cluster the high-quality images based on identity information to obtain a high-quality sub-archive corresponding to each piece of identity information.
The high-quality sub-archive corresponding to any piece of identity information comprises the high-quality images corresponding to that identity information and the related information of those high-quality images.
The identity information can be face information or identity card information.
The high-quality images may include face images and human body images. The process of clustering the high-quality images based on identity information to obtain a high-quality sub-archive corresponding to each piece of identity information is introduced below (an illustrative sketch follows these steps):
clustering the high-quality images based on face information to obtain a high-quality face sub-archive corresponding to each piece of face information;
clustering the high-quality images based on human body information to obtain a high-quality human body sub-archive corresponding to each piece of human body information, and labeling the high-quality human body sub-archive with face information;
merging the high-quality face sub-archive and the high-quality human body sub-archive that have the same face information to obtain a high-quality sub-archive corresponding to the face information;
and searching for identity information associated with the face information, and associating the identity information with the high-quality sub-archive corresponding to the face information to obtain a high-quality sub-archive corresponding to each piece of identity information.
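A minimal sketch of step 13, assuming that face clustering and human body clustering have already assigned a face identifier to each high-quality image record, and that a lookup table maps face identifiers to identity information (all names below are illustrative assumptions):

    from collections import defaultdict

    def build_high_quality_sub_archives(high_quality_records, face_id_to_identity):
        """Group high-quality image records into per-identity sub-archives.

        Each record is a dict that may carry a 'face_id' (from face clustering)
        and/or a 'body_face_id' (a human body sub-archive labeled with face info).
        """
        face_archives = defaultdict(list)   # face sub-archives keyed by face_id
        body_archives = defaultdict(list)   # human body sub-archives keyed by face_id

        for rec in high_quality_records:
            if rec.get("face_id") is not None:
                face_archives[rec["face_id"]].append(rec)
            if rec.get("body_face_id") is not None:
                body_archives[rec["body_face_id"]].append(rec)

        # Merge face and human body sub-archives that share the same face information,
        # then attach the identity information associated with that face information.
        sub_archives = {}
        for face_id in set(face_archives) | set(body_archives):
            merged = face_archives.get(face_id, []) + body_archives.get(face_id, [])
            identity = face_id_to_identity.get(face_id)
            if identity is not None:
                sub_archives[identity] = merged
        return sub_archives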
In addition, it should be noted that the archive includes images and information related to the images.
Step 14: cluster the low-quality images based on vehicle identifiers to obtain a low-quality sub-archive corresponding to each vehicle identifier.
The low-quality sub-archive corresponding to any vehicle identifier comprises the low-quality images corresponding to that vehicle identifier and the related information of those low-quality images.
The order of step 13 and step 14 is not limited.
Step 15: label the low-quality sub-archive corresponding to the vehicle identifier with identity information.
For example, the low-quality pictures that do not meet the thresholds are grouped by vehicle identifier to form low-quality sub-archives. Information such as the number of appearances at each checkpoint, the frequency of the time periods of appearance, and the vehicle path in a low-quality sub-archive is then aggregated to obtain the low-quality trajectory corresponding to that low-quality sub-archive (that is, to that vehicle identifier), for example as sketched below.
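For illustration only, the low-quality trajectory of a sub-archive could be summarized as the time-ordered checkpoint sequence together with checkpoint and time-period frequencies (field names follow the assumed record sketched earlier):

    from collections import Counter

    def low_quality_trajectory(sub_archive):
        """Summarize a low-quality sub-archive (a list of image records) as a trajectory."""
        ordered = sorted(sub_archive, key=lambda rec: rec["capture_time"])
        checkpoints = [rec["checkpoint_id"] for rec in ordered]
        hours = [int(rec["capture_time"] // 3600) % 24 for rec in ordered]
        return {
            "sequence": checkpoints,                    # time-ordered checkpoint sequence
            "checkpoint_counts": Counter(checkpoints),  # how often each checkpoint appears
            "hour_counts": Counter(hours),              # frequency of appearance time periods
        }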
When labeling the low-quality sub-archive corresponding to a vehicle identifier with identity information, the identity information associated with the vehicle identifier may be searched for first, and the found identity information is then labeled on the low-quality sub-archive corresponding to the vehicle identifier.
For example, the identity information associated with the vehicle identifier is looked up in the static data corresponding to the vehicle identifier.
If one piece of identity information is found, the low-quality sub-archive corresponding to the vehicle identifier is labeled with that identity information.
The plurality of images are acquired within a period of time, and during that period the vehicle may have been bought and sold and its owner may have changed, so multiple pieces of identity information may be found.
In one example, if a plurality of pieces of identity information are found, one of them may be selected arbitrarily as the target identity information, or the more recent identity information may be selected as the target identity information, and the low-quality sub-archive corresponding to the vehicle identifier is labeled with the target identity information.
In another example, if a plurality of pieces of identity information are found, the low-quality trajectory corresponding to the vehicle identifier is compared with the high-quality trajectories of the high-quality sub-archives corresponding to those pieces of identity information, the target identity information with the highest trajectory similarity is determined, and the low-quality sub-archive corresponding to the vehicle identifier is labeled with the target identity information. Determining the identity information through similarity comparison improves the accuracy of the determination; a sketch is given below.
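A sketch of this labeling logic, assuming the helper functions low_quality_trajectory and trajectory_similarity illustrated elsewhere in this description, and assuming that static_data maps a vehicle identifier to its associated identities (all names are illustrative):

    def label_identity(vehicle_id, low_sub_archive, static_data, high_quality_archives):
        """Pick the identity to attach to the low-quality sub-archive of one vehicle identifier.

        static_data:           maps vehicle_id -> list of associated identities
        high_quality_archives: maps identity -> {"trajectory": ..., "records": [...]}
        """
        identities = static_data.get(vehicle_id, [])
        if not identities:
            return None                  # nothing associated with this vehicle identifier
        if len(identities) == 1:
            return identities[0]         # single identity: label it directly

        # Several identities (e.g. the vehicle changed owners): pick the identity whose
        # high-quality trajectory is most similar to the low-quality trajectory.
        low_traj = low_quality_trajectory(low_sub_archive)
        best_identity, best_score = None, -1.0
        for identity in identities:
            high_traj = high_quality_archives[identity]["trajectory"]
            score = trajectory_similarity(low_traj, high_traj)
            if score > best_score:
                best_identity, best_score = identity, score
        return best_identity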
Step 16: merge the low-quality sub-archive and the high-quality sub-archive that have the same identity information to obtain the archive corresponding to the identity information.
Although the low-quality images in the present application do not participate in portrait clustering (that is, clustering based on identity information), they are not discarded. The low-quality images are clustered according to vehicle identifiers to obtain low-quality sub-archives, and the low-quality sub-archives are labeled with identity information, so that the low-quality sub-archives obtained by vehicle-identifier clustering can be associated, through the identity information, with the high-quality sub-archives obtained by portrait clustering. In this way, the images that would otherwise be discarded by portrait clustering (namely the low-quality images) are reasonably archived; they do not participate in portrait clustering and therefore do not affect the centroid of the archive (a reference or representative image of the archive).
In an optional example, for any vehicle identifier, the low-quality trajectory corresponding to the vehicle identifier is determined according to the low-quality sub-archive corresponding to the vehicle identifier. The low-quality trajectory corresponding to the vehicle identifier is compared with the accurate trajectory corresponding to the vehicle identifier, and the low-quality sub-archive corresponding to the vehicle identifier is labeled with the identity information only when the similarity is greater than or equal to a set threshold.
The accurate trajectory corresponding to the vehicle identifier may be stored in advance in the static data. For example, capture devices or video devices dedicated to determining vehicle driving trajectories are deployed; the images they produce are clear, the accurate trajectory corresponding to the vehicle identifier is determined based on those images, and the accurate trajectory is stored in the static data corresponding to the vehicle identifier. Such a device dedicated to determining the vehicle driving trajectory is different from the image acquisition device of step 11 of the present application, and the images dedicated to determining the vehicle driving trajectory come from a different source than the images acquired in step 11.
Alternatively, the present application may cluster the high-quality images obtained in step 12 based on vehicle identifiers to obtain a high-quality sub-archive corresponding to each vehicle identifier, where the high-quality sub-archive corresponding to any vehicle identifier comprises the high-quality images corresponding to that vehicle identifier and their related information; a high-quality trajectory corresponding to the vehicle identifier is then determined according to the high-quality sub-archive corresponding to the vehicle identifier, and that high-quality trajectory is taken as the accurate trajectory.
The similarity between the low-quality trajectory corresponding to the low-quality sub-archive and the accurate trajectory corresponding to the vehicle identifier is compared, and the low-quality sub-archive is kept and labeled with identity information only when the similarity meets the condition, which improves the accuracy of the low-quality sub-archive.
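The similarity measure itself is not limited by this application. As one assumed illustration, checkpoint sequences could be compared with a standard ratio from Python's difflib:

    from difflib import SequenceMatcher

    def trajectory_similarity(traj_a, traj_b) -> float:
        """Return a similarity in [0, 1] between two trajectories.

        Each trajectory is expected to provide a "sequence" of checkpoint identifiers
        (see the low_quality_trajectory sketch above). Other measures, for example
        based on checkpoint frequencies or time periods, could be used instead.
        """
        return SequenceMatcher(None, traj_a["sequence"], traj_b["sequence"]).ratio()

    # Two trajectories that pass mostly the same checkpoints score close to 1.
    a = {"sequence": ["gate_1", "gate_5", "gate_9"]}
    b = {"sequence": ["gate_1", "gate_5", "gate_9", "gate_2"]}
    print(trajectory_similarity(a, b))   # ~0.86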
In an optional example, before the low-quality trajectory corresponding to the vehicle identifier is determined according to the low-quality sub-archive corresponding to the vehicle identifier, an overlapping vehicle identifier that corresponds to both a low-quality sub-archive and a high-quality image may be determined, and the low-quality sub-archive corresponding to the overlapping vehicle identifier is filtered out from the low-quality sub-archives.
Alternatively, before the low-quality trajectory corresponding to the vehicle identifier is compared with the accurate trajectory corresponding to the vehicle identifier, an overlapping vehicle identifier that corresponds to both a low-quality sub-archive and a high-quality image is determined, and the low-quality trajectory corresponding to the overlapping vehicle identifier is filtered out from the low-quality trajectories.
For example, suppose clustering produces low-quality sub-archives corresponding to 4 vehicle identifiers: vehicle identifier 1, vehicle identifier 2, vehicle identifier 3, and vehicle identifier 4, while 4 vehicle identifiers are found in the high-quality images: vehicle identifier 3, vehicle identifier 4, vehicle identifier 5, and vehicle identifier 6. Vehicle identifier 3 and vehicle identifier 4 appear in both the high-quality images and the low-quality images and are therefore overlapping vehicle identifiers, so the low-quality sub-archives (or low-quality trajectories) corresponding to vehicle identifiers 3 and 4 are filtered out and do not participate in the subsequent trajectory comparison.
In addition, to determine the vehicle identifiers corresponding to the high-quality images, the high-quality images can be clustered based on vehicle identifiers to obtain a high-quality sub-archive corresponding to each vehicle identifier, thereby obtaining the vehicle identifiers corresponding to the high-quality images.
As shown in fig. 2, the process related to step 11, acquiring a plurality of images, is introduced below.
In step 201, data (for example, pictures, videos, and audio) collected by various types of capture devices is transmitted over the network by front-end sensing devices to a pre-positioned storage, where it is staged for later use.
In step 202, the data collected by the capture devices is classified, for example into pictures, videos, audio, and the like. The pictures collected by a capture device can be used directly as the images adopted in this application, and videos can also be analyzed to generate such images; for example, when a video is selected, several frames are extracted from it as the images to be used in this application. For data collected by a device that only captures faces, pictures containing a face are selected; for data collected by a structured capture device, pictures containing a face and pictures containing a human body can both be selected. A selected picture may also contain a license plate.
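For example, frame extraction from a video could be sketched as follows with OpenCV (the sampling step of every 25th frame is an assumption):

    import cv2

    def sample_frames(video_path: str, every_n: int = 25):
        """Read a video and keep every n-th frame as a candidate picture."""
        frames = []
        cap = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % every_n == 0:
                frames.append(frame)
            index += 1
        cap.release()
        return frames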
It is understood that spatio-temporal information, scene information, etc. may also be included in the data.
In step 203, the selected pictures are transmitted through a message queue to analysis operators, such as a face operator and a human body operator. After the different operators complete their analysis, corresponding attribute information (e.g., sharpness, coordinates) is generated and added to the message body corresponding to the picture.
Attribute information, spatio-temporal information, scene information, etc. are all relevant information of the picture.
Optionally, in step 204, the data of each picture (which may be understood as a feature vector of the image, where the feature vector represents the image and, for convenience of description, is simply referred to as the image, together with the related information of the image) may be sent to different message queues, for example the message queues corresponding to a face stream, a human body stream, and a vehicle-passing stream. Alternatively, the face stream, the human body stream, and the vehicle-passing stream may be sent to the same message queue without distinction. It is understood that the pictures in the face stream, the human body stream, and the vehicle-passing stream may overlap; for example, if a picture contains both a face and a human body, the data of the picture may be sent to either the message queue of the face stream or the message queue of the human body stream, and if a picture contains both a face and a vehicle identifier (e.g., a license plate number), the data of the picture may be sent to either the message queue of the face stream or the message queue of the vehicle-passing stream.
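As one possible sketch of step 204 (the stream names and the use of in-process queues are assumptions; a deployed system would typically use a message broker instead):

    import queue

    # One in-process queue per stream; a real deployment would use a message broker.
    streams = {
        "face": queue.Queue(),
        "body": queue.Queue(),
        "vehicle": queue.Queue(),
    }

    def route_picture(picture: dict):
        """Send one picture's data (feature vector plus related information) to a stream.

        A picture containing both a face and a human body (or a plate) only needs
        to be sent to one of the matching streams.
        """
        if picture.get("face_feature") is not None:
            streams["face"].put(picture)
        elif picture.get("body_feature") is not None:
            streams["body"].put(picture)
        elif picture.get("plate_number") is not None:
            streams["vehicle"].put(picture)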
Next, the process related to step 12, dividing the plurality of images into first (low) quality images and second (high) quality images, is described.
Each image is judged against the thresholds corresponding to the different items of related information (such as quality score, sharpness, pitch angle, picture width, and confidence) so as to distinguish low-quality images from high-quality images. The low-quality images and the high-quality images may then be sent to different message queues.
Optionally, high quality and low quality may be distinguished separately for each type of data stream (e.g., the face stream, the human body stream, and the vehicle-passing stream), with the low-quality images and high-quality images of each type sent to different message queues, giving 6 message queues. It is also possible not to distinguish the stream types, giving 2 message queues.
As shown in fig. 3, the process related to step 13, clustering the high-quality images based on identity information to obtain a high-quality sub-archive corresponding to each piece of identity information, is introduced below.
Step 301: obtain the high-quality face data stream and perform face clustering on the consumed data over a certain time period to generate high-quality face sub-archives. For example, the data of the high-quality face data stream for a certain time period (generally, the longer the face aggregation period, the better the effect) is read from the message queue, and a clustering algorithm is used to generate the high-quality face sub-archives.
Step 302: obtain the high-quality human body data stream and perform human body clustering on the consumed data over a certain time period to generate high-quality human body sub-archives. For example, the data of the high-quality human body data stream for a certain time period (generally, the longer the aggregation period, the better the effect) is read from the message queue, and a clustering algorithm is used to generate the high-quality human body sub-archive data. If a high-quality human body sub-archive contains face information, the unique identifier of that sub-archive is represented by the face identifier (face id).
Step 303: merge the high-quality face sub-archives and the high-quality human body sub-archives by using the relationship between faces and human bodies. For example, along the chain from video stream to picture stream and then to the different analysis operators, associated face data and human body data may be generated (for example, in the same scene image, a face may be present in an analyzed human body picture); face sub-archives and human body sub-archives that have such an association are merged by comparing the unique identifiers of the sub-archives, so as to obtain the high-quality sub-archives. The identity information corresponding to a face can also be looked up, and real-name authentication can be performed on the high-quality sub-archive.
Optionally, the high-quality sub-archives containing a vehicle identifier (e.g., a license plate number) may also be identified, and the high-quality trajectory of each such high-quality sub-archive may be generated at the same time.
As shown in fig. 4, the related processes of step 15 and step 16 will be described in detail.
Step 401: obtain the low-quality sub-archives and the high-quality sub-archives that contain license plate information.
Step 402: through an association query, obtain the data in the low-quality sub-archives that does not exist in the high-quality sub-archives. For example, the low-quality archive data whose license plate does not appear in any medium-high-quality archive is obtained through an SQL association query. That is, the overlapping license plate numbers that correspond to both a low-quality sub-archive and a high-quality sub-archive are determined, and the low-quality sub-archives corresponding to the overlapping license plate numbers are filtered out from the low-quality sub-archives.
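By way of illustration, such an association query could be expressed in SQL. The sketch below runs it against an in-memory SQLite database with assumed table and column names:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE low_quality_archive  (plate TEXT, image_id TEXT);
        CREATE TABLE high_quality_archive (plate TEXT, image_id TEXT);
        INSERT INTO low_quality_archive  VALUES ('A111', 'img1'), ('B222', 'img2');
        INSERT INTO high_quality_archive VALUES ('B222', 'img3'), ('C333', 'img4');
    """)

    # Keep only low-quality archive data whose plate does not appear in any
    # high-quality archive, i.e. filter out the overlapping license plate numbers.
    rows = conn.execute("""
        SELECT l.plate, l.image_id
        FROM low_quality_archive AS l
        LEFT JOIN high_quality_archive AS h ON l.plate = h.plate
        WHERE h.plate IS NULL
    """).fetchall()

    print(rows)   # [('A111', 'img1')]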
Step 403: using the static data of the vehicle, perform an association query on the license plate number to obtain the low-quality sub-archives that have matching static data. For example, the static data of a vehicle includes the identity information corresponding to the vehicle, the trajectory data of the vehicle, and the like. The identity information corresponding to a license plate number is obtained through an association query between the license plate number of the low-quality sub-archive and the static data corresponding to the vehicle.
If only one piece of identity information is found, the similarity between the low-quality trajectory of the low-quality sub-archive and the accurate trajectory of the vehicle is compared. If the similarity threshold is met, the corresponding identity is labeled on the low-quality sub-archive; if not, the process ends.
If a plurality of pieces of identity information are found, the similarity between the low-quality trajectory of the low-quality sub-archive and the accurate trajectory of the vehicle is compared. If the similarity threshold is met, the high-quality trajectories corresponding to the high-quality sub-archives of those pieces of identity information are looked up, the similarity between the low-quality trajectory and each high-quality trajectory is compared, the identity with the highest similarity is selected from the pieces of identity information, and that identity is labeled on the low-quality sub-archive; if not, the process ends.
Step 404: merge the named low-quality sub-archives and the named high-quality sub-archives. For example, a low-quality sub-archive and a high-quality sub-archive with the same identity are merged. Only the sub-archive data is merged; the clustering algorithm is not affected, which avoids low-quality images polluting the archive centroid and influencing subsequent clustering.
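A minimal sketch of step 404 under the same assumptions as the earlier sketches: merging only concatenates archive data for the same identity and never feeds the low-quality records back into clustering.

    def merge_archives(high_quality_archives, labeled_low_quality_archives):
        """Merge sub-archives that carry the same identity.

        high_quality_archives:         identity -> list of high-quality records
        labeled_low_quality_archives:  identity -> list of low-quality records

        Only archive contents are merged; the clustering step never sees the
        low-quality records, so the archive centroid is unaffected.
        """
        merged = {identity: list(records)
                  for identity, records in high_quality_archives.items()}
        for identity, records in labeled_low_quality_archives.items():
            merged.setdefault(identity, []).extend(records)
        return merged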
In summary, the license plate numbers present in low-quality face/human body images are used to cluster the low-quality pictures by license plate number; the low-quality sub-archives formed by this clustering are compared with the static data of the vehicle (such as the accurate trajectory) and associated with identity information, and are then merged with the high-quality sub-archives formed by portrait clustering into a single, more complete archive. At the same time, it is ensured that the clustering algorithm does not use the low-quality pictures, so pollution of the archive centroid computation is avoided. This scheme can effectively reduce the number of lost archive trajectories and increase the archive aggregation rate, which is very beneficial to subsequent intelligent applications.
As shown in fig. 5, there is provided an image processing apparatus including:
an obtaining module 501, configured to obtain multiple images;
a processing module 502, configured to divide the multiple images into a first quality image and a second quality image, where the second quality is better than the first quality; cluster the second quality images based on identity information to obtain a second quality sub-archive corresponding to each piece of identity information; cluster the first quality images based on vehicle identifiers to obtain a first quality sub-archive corresponding to each vehicle identifier; and label the first quality sub-archive corresponding to the vehicle identifier with identity information;
a merging module 503, configured to merge the first quality sub-archive and the second quality sub-archive that have the same identity information to obtain an archive corresponding to the identity information.
In one example, the obtaining module is further configured to obtain related information of an image, where the related information of the image includes one or more of the following: a checkpoint identifier, time information, longitude and latitude information, scene information, coordinate information, sharpness, a pitch angle, a width, and a confidence.
In one example, the processing module 502, when configured to divide the plurality of images into a first quality image and a second quality image, is specifically configured to: divide the plurality of images into a first quality image and a second quality image based on part of the related information of the images and a threshold corresponding to that part of the related information; the part of the related information includes one or more of: sharpness, pitch angle, width, and confidence.
In one example, the processing module 502 is further configured to: for any vehicle identifier, determine a first quality trajectory corresponding to the vehicle identifier according to the first quality sub-archive corresponding to the vehicle identifier; and compare the first quality trajectory corresponding to the vehicle identifier with an accurate trajectory corresponding to the vehicle identifier, and determine that the similarity is greater than or equal to a set threshold.
In one example, when clustering the second quality images based on identity information to obtain a second quality sub-archive corresponding to each piece of identity information, the processing module 502 is specifically configured to: cluster the second quality images based on face information to obtain a second quality face sub-archive corresponding to each piece of face information; cluster the second quality images based on human body information to obtain a second quality human body sub-archive corresponding to each piece of human body information, and label the second quality human body sub-archive with face information; merge the second quality face sub-archive and the second quality human body sub-archive that have the same face information to obtain a second quality sub-archive corresponding to the face information; and search for identity information associated with the face information, and associate the identity information with the second quality sub-archive corresponding to the face information to obtain a second quality sub-archive corresponding to each piece of identity information.
In one example, the apparatus further comprises: a filtering module 504, configured to determine an overlapping vehicle identifier that corresponds to both a first quality sub-archive and a second quality image, and filter out the first quality sub-archive corresponding to the overlapping vehicle identifier from the first quality sub-archives.
In one example, the processing module 502, when configured to label the first quality sub-archive corresponding to the vehicle identifier with identity information, is specifically configured to:
query identity information associated with the vehicle identifier; if one piece of identity information is found, label the first quality sub-archive corresponding to the vehicle identifier with the found identity information; and if a plurality of pieces of identity information are found, compare the first quality trajectory corresponding to the vehicle identifier with second quality trajectories corresponding to the plurality of pieces of identity information respectively, determine the target identity information whose trajectory similarity is highest, and label the first quality sub-archive corresponding to the vehicle identifier with the target identity information, wherein a second quality trajectory is determined according to a second quality archive.
As shown in fig. 6, the present application provides an image processing apparatus including a processor 601 and a memory 602;
the memory 602 for storing computer programs or instructions;
the processor 601 is configured to execute part or all of the computer program or instructions in the memory, and when the part or all of the computer program or instructions are executed, the processor is configured to implement the above-described image processing method.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program; when the computer program is executed by a computer, the computer performs the above image processing method. Alternatively, the computer program comprises instructions for implementing the above image processing method.
An embodiment of the present application further provides a computer program product, including: computer program code which, when run on a computer, enables the computer to carry out the method of image processing provided above.
In addition, the processor mentioned in the embodiments of the present application may be a central processing unit (CPU) or a baseband processor; the baseband processor and the CPU may be integrated or separate. The processor may also be a network processor (NP) or a combination of a CPU and an NP, and may further include a hardware chip or another general-purpose processor. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The aforementioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any combination thereof. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory referred to in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synclink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory described herein is intended to include, without being limited to, these and any other suitable types of memory.
The transceiver mentioned in the embodiments of the present application may include a separate transmitter and/or a separate receiver, or may be an integrated transmitter and receiver. The transceivers may operate under the direction of a corresponding processor. Alternatively, the sender may correspond to a transmitter in the physical device, and the receiver may correspond to a receiver in the physical device.
Those of ordinary skill in the art will appreciate that the various method steps and elements described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both, and that the steps and elements of the various embodiments have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
"and/or" in the present application, describing an association relationship of associated objects, means that there may be three relationships, for example, a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The plural in the present application means two or more. In addition, it is to be understood that the terms first, second, etc. in the description of the present application are used for distinguishing between the descriptions and not necessarily for describing a sequential or chronological order.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to include such modifications and variations.

Claims (15)

1. An image processing method, characterized in that the method comprises:
acquiring a plurality of images;
dividing the plurality of images into a first quality image and a second quality image, wherein the second quality is superior to the first quality;
clustering the second quality images based on identity information to obtain a second quality sub-archive corresponding to each piece of identity information;
clustering the first quality images based on vehicle identifiers to obtain a first quality sub-archive corresponding to each vehicle identifier; labeling the first quality sub-archive corresponding to the vehicle identifier with identity information;
and merging the first quality sub-archive and the second quality sub-archive that have the same identity information to obtain an archive corresponding to the identity information.
2. The method of claim 1, wherein dividing the plurality of images into a first quality image and a second quality image comprises:
dividing the plurality of images into a first quality image and a second quality image based on part of the related information of the images and a threshold corresponding to that part of the related information; the part of the related information includes one or more of: sharpness, pitch angle, width, and confidence.
3. The method of claim 1, wherein before labeling the first quality sub-archive corresponding to the vehicle identifier with identity information, the method further comprises:
for any vehicle identifier, determining a first quality trajectory corresponding to the vehicle identifier according to the first quality sub-archive corresponding to the vehicle identifier; and comparing the first quality trajectory corresponding to the vehicle identifier with an accurate trajectory corresponding to the vehicle identifier, and determining that the similarity is greater than or equal to a set threshold.
4. The method of claim 1, wherein clustering the second quality images based on identity information to obtain a second quality sub-file corresponding to each piece of identity information comprises:
clustering the second quality images based on face information to obtain a second quality face sub-file corresponding to each piece of face information;
clustering the second quality images based on human body information to obtain a second quality human body sub-file corresponding to each piece of human body information, and labeling the second quality human body sub-file with the face information;
merging the second quality face sub-file and the second quality human body sub-file that have the same face information to obtain a second quality sub-file corresponding to the face information;
and searching for identity information associated with the face information, and associating the identity information with the second quality sub-file corresponding to the face information to obtain a second quality sub-file corresponding to each piece of identity information.
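A sketch of the merge in claim 4, assuming the face and human body clusters are kept as dictionaries keyed by face information and that identity_db is a hypothetical lookup from face information to identity information.

def merge_face_and_body(face_sub_files, body_sub_files, identity_db):
    # face_sub_files / body_sub_files: dict mapping face_id -> list of images,
    # where the body sub-files have already been labeled with face information.
    merged = {}
    for face_id, face_imgs in face_sub_files.items():
        merged[face_id] = face_imgs + body_sub_files.get(face_id, [])
    # Attach the identity information associated with each piece of face
    # information to obtain second quality sub-files keyed by identity.
    return {identity_db[face_id]: imgs
            for face_id, imgs in merged.items() if face_id in identity_db}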
5. The method of claim 1, further comprising, before determining the first quality trajectory corresponding to the vehicle identification according to the first quality sub-file corresponding to the vehicle identification:
determining an overlapping vehicle identification to which both the first quality sub-files and the second quality images correspond, and filtering out, from the first quality sub-files, the first quality sub-file corresponding to the overlapping vehicle identification.
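A sketch of the filtering in claim 5, assuming the vehicle identification is a plate string stored on each image; any identification that also appears among the second quality images is treated as overlapping, and its first quality sub-file is dropped before the trajectory comparison.

def filter_overlapping(first_quality_sub_files, second_quality_images):
    # first_quality_sub_files: dict mapping plate -> list of first quality images
    second_quality_plates = {im["plate"] for im in second_quality_images
                             if im.get("plate")}
    return {plate: imgs for plate, imgs in first_quality_sub_files.items()
            if plate not in second_quality_plates}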
6. The method of claim 1, wherein marking identity information for the first quality sub-file corresponding to the vehicle identification comprises:
querying identity information associated with the vehicle identification;
if one piece of identity information is found, marking the found identity information for the first quality sub-file corresponding to the vehicle identification;
if a plurality of pieces of identity information are found, comparing the first quality trajectory corresponding to the vehicle identification with second quality trajectories corresponding to the plurality of pieces of identity information respectively, determining target identity information corresponding to the highest trajectory similarity, and marking the target identity information for the first quality sub-file corresponding to the vehicle identification, wherein the second quality trajectories are determined according to second quality files.
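A sketch of the labeling logic in claim 6, where query_identities stands in for the archive lookup and the similarity helper mirrors the trajectory comparison above; both are assumptions, since the claim does not prescribe how the query or the comparison is implemented.

def _similarity(track_a, track_b):
    a, b = set(track_a), set(track_b)
    return len(a & b) / len(a | b) if a and b else 0.0

def label_identity(vehicle_id, first_quality_track, second_quality_tracks,
                   query_identities):
    # second_quality_tracks: dict mapping identity -> trajectory derived from
    # the corresponding second quality file.
    identities = query_identities(vehicle_id)
    if len(identities) == 1:
        return identities[0]
    # Several candidate identities: choose the one whose second quality
    # trajectory is most similar to the first quality trajectory.
    return max(identities,
               key=lambda ident: _similarity(first_quality_track,
                                             second_quality_tracks.get(ident, [])))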
7. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a plurality of images;
a processing module, configured to divide the plurality of images into first quality images and second quality images, wherein the second quality is higher than the first quality; cluster the second quality images based on identity information to obtain a second quality sub-file corresponding to each piece of identity information; cluster the first quality images based on vehicle identifications to obtain a first quality sub-file corresponding to each vehicle identification; and mark identity information for the first quality sub-file corresponding to the vehicle identification;
and a merging module, configured to merge the first quality sub-file and the second quality sub-file that have the same identity information to obtain a file corresponding to the identity information.
8. The apparatus of claim 7, wherein the processing module, when dividing the plurality of images into first quality images and second quality images, is specifically configured to:
divide the plurality of images into first quality images and second quality images based on related information of the images and thresholds corresponding to the related information, wherein the related information includes one or more of: quality score, sharpness, pitch angle, width, and confidence.
9. The apparatus of claim 7, wherein the processing module is further configured to: for any vehicle identification, determine a first quality trajectory corresponding to the vehicle identification according to the first quality sub-file corresponding to the vehicle identification; and compare the first quality trajectory corresponding to the vehicle identification with an accurate trajectory corresponding to the vehicle identification, and determine that the similarity is greater than or equal to a set threshold.
10. The apparatus of claim 7, wherein the processing module, when clustering the second quality images based on identity information to obtain a second quality sub-file corresponding to each piece of identity information, is specifically configured to:
cluster the second quality images based on face information to obtain a second quality face sub-file corresponding to each piece of face information; cluster the second quality images based on human body information to obtain a second quality human body sub-file corresponding to each piece of human body information, and label the second quality human body sub-file with the face information; merge the second quality face sub-file and the second quality human body sub-file that have the same face information to obtain a second quality sub-file corresponding to the face information; and search for identity information associated with the face information, and associate the identity information with the second quality sub-file corresponding to the face information to obtain a second quality sub-file corresponding to each piece of identity information.
11. The apparatus of claim 7, further comprising:
and a filtering module, configured to determine an overlapping vehicle identification to which both the first quality sub-files and the second quality images correspond, and filter out, from the first quality sub-files, the first quality sub-file corresponding to the overlapping vehicle identification.
12. The apparatus of claim 7, wherein the processing module, when marking identity information for the first quality sub-file corresponding to the vehicle identification, is specifically configured to:
query identity information associated with the vehicle identification; if one piece of identity information is found, mark the found identity information for the first quality sub-file corresponding to the vehicle identification; if a plurality of pieces of identity information are found, compare the first quality trajectory corresponding to the vehicle identification with second quality trajectories corresponding to the plurality of pieces of identity information respectively, determine target identity information corresponding to the highest trajectory similarity, and mark the target identity information for the first quality sub-file corresponding to the vehicle identification, wherein the second quality trajectories are determined according to second quality files.
13. An image processing apparatus comprising a processor and a memory;
the memory being configured to store a computer program or instructions;
the processor being configured to execute part or all of the computer program or instructions in the memory, so as to implement the method of any one of claims 1 to 6 when the part or all of the computer program or instructions is executed.
14. A computer-readable storage medium storing a computer program, wherein the computer program comprises instructions for implementing the method of any one of claims 1 to 6.
15. A computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 6.
CN202111345656.3A 2021-11-15 2021-11-15 Image processing method and device Pending CN114155576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111345656.3A CN114155576A (en) 2021-11-15 2021-11-15 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111345656.3A CN114155576A (en) 2021-11-15 2021-11-15 Image processing method and device

Publications (1)

Publication Number Publication Date
CN114155576A (en) 2022-03-08

Family

ID=80460003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111345656.3A Pending CN114155576A (en) 2021-11-15 2021-11-15 Image processing method and device

Country Status (1)

Country Link
CN (1) CN114155576A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333041A (en) * 2022-03-10 2022-04-12 浙江大华技术股份有限公司 Image processing method, device, equipment and medium
CN117975071A (en) * 2024-03-28 2024-05-03 浙江大华技术股份有限公司 Image clustering method, computer device and storage medium

Similar Documents

Publication Publication Date Title
CN110795595B (en) Video structured storage method, device, equipment and medium based on edge calculation
CN109783685B (en) Query method and device
CN117095349A (en) Appearance search system, method, and non-transitory computer readable medium
TWI740537B (en) Information processing method, device and storage medium thereof
CN114155576A (en) Image processing method and device
CN109740003B (en) Filing method and device
CN105320710B (en) The vehicle retrieval method and device of resisting illumination variation
WO2023197232A1 (en) Target tracking method and apparatus, electronic device, and computer readable medium
CN109800329B (en) Monitoring method and device
CN114357216A (en) Portrait gathering method and device, electronic equipment and storage medium
CN109784220B (en) Method and device for determining passerby track
CN111898485A (en) Parking space vehicle detection processing method and device
CN106777350B (en) Method and device for searching pictures with pictures based on bayonet data
CN114724131A (en) Vehicle tracking method and device, electronic equipment and storage medium
CN110472561B (en) Football goal type identification method, device, system and storage medium
CN109783663B (en) Archiving method and device
CN111091041A (en) Vehicle law violation judging method and device, computer equipment and storage medium
CN108040244B (en) Snapshot method and device based on light field video stream and storage medium
CN110457998B (en) Image data association method and apparatus, data processing apparatus, and medium
US20160358462A1 (en) Method and system for vehicle data integration
CN112597924B (en) Electric bicycle track tracking method, camera device and server
CN105320704B (en) Trans-regional similar vehicle search method and device
CN109815369B (en) Filing method and device
CN111597979B (en) Target object clustering method and device
CN113761263A (en) Similarity determination method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination