CN115880754A - Multi-archive merging method and apparatus, and electronic device - Google Patents

Multi-archive merging method and apparatus, and electronic device

Info

Publication number
CN115880754A
Authority
CN
China
Prior art keywords
portrait
track
sequence
file
track sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211561496.0A
Other languages
Chinese (zh)
Inventor
刘备
江中毅
张宏
陈立力
周明伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211561496.0A
Publication of CN115880754A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to the field of image processing, and in particular to a multi-archive merging method and apparatus and an electronic device, used to solve the problem that one person corresponds to multiple archives in existing portrait archives. The method comprises: obtaining M portrait track sequences generated based on a first portrait archive; calculating, for each portrait track sequence, a track distance value against its reference track sequence to obtain a plurality of track distance values; selecting, from these, the track distance values that satisfy a preset condition, and determining the vehicle identifier of the reference track sequence corresponding to each selected value; if another portrait archive, such as a second portrait archive, is associated with one of those vehicle identifiers, calculating an archive similarity value between the first portrait archive and the second portrait archive; and merging the first portrait archive and the second portrait archive in response to the archive similarity value being greater than a preset threshold. On this basis, multiple archives can be merged, optimizing the portrait archives.

Description

Multi-archive merging method and apparatus, and electronic device
Technical Field
The present application relates to the field of image processing technologies, and in particular to a multi-archive merging method and apparatus, and an electronic device.
Background
To meet the growing demands of public-security construction in China, intelligent monitoring equipment (hereinafter referred to as checkpoint cameras) now covers every intersection of a city. Checkpoint cameras capture massive amounts of window face image data, which is clustered by face-image feature value into portrait archives that assist related security services.
However, environmental factors such as weather, occlusion, and window glare degrade the clarity of window face images, so clustering on image similarity alone can scatter multiple snapshots of the same target object across multiple portrait archives.
Disclosure of Invention
The application provides a multi-archive merging method, apparatus, and electronic device for reducing the multi-archive rate of portrait archives.
In a first aspect, the present application provides a multi-archive merging method, the method comprising:
acquiring M portrait track sequences generated based on a first portrait archive, wherein M is an integer greater than 0;
calculating, for the M portrait track sequences, a track distance value between each portrait track sequence and its reference track sequence, to obtain a plurality of track distance values, wherein the reference track sequence is the vehicle track sequence whose acquisition period matches that of the corresponding portrait track sequence;
selecting, from the plurality of track distance values, the track distance values that satisfy a preset condition, and determining the vehicle identifier of the reference track sequence corresponding to each selected track distance value;
if the vehicle identifier is associated with a second portrait archive, calculating an archive similarity value between the first portrait archive and the second portrait archive;
and, in response to the archive similarity value being greater than a preset threshold, merging the first portrait archive and the second portrait archive.
In a possible embodiment, the acquiring M portrait track sequences generated based on a first portrait archive includes:
acquiring a first portrait archive of a first target object, wherein the first portrait archive includes at least a plurality of window face images of the first target object;
sorting the plurality of window face images in the first portrait archive by image acquisition time to obtain a first portrait track sequence;
in the first portrait track sequence, deduplicating window face images consecutively located at the same spatial position to obtain a second portrait track sequence;
and dividing the window face images in the second portrait track sequence into M preset slicing periods to obtain the M portrait track sequences.
In a possible implementation, the calculating a track distance value between each portrait track sequence and its reference track sequence includes:
for a single portrait track sequence of the M portrait track sequences, performing the following operations:
determining, from at least one vehicle track sequence, a reference track sequence whose acquisition period matches that of the single portrait track sequence;
in response to the sequence length of the reference track sequence being greater than that of the single portrait track sequence, computing, for the m elements in the single portrait track sequence, the track distance values obtained by multiplying the spatial position of each element in turn by the spatial position of a single element among the n elements in the reference track sequence and summing, yielding n−m+1 track distance values, wherein m and n are integers greater than 0 and m is less than n;
and selecting the minimum of the n−m+1 track distance values as the track distance value between the single portrait track sequence and the reference track sequence.
In a possible embodiment, the computing, for the m elements in the single portrait track sequence, the track distance values obtained by multiplying the spatial position of each element in turn by the spatial position of a single element among the n elements in the reference track sequence and summing, yielding n−m+1 track distance values, comprises:
for a single track distance value of the n−m+1 track distance values, calculating in the following way:
determining a first row matrix formed by horizontally arranging the spatial positions of the m elements in the single portrait track sequence;
determining a first column matrix formed by vertically arranging the spatial positions of the n elements in the single reference track sequence;
determining second column matrices, each formed by selecting m consecutive elements in turn from the first column matrix, the number of second column matrices being n−m+1;
and calculating the product of the first row matrix and a second column matrix as the single track distance value.
In a possible implementation, the calculating a track distance value between each portrait track sequence and its reference track sequence includes:
for a single portrait track sequence of the M portrait track sequences, performing the following operations:
determining, from at least one vehicle track sequence, a reference track sequence whose acquisition period matches that of the single portrait track sequence;
in response to the sequence length of the reference track sequence being smaller than that of the single portrait track sequence, padding the reference track sequence with the head element or tail element of its n elements, wherein n is an integer greater than 0;
and, once the sequence length of the reference track sequence is equal to or greater than that of the single portrait track sequence, calculating the track distance value between the single portrait track sequence and the reference track sequence.
In a possible implementation, the calculating a track distance value between each portrait track sequence and its reference track sequence includes:
for a single portrait track sequence of the M portrait track sequences, performing the following operations:
determining, from at least one vehicle track sequence, a reference track sequence whose acquisition period matches that of the single portrait track sequence;
in response to the sequence length of the reference track sequence being equal to that of the single portrait track sequence, multiplying the m elements of the single portrait track sequence in order by the m elements of the reference track sequence to obtain m products;
and calculating the sum of the m products as the track distance value between the single portrait track sequence and the reference track sequence.
In a possible implementation, the calculating a track distance value between each portrait track sequence and its reference track sequence includes:
for a single portrait track sequence of the M portrait track sequences, performing the following operations:
if no vehicle track sequence whose acquisition period matches that of the single portrait track sequence exists among the at least one vehicle track sequence, determining that the single portrait track sequence has no track distance value, or taking a specified value as the track distance value of the single portrait track sequence, wherein the specified value includes at least positive infinity.
In a possible implementation, the selecting, from the plurality of track distance values, the track distance values that satisfy a preset condition includes:
sorting the plurality of track distance values in ascending order to obtain their ranking;
and selecting several track distance values from front to back according to the ranking.
In one possible implementation, after determining the vehicle identifier corresponding to each of the selected track distance values, the method further includes:
associating the first portrait archive with the vehicle identifier.
In a possible embodiment, before calculating the archive similarity value between the first portrait archive and the second portrait archive if the vehicle identifier is associated with the second portrait archive, the method further includes:
if the vehicle identifier is associated with a plurality of portrait archives, calculating pairwise archive similarity values between the first portrait archive and the plurality of portrait archives, to obtain a plurality of archive similarity values;
and screening, from the plurality of archive similarity values, those greater than a preset threshold as target similarity values, and merging the portrait archives corresponding to each target similarity value.
In one possible embodiment, the calculating an archive similarity value between the first portrait archive and the second portrait archive comprises:
calculating a first archive feature value of the first portrait archive based on the image feature value of each element in the first portrait archive;
calculating a second archive feature value of the second portrait archive based on the image feature value of each element in the second portrait archive;
and calculating the similarity value between the first archive feature value and the second archive feature value as the archive similarity value between the first portrait archive and the second portrait archive.
In one possible embodiment, the merging the first portrait archive and the second portrait archive comprises:
mapping the first portrait archive to the second portrait archive to obtain a first mapping relation, and storing the first mapping relation in the second portrait archive; or
mapping the second portrait archive to the first portrait archive to obtain a second mapping relation, and storing the second mapping relation in the first portrait archive.
In summary:
the application provides a multi-archive merging method for reducing the multi-archive rate of portrait archives. The method comprises: obtaining M portrait track sequences generated based on a first portrait archive; calculating, for each portrait track sequence, the track distance value against its reference track sequence to obtain a plurality of track distance values; selecting the track distance values that satisfy a preset condition and determining the vehicle identifier of the reference track sequence corresponding to each selected value; if another portrait archive, such as a second portrait archive, is associated with the vehicle identifier, calculating an archive similarity value between the first portrait archive and the second portrait archive; and merging the first portrait archive and the second portrait archive in response to the archive similarity value being greater than a preset threshold.
Here M is an integer greater than 0, and the reference track sequence is the vehicle track sequence whose acquisition period matches that of the corresponding portrait track sequence.
In the embodiments of the application, introducing the reference track sequence, i.e., the vehicle track sequence matching the acquisition period of the corresponding portrait track sequence, enables multi-archive merging based on both the portrait track sequence and the reference track sequence, which alleviates the one-person-multiple-archives problem in portrait archives and improves the accuracy of multi-archive merging.
In a second aspect, the present application provides a multi-archive merging apparatus, comprising:
an acquisition module, configured to acquire M portrait track sequences generated based on a first portrait archive, wherein M is an integer greater than 0;
a first calculation module, configured to calculate, for the M portrait track sequences, a track distance value between each portrait track sequence and its reference track sequence, to obtain a plurality of track distance values, wherein the reference track sequence is the vehicle track sequence whose acquisition period matches that of the corresponding portrait track sequence;
a determining module, configured to select, from the plurality of track distance values, the track distance values that satisfy a preset condition, and to determine the vehicle identifier of the reference track sequence corresponding to each selected track distance value;
a second calculation module, configured to calculate an archive similarity value between the first portrait archive and a second portrait archive if the vehicle identifier is associated with the second portrait archive;
and a merging module, configured to merge the first portrait archive and the second portrait archive in response to the archive similarity value being greater than a preset threshold.
In a possible implementation, the acquisition module is specifically configured to:
acquire a first portrait archive of a first target object, wherein the first portrait archive includes at least a plurality of window face images of the first target object;
sort the plurality of window face images in the first portrait archive by image acquisition time to obtain a first portrait track sequence;
in the first portrait track sequence, deduplicate window face images consecutively located at the same spatial position to obtain a second portrait track sequence;
and divide the window face images in the second portrait track sequence into M preset slicing periods to obtain the M portrait track sequences.
In a possible implementation, for the calculating a track distance value between each portrait track sequence and its reference track sequence, the first calculation module is specifically configured to:
for a single portrait track sequence of the M portrait track sequences, perform the following operations:
determine, from at least one vehicle track sequence, a reference track sequence whose acquisition period matches that of the single portrait track sequence;
in response to the sequence length of the reference track sequence being greater than that of the single portrait track sequence, compute, for the m elements in the single portrait track sequence, the track distance values obtained by multiplying the spatial position of each element in turn by the spatial position of a single element among the n elements in the reference track sequence and summing, yielding n−m+1 track distance values, wherein m and n are integers greater than 0 and m is less than n;
and select the minimum of the n−m+1 track distance values as the track distance value between the single portrait track sequence and the reference track sequence.
In a possible implementation, for the computing of the n−m+1 track distance values, the first calculation module is further configured to:
for a single track distance value of the n−m+1 track distance values, calculate in the following manner:
determine a first row matrix formed by horizontally arranging the spatial positions of the m elements in the single portrait track sequence;
determine a first column matrix formed by vertically arranging the spatial positions of the n elements in the single reference track sequence;
determine second column matrices, each formed by selecting m consecutive elements in turn from the first column matrix, the number of second column matrices being n−m+1;
and calculate the product of the first row matrix and a second column matrix as the single track distance value.
In a possible implementation, for the calculating a track distance value between each portrait track sequence and its reference track sequence, the first calculation module is specifically configured to:
for a single portrait track sequence of the M portrait track sequences, perform the following operations:
determine, from at least one vehicle track sequence, a reference track sequence whose acquisition period matches that of the single portrait track sequence;
in response to the sequence length of the reference track sequence being smaller than that of the single portrait track sequence, pad the reference track sequence with the head element or tail element of its n elements, wherein n is an integer greater than 0;
and, once the sequence length of the reference track sequence is equal to or greater than that of the single portrait track sequence, calculate the track distance value between the single portrait track sequence and the reference track sequence.
In a possible implementation, for the calculating a track distance value between each portrait track sequence and its reference track sequence, the first calculation module is specifically configured to:
for a single portrait track sequence of the M portrait track sequences, perform the following operations:
determine, from at least one vehicle track sequence, a reference track sequence whose acquisition period matches that of the single portrait track sequence;
in response to the sequence length of the reference track sequence being equal to that of the single portrait track sequence, multiply the m elements of the single portrait track sequence in order by the m elements of the reference track sequence to obtain m products;
and calculate the sum of the m products as the track distance value between the single portrait track sequence and the reference track sequence.
In a possible implementation, for the calculating a track distance value between each portrait track sequence and its reference track sequence, the first calculation module is specifically configured to:
for a single portrait track sequence of the M portrait track sequences, perform the following operations:
if no vehicle track sequence whose acquisition period matches that of the single portrait track sequence exists among the at least one vehicle track sequence, determine that the single portrait track sequence has no track distance value, or take a specified value as the track distance value of the single portrait track sequence, wherein the specified value includes at least positive infinity.
In a possible implementation, for the selecting, from the plurality of track distance values, the track distance values that satisfy a preset condition, the determining module is specifically configured to:
sort the plurality of track distance values in ascending order to obtain their ranking;
and select several track distance values from front to back according to the ranking.
In a possible implementation, after determining the vehicle identifier corresponding to each of the selected track distance values, the determining module is further configured to:
associate the first portrait archive with the vehicle identifier.
In a possible implementation, before the calculating an archive similarity value between the first portrait archive and the second portrait archive if the vehicle identifier is associated with a second portrait archive, the second calculation module is further configured to:
if the vehicle identifier is associated with a plurality of portrait archives, calculate pairwise archive similarity values between the first portrait archive and the plurality of portrait archives, to obtain a plurality of archive similarity values;
and screen, from the plurality of archive similarity values, those greater than a preset threshold as target similarity values, and merge the portrait archives corresponding to each target similarity value.
In a possible implementation, the second calculation module is specifically configured to:
calculate a first archive feature value of the first portrait archive based on the image feature value of each element in the first portrait archive;
calculate a second archive feature value of the second portrait archive based on the image feature value of each element in the second portrait archive;
and calculate the similarity value between the first archive feature value and the second archive feature value as the archive similarity value between the first portrait archive and the second portrait archive.
In a possible implementation, the merging module is specifically configured to:
map the first portrait archive to the second portrait archive to obtain a first mapping relation, and store the first mapping relation in the second portrait archive; or
map the second portrait archive to the first portrait archive to obtain a second mapping relation, and store the second mapping relation in the first portrait archive.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing a computer program;
and a processor for implementing the above multi-archive merging method steps when executing the computer program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above multi-archive merging method steps.
For the second to fourth aspects and the possible technical effects of each aspect, please refer to the description above of the possible technical effects of the first aspect and its possible solutions; no repeated description is given here.
Drawings
Fig. 1 is a schematic diagram of a possible application scenario provided in the present application;
Fig. 2 is a flowchart of a multi-archive merging method provided by the present application;
Fig. 3 is a flowchart of a method for generating a portrait track sequence provided by the present application;
Fig. 4 is a schematic diagram of calculating an archive similarity value provided by the present application;
Fig. 5 is a schematic diagram of a multi-archive merging apparatus provided by the present application;
Fig. 6 is a schematic diagram of the structure of an electronic device provided by the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings.
First, some terms used in the embodiments of the present application are explained for ease of understanding by those skilled in the art.
(1) One person, multiple archives: multiple window face images of the same target object, captured by checkpoint cameras, are clustered into multiple portrait archives.
(2) Image feature value: window face image data captured by a capture device such as a front-end camera includes the camera number, GPS (Global Positioning System) coordinates, the captured image, the capture time, and so on; optionally, the data is stored in a back-end server. The back-end server parses the features of each dimension of the image with an analysis algorithm; the result is an X-dimensional vector, which is the image feature value of the image. The image feature value may also be stored in a database together with the other data.
(3) Archive mean centroid: the accumulated sum of the image feature values of all images in an archive divided by the number of images in the archive, i.e., the mean of the image feature values.
It should be noted that this scheme can be applied to one-person-multiple-archives merging in various application scenarios such as security management, intelligent monitoring, smart cities, and the like. The scheme is also applicable to tasks that require improved accuracy of multi-archive merging.
The execution body of this scheme may be a video monitor, a computing terminal, a remote server, or another computing device. By deploying the scheme on the relevant computing device and using the similarity between portrait track sequences and vehicle track sequences, the one-person-multiple-archives problem in portrait archives is alleviated and the accuracy of multi-archive merging is improved. Of course, the execution body is only exemplified here and is not particularly limited.
The following briefly introduces the design concept of the multi-archive merging method provided by the embodiments of the present application.
With the growing demands of security construction, more and more devices cluster the data they collect. For window face image data, for example, identical or similar faces are gathered into one class, forming a portrait archive.
In a smart-city scenario, intelligent monitoring equipment (hereinafter referred to as checkpoint cameras) covers every intersection of the city; a large amount of window face image data is captured by checkpoint cameras and then clustered into portrait archives. However, due to objective environmental factors such as weather, occlusion, and window glare, the clarity of window face images may be poor. If clustering relies only on the similarity of window face images, multiple window face images of a single person can be clustered into multiple portrait archives.
Therefore, in the related art, the multi-archive rate of portrait archives is high.
To reduce the multi-archive rate of portrait archives, the application provides a multi-archive merging method. The method comprises: obtaining M portrait track sequences generated based on a first portrait archive; calculating, for each portrait track sequence, the track distance value against its reference track sequence to obtain a plurality of track distance values; selecting the track distance values that satisfy a preset condition and determining the vehicle identifier of the reference track sequence corresponding to each selected value; if another portrait archive, such as a second portrait archive, is associated with the vehicle identifier, calculating an archive similarity value between the first portrait archive and the second portrait archive; and merging the first portrait archive and the second portrait archive in response to the archive similarity value being greater than a preset threshold.
Here M is an integer greater than 0, and the reference track sequence is the vehicle track sequence whose acquisition period matches that of the corresponding portrait track sequence.
In the embodiments of the application, introducing the reference track sequence, i.e., the vehicle track sequence matching the acquisition period of the corresponding portrait track sequence, enables multi-archive merging based on both the portrait track sequence and the reference track sequence, which alleviates the one-person-multiple-archives problem in portrait archives and improves the accuracy of multi-archive merging.
The multi-archive merging method provided by the embodiments of the present application can be applied in the implementation environment shown in fig. 1, which includes at least a camera node, an operation node, a management node, a computing node, and a storage node.
In fig. 1, the camera nodes include dome and bullet cameras and may be deployed on urban roads or at intersections to capture images or record videos. In the embodiments of the present application, a portrait track sequence is composed of window face images and a vehicle track sequence is composed of vehicle images, where the window face images or vehicle images include, but are not limited to: images captured by the camera nodes, images cropped from captured images, and frames extracted from recorded videos, each containing at least one object, such as a vehicle or a face behind a window.
In fig. 1, the operation node interacts with the user so that the user can deploy, configure, and manage the multi-archive merging task for portrait archives.
In fig. 1, the management node obtains images and videos from the camera nodes; for example, referring to fig. 1, the camera nodes upload images and videos to the cloud, and the management node obtains them from the cloud. The management node also manages the computing node and the storage node in connection with the multi-archive merging task. During management, the management node forwards the images and videos to the computing node.
In fig. 1, the computing node completes the computing work of the multi-archive merging task on the received images and videos, achieving computation acceleration.
In fig. 1, the storage node stores, under the management of the management node, the images captured and the videos recorded by the camera nodes as well as the mapping relations generated by the multi-archive merging process, to facilitate tracing.
It should be noted that the management node, the computing node, and the storage node may be different devices, or any two or all three of them may be integrated in the same device. The operation node is optional, and this scheme does not particularly limit it.
The following describes the multi-archive merging method provided by the embodiments of the present application. Referring to fig. 2, the method includes steps 21 to 25, as follows.
Step 21: acquiring M portrait track sequences generated based on a first portrait archive.
The first portrait archive identifies a first target object and includes at least a plurality of window face images acquired for the first target object.
In the embodiments of the application, a first portrait track sequence of the first portrait archive is constructed in chronological order based on the acquisition times of the plurality of window face images. Then, based on the spatial positions (such as longitude and latitude coordinates) of the plurality of window face images, window face images at the same (or similar) spatial position in the first portrait track sequence are deduplicated according to spatial logic; optionally, window face images with the same or similar acquisition time are also deduplicated, yielding a second portrait track sequence. Finally, the window face images in the second portrait track sequence are divided by M preset slicing periods (M is an integer greater than 0), thereby obtaining the M portrait track sequences generated based on the first portrait archive.
Illustratively, referring to fig. 3, the flowchart for generating the M portrait track sequences provided by the present application includes steps 301 to 304, described in detail below.
Step 301: acquiring a first portrait archive of a first target object.
In some embodiments, the first portrait archive is one of the archives clustered from window face image data. In particular, during the generation of portrait archive data, in order to save transmission, computation, and/or storage resources, the capture data of checkpoint cameras in a central or key area of a city (such as a shopping mall, hospital, or school) may be screened first. Portrait clustering is then performed on the window face images in the screened capture data to form the portrait archive data; that is, the window face images in the portrait archive data are images captured at roads or intersections in a central or key area of the city.
In some embodiments, the first portrait archive, as clustered portrait archive data, may include: the archive number, the window face images captured by checkpoint cameras, the window face image features, the acquisition time (or capture time), longitude and latitude coordinates, and so on. Optionally, the portrait archive data may be stored in a table in a storage node or a back-end database.
Illustratively, the window face image features may be an X-dimensional matrix parsed from the window face image by a Computer Vision (CV) algorithm. In addition, for convenient storage, the window face image feature value may be encoded; correspondingly, when the feature value is used, it is decoded according to the encoding rule to restore the original value.
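As a minimal sketch of this storage idea (assuming the feature is held as a NumPy float32 vector and taking Base64 as a hypothetical encoding rule; the patent fixes neither choice):

```python
import base64
import numpy as np

def encode_feature(vec: np.ndarray) -> str:
    # Serialize the X-dimensional float32 feature vector for storage.
    return base64.b64encode(vec.astype(np.float32).tobytes()).decode("ascii")

def decode_feature(text: str) -> np.ndarray:
    # Restore the feature vector according to the same encoding rule.
    return np.frombuffer(base64.b64decode(text), dtype=np.float32)
```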
Step 302: sorting the plurality of window face images in the first portrait archive by image acquisition time to obtain a first portrait track sequence.
Illustratively, the acquisition time of each window face image in the first portrait archive is obtained, the window face images are sorted in order of acquisition time, and the resulting sequence is taken as the first portrait track sequence.
The first portrait track sequence can be used to characterize the chronological trajectory of the first target object.
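A minimal sketch of this sorting step, assuming each capture record is a dict with hypothetical keys id, lon, lat, and time (timestamps in the "YYYY-MM-DD HH:MM:SS" format of Table 1, which sorts correctly as text):

```python
captures = [
    {"id": "A3", "lon": 104.725586, "lat": 31.50912, "time": "2022-05-01 11:12:06"},
    {"id": "A1", "lon": 104.739835, "lat": 31.489459, "time": "2022-05-01 10:02:40"},
]

# First portrait track sequence: captures ordered by acquisition time.
first_track = sorted(captures, key=lambda c: c["time"])
```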
Step 303: in the first portrait track sequence, deduplicating window face images consecutively located at the same spatial position to obtain a second portrait track sequence.
In a practical scenario, the same checkpoint camera may capture multiple nearly identical window face images of the same target object within a relatively short period, and after clustering these converge into the same portrait archive. The nearly identical window face images generated in such a scenario therefore need to be deduplicated.
In the embodiments of the application, deduplicating the first portrait track sequence means keeping only the first of the window face images consecutively located at the same spatial position (for example, the same longitude and latitude), yielding the second portrait track sequence.
For example, archive A is a first portrait track sequence formed by 7 window face images in order. Each window face image is treated as a sub-archive of portrait archive A, so archive A contains 7 sub-archives: sub-archive A1, A2, A3, A4, A5, A6, and A7. A label field marks the sub-archives retained in the second portrait track sequence after deduplication; for example, only sub-archives with label = 1 are retained, as shown in Table 1 below.
| Archive no. | Sub-archive no. | Longitude | Latitude | Capture time | label |
| --- | --- | --- | --- | --- | --- |
| Archive A | Sub-archive A1 | 104.739835 | 31.489459 | 2022-05-01 10:02:40 | 1 |
| Archive A | Sub-archive A2 | 104.739835 | 31.489459 | 2022-05-01 10:02:46 | 0 |
| Archive A | Sub-archive A3 | 104.725586 | 31.50912 | 2022-05-01 11:12:06 | 1 |
| Archive A | Sub-archive A4 | 104.741385 | 31.498601 | 2022-05-01 11:31:51 | 1 |
| Archive A | Sub-archive A5 | 104.741385 | 31.498601 | 2022-05-01 11:32:01 | 0 |
| Archive A | Sub-archive A6 | 104.739835 | 31.489459 | 2022-05-02 09:45:56 | 1 |
| Archive A | Sub-archive A7 | 104.739835 | 31.489459 | 2022-05-02 09:45:59 | 0 |

Table 1
The first portrait track sequence can thus be represented as: sub-archive A1, A2, A3, A4, A5, A6, A7. The sub-archives sharing the same longitude and latitude are: A1 and A2; A4 and A5; A6 and A7. The first sub-archive at each repeated position is retained, namely A1, A4, and A6; in other words, sub-archives A2, A5, and A7 are removed. After deduplication, the second portrait track sequence can be represented as: sub-archive A1, A3, A4, A6.
It should be noted that deduplication at exactly the same spatial position is one possible implementation; if a distance threshold is set, window face images whose spatial positions are within the threshold of each other may be regarded as being at the same spatial position.
As an optional implementation, window face images with the same acquisition time may also be deduplicated; the idea is similar to the spatial-position deduplication above and will not be repeated here. Likewise, if a time-difference threshold is set, any two window face images whose acquisition times differ by less than the threshold may be regarded as having the same acquisition time.
In addition, based on the deduplication idea provided by the embodiments of the present application, the deduplication may also be performed before the first portrait track sequence is generated. This scheme does not particularly limit the order in which deduplication is performed. A minimal sketch of the deduplication follows.
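A sketch of the position-based deduplication under the same assumed record format, keeping only the first capture of each consecutive run at one longitude/latitude, as in Table 1:

```python
def deduplicate(track):
    # Keep only the first window face image in each consecutive run
    # at the same (longitude, latitude); non-consecutive repeats stay.
    kept, prev_pos = [], None
    for capture in track:
        pos = (capture["lon"], capture["lat"])
        if pos != prev_pos:
            kept.append(capture)
        prev_pos = pos
    return kept
```

Applied to Table 1's order A1 through A7, this keeps A1, A3, A4, and A6, matching the second portrait track sequence above.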
Step 304: dividing the window face images in the second portrait track sequence into M preset slicing periods to obtain the M portrait track sequences.
M slicing periods are set according to acquisition time, and the window face images in the second portrait track sequence are divided into groups accordingly, yielding the M portrait track sequences.
In some implementations, a slicing period can be understood as a conversion of the acquisition time into a date tuple, for example: the year, the day of the year, and the hour of the day. The window face images in the second portrait track sequence are then grouped by the converted (year, day, hour) to obtain the M portrait track sequences; a sketch of this grouping follows.
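A minimal sketch of the slicing step under the same assumptions, grouping by the converted (year, day-of-year, hour):

```python
from collections import defaultdict
from datetime import datetime

def slice_by_period(track):
    # Group deduplicated captures into one track sequence per
    # (year, day-of-year, hour) slicing period.
    groups = defaultdict(list)
    for capture in track:
        t = datetime.strptime(capture["time"], "%Y-%m-%d %H:%M:%S")
        key = (t.year, t.timetuple().tm_yday, t.hour)
        groups[key].append([capture["lon"], capture["lat"]])
    return groups  # the M portrait track sequences, keyed by period
```

For Table 1's retained captures this yields the three sequences of Table 2 below, keyed (2022, 121, 10), (2022, 121, 11), and (2022, 122, 9).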
Illustratively, partitioning portrait archive A by the converted dates yields 3 portrait track sequences, as shown in Table 2 below.

| Archive no. | Acquisition period (year, day, hour) | Portrait track sequence |
| --- | --- | --- |
| Archive A | 2022, 121, 10 | [[104.739835, 31.489459]] |
| Archive A | 2022, 121, 11 | [[104.725586, 31.50912], [104.741385, 31.498601]] |
| Archive A | 2022, 122, 9 | [[104.739835, 31.489459]] |

Table 2

The first track sequence [[104.739835, 31.489459]] is the portrait track sequence of archive A for the acquisition period of hour 10 on day 121 of 2022; the second track sequence [[104.725586, 31.50912], [104.741385, 31.498601]] is that for hour 11 on day 121 of 2022; and the third track sequence [[104.739835, 31.489459]] is that for hour 9 on day 122 of 2022.
It should be noted that steps 301 to 304 take the first portrait archive as an example of generating M portrait track sequences; window face image data or portrait archive data may be processed directly based on the same idea to generate the portrait track sequences of each portrait archive, which is not limited in this embodiment.
Further, as a possible implementation, based on the same idea used to generate portrait track sequences in steps 301 to 304, the vehicle track sequences corresponding to vehicle data or vehicle archive data can also be obtained.
Illustratively, for vehicle archives generated by clustering vehicle data, the vehicle images in each vehicle archive are sorted by image acquisition time to obtain a first vehicle track sequence for each archive. Then, within each first vehicle track sequence, vehicle images consecutively located at the same spatial position or sharing the same acquisition time are deduplicated to obtain a second vehicle track sequence for each archive. Finally, the vehicle images in each second vehicle track sequence are divided by the preset M slicing periods to obtain one or more vehicle track sequences for each vehicle archive.
Step 22: calculating, for the M portrait track sequences, the track distance value between each portrait track sequence and its reference track sequence, to obtain a plurality of track distance values.
The reference track sequence is the vehicle track sequence whose acquisition period matches that of the corresponding portrait track sequence.
In the embodiments of the application, the M portrait track sequences are processed in turn. Taking a single portrait track sequence as an example: first, the single portrait track sequence is matched in turn against at least one (or all) vehicle track sequences, and the vehicle track sequences whose acquisition period matches are selected as reference track sequences; the number of selected reference track sequences is not limited.
Illustratively, the single portrait track sequence is one of the sequences divided by year, day, and hour; the at least one (or all) vehicle track sequences are likewise divided by year, day, and hour; and the acquisition period is a period delimited by year, day, and hour.
In some embodiments, the acquisition period of the single portrait track sequence is determined, and when a vehicle track sequence matches that acquisition period, the matched vehicle spatiotemporal track is used as a reference track sequence of the single portrait track sequence; a matching sketch follows.
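A minimal sketch of the period matching, assuming vehicle track sequences are indexed by the hypothetical key (vehicle_id, period), where period is the (year, day-of-year, hour) tuple above:

```python
def find_reference_tracks(period, vehicle_tracks):
    # Return every (vehicle_id, sequence) whose acquisition period
    # matches the portrait track sequence's period; there may be
    # zero, one, or several reference track sequences.
    return [
        (vehicle_id, seq)
        for (vehicle_id, vperiod), seq in vehicle_tracks.items()
        if vperiod == period
    ]
```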
Further, taking a single reference track sequence as an example, the sequence lengths of the single portrait track sequence and the single reference track sequence are compared in order to calculate the track distance value between them.
It should be noted that, in the following three cases, a single window face image or vehicle image is one element of a portrait track sequence or reference track sequence, and the sequence length of a sequence is the total number of elements it contains.
Case one: the sequence length of the single reference track sequence is greater than that of the single portrait track sequence.
In some embodiments, for the m (m an integer greater than 0) elements in the single portrait track sequence, the spatial position of each element is multiplied in turn by the spatial positions of elements among the n (n an integer greater than 0, n > m) elements in the single reference track sequence and summed, yielding n−m+1 track distance values. The minimum of the n−m+1 track distance values is then selected as the track distance value between the single portrait track sequence and the single reference track sequence.
Building on this description, a single one of the n−m+1 track distance values is calculated as follows. First, a first row matrix of shape 1×m is formed by horizontally arranging the spatial positions of the m elements in the single portrait track sequence, and a first column matrix of shape n×1 is formed by vertically arranging the spatial positions of the n elements in the single reference track sequence. Then, second column matrices are formed by selecting m consecutive elements in turn from the first column matrix; n−m+1 such second column matrices can be determined, and for each one, the product of the first row matrix and that second column matrix is calculated as a single track distance value.
This way of determining the second column matrices applies the idea of a sliding-window calculation.
For example, a single reference track sequence is represented as Y = [A, B, C, D, E], where A, B, C, D, E are the spatial coordinates (e.g., longitude and latitude) each corresponding to one element (one vehicle image) of the reference track sequence. A single portrait track sequence is represented as X = [a, b, c], where a, b, c are the spatial coordinates each corresponding to one element (one window face image) of the portrait track sequence. The straight-line distance between the two coordinate points a and A is denoted aA. The portrait track sequence X arranged horizontally forms the first row matrix, the reference track sequence Y arranged vertically forms the first column matrix, and selecting 3 consecutive elements in turn from the first column matrix yields (5 − 3 + 1) = 3 second column matrices, as shown in Table 3 below.

| First row matrix | Second column matrix | Product (track distance value) |
| --- | --- | --- |
| [a, b, c] | [A, B, C]ᵀ | Distance1 = aA + bB + cC |
| [a, b, c] | [B, C, D]ᵀ | Distance2 = aB + bC + cD |
| [a, b, c] | [C, D, E]ᵀ | Distance3 = aC + bD + cE |

Table 3

As shown in Table 3, the products of the first row matrix and the 3 second column matrices give the track distance values: Distance1 = aA + bB + cC, Distance2 = aB + bC + cD, and Distance3 = aC + bD + cE. From these 3 track distance values, the minimum is selected as the track distance value between the single reference track sequence and the portrait track sequence.
To illustrate the sliding-window idea more clearly, consider the distance matrix Z between sequence X and sequence Y:

Z = [[aA, aB, aC, aD, aE],
     [bA, bB, bC, bD, bE],
     [cA, cB, cC, cD, cE]]

Each value in the distance matrix Z represents the straight-line distance between the spatial coordinates of one element of the single portrait track sequence and one element of the single reference track sequence.
Illustratively, with a sliding-window stride of 1, the distance sums over the successive windows of sequences X and Y are: Distance1 = aA + bB + cC; Distance2 = aB + bC + cD; Distance3 = aC + bD + cE, i.e., the 3 track distance values, from which the minimum is selected as the track distance value between the single reference track sequence and the portrait track sequence. A sketch of this case follows.
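A minimal sketch of case one, assuming spatial positions are (longitude, latitude) pairs and taking the plain planar distance as the point metric (an assumption; the patent does not fix the metric):

```python
import math

def point_distance(p, q):
    # Straight-line distance between two (lon, lat) points
    # (flat-plane approximation, chosen here for illustration).
    return math.hypot(p[0] - q[0], p[1] - q[1])

def sliding_window_distance(portrait_seq, reference_seq):
    # Case one (n > m): slide the m portrait positions over the n
    # reference positions, sum the m pairwise distances per offset,
    # and keep the minimum of the n - m + 1 window sums.
    m, n = len(portrait_seq), len(reference_seq)
    window_sums = [
        sum(point_distance(portrait_seq[i], reference_seq[k + i])
            for i in range(m))
        for k in range(n - m + 1)
    ]
    return min(window_sums)
```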
Case two: the sequence length of the single reference track sequence is smaller than that of the single portrait track sequence.
In some embodiments, the head end or tail end of the single reference track sequence is padded with the head element or tail element of its n (n an integer greater than 0) elements until the sequence length of the reference track sequence is equal to or greater than that of the single portrait track sequence. Then, if the two lengths are equal, the operation of case three is executed; if the reference track sequence is now longer, the operation of case one is executed. Either way, the track distance value between the single portrait track sequence and the single reference track sequence is obtained.
Case three: the sequence length of the single reference track sequence is equal to that of the single portrait track sequence.
In some embodiments, the m (m an integer greater than 0) elements of the single portrait track sequence are multiplied in order by the m elements of the single reference track sequence to obtain m products; the sum of the m products is then taken as the track distance value between the single portrait track sequence and the single reference track sequence. Sketches of cases two and three follow.
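Minimal sketches of cases two and three, reusing point_distance from the sketch above; padding with the tail element is shown, the head element being equally permitted:

```python
def pad_reference(reference_seq, target_len):
    # Case two (n < m): repeat the tail element until the reference
    # sequence is at least as long as the portrait sequence.
    padded = list(reference_seq)
    while len(padded) < target_len:
        padded.append(padded[-1])
    return padded

def equal_length_distance(portrait_seq, reference_seq):
    # Case three (n == m): sum the element-wise pairwise distances.
    return sum(point_distance(p, q)
               for p, q in zip(portrait_seq, reference_seq))
```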
As a possible implementation, if no vehicle track sequence among the at least one (or all) vehicle track sequences matches the acquisition period of the single portrait track sequence, it is determined that the single portrait track sequence has no track distance value, or a specified value is used as its track distance value.
The specified value may be, for example, positive infinity, or another value chosen for the actual application; in general, a large value is specified.
It should be noted that, in the embodiments of the application, the track distance value also characterizes the degree of similarity between the single reference track sequence and the single portrait track sequence: the smaller the track distance value, the greater the similarity; conversely, the larger the track distance value, the smaller the similarity.
Step 23: selecting, from the plurality of track distance values, the track distance values that satisfy a preset condition, and determining the vehicle identifier of the reference track sequence corresponding to each selected track distance value.
In the embodiments of the application, the plurality of track distance values are sorted in ascending order, and several track distance values are then selected from front to back according to the ranking. A certain proportion of the track distance values may also be selected; this scheme does not specifically limit the number selected.
For each selected track distance value, the corresponding vehicle identifier, such as a license plate number, is determined from its corresponding reference track sequence; the license plate number can be recognized from the vehicle images that constitute the reference track sequence. For the several selected track distance values, the corresponding vehicle identifiers may be the same or different. A selection sketch follows.
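A minimal sketch of the selection, taking the k smallest distances as the assumed preset condition (the patent equally allows selecting a proportion):

```python
def select_vehicle_ids(distance_pairs, k=3):
    # distance_pairs: [(track_distance_value, vehicle_id), ...]
    # Sort ascending and keep the vehicle identifiers of the front k;
    # k = 3 is an assumed setting, not fixed by the patent.
    ranked = sorted(distance_pairs, key=lambda pair: pair[0])
    return [vehicle_id for _, vehicle_id in ranked[:k]]
```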
In some embodiments, an association between the first portrait profile and the determined vehicle identification is established. For example, if two different vehicle identifiers are determined, the first portrait archive and the two vehicle identifiers are respectively established in association, and of course, the same vehicle identifier may also be subjected to deduplication processing.
Based on the same idea, acquiring the incidence relation between a plurality of portrait archives except the first portrait archive and the vehicle identification, and further determining the portrait archives associated with the vehicle identifications. If the vehicle identifier is associated with a plurality of portrait files, the first portrait file and the plurality of portrait files are calculated in pairs (see step 24 below in the calculation method), so as to obtain a plurality of file similarity values. Then, from the plurality of archive similarity values, the archive similarity value larger than the preset threshold is screened as the target similarity value, and the portrait archives corresponding to the target similarity value are merged (see step 25 below).
Exemplarily, referring to fig. 4, the license plate numbers associated with the three portrait archives A, B and C are obtained, and the association is then inverted to obtain the portrait archives associated with license plates X and Y. The portrait archives associated with the same license plate are then paired, their archive similarity values are calculated, the values larger than a preset threshold are screened out as target similarity values, and the corresponding portrait archives are merged, as sketched below.
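A minimal Python sketch of this fig. 4 flow follows; the archives A, B, C, the plates X, Y, and all names are illustrative assumptions of this example.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical associations obtained in step 23: archive -> license plates.
archive_to_plates = {"A": {"X"}, "B": {"X", "Y"}, "C": {"Y"}}

# Invert the association: license plate -> portrait archives.
plate_to_archives = defaultdict(set)
for archive, plates in archive_to_plates.items():
    for plate in plates:
        plate_to_archives[plate].add(archive)

# Archives sharing a plate become pairwise merge candidates.
candidates = set()
for archives in plate_to_archives.values():
    candidates.update(combinations(sorted(archives), 2))
print(candidates)  # -> pairs ('A', 'B') via X and ('B', 'C') via Y
```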
Step 24: if the vehicle identification is associated with a second portrait file, calculating a file similarity value between the first portrait file and the second portrait file;
in order to solve the one-person-multiple-files problem, the association between a portrait file and a vehicle identifier is first determined based on the track distance between the portrait track sequence and the vehicle track sequence. With this prior established, archive similarity calculation can be restricted to the portrait files associated with the same vehicle identifier, which improves the accuracy of the merging operation and effectively mitigates the one-person-multiple-files phenomenon.
In the embodiment of the present application, for example, a vehicle identifier is associated with a first portrait file, and if the vehicle identifier is also associated with a second portrait file, a file similarity value between the first portrait file and the second portrait file is calculated.
To calculate the file similarity value, a first file characteristic value of the first portrait file is first calculated based on the image characteristic value of each element in the first portrait file, and a second file characteristic value of the second portrait file is calculated based on the image characteristic value of each element in the second portrait file. Finally, the similarity value between the first file characteristic value and the second file characteristic value is calculated as the file similarity value between the first portrait file and the second portrait file.
Illustratively, the first file characteristic value is the mean centroid of the first portrait file, and the second file characteristic value is the mean centroid of the second portrait file. The mean centroid of a file is the sum of the picture characteristic values of all pictures in the file divided by the number of pictures in the file, i.e., the mean of the picture characteristic values. The file similarity value is the cosine between the mean centroids of the two portrait files, as shown in the following formula (1).
\cos\theta = \frac{A \cdot B}{\|A\| \, \|B\|} \qquad (1)
Here, A and B are the mean centroids of any two portrait files, where A represents the first file characteristic value and B represents the second file characteristic value.
It should be noted that the mean centroid of a file is an X-dimensional feature vector produced by the vehicle-card analysis, and the cosine between two such vectors can be used to measure the similarity of the two files. A sketch of this computation follows.
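The sketch below implements formula (1), assuming each picture's characteristic value is an X-dimensional numpy vector; the dimension, the feature extractor, and all names are assumptions of this example.

```python
import numpy as np

def mean_centroid(picture_features):
    """Archive mean centroid: sum of picture characteristic values / picture count."""
    return np.mean(np.stack(picture_features), axis=0)

def archive_similarity(a, b):
    """Formula (1): cosine between two archive mean centroids."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

first = mean_centroid([np.array([1.0, 0.0]), np.array([0.8, 0.2])])
second = mean_centroid([np.array([0.9, 0.1])])
print(archive_similarity(first, second))  # ~1.0 (the centroids coincide here)
```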
Step 25: and in response to the profile similarity value being greater than a preset threshold, merging the first portrait profile and the second portrait profile.
The file similarity value of each pair of portrait files is calculated from their mean centroids, and a similarity threshold is preset; when the file similarity value of a pair of portrait files is greater than the preset similarity threshold, the two portrait files are considered mergeable.
For example, in the embodiment of the present application, if the file similarity value between the first portrait file and the second portrait file is greater than the preset threshold, the two portrait files are merged.
For example, file merging may be performed by constructing a mapping relationship and executing the merge based on it; in the embodiment of the present application, the following three mapping manners are mainly distinguished.
As a possible mapping manner, the first portrait archive is mapped to the second portrait archive to obtain a first mapping relationship, and the first mapping relationship is stored in the second portrait archive. For example, in the embodiment of the present application, a first portrait archive A and a second portrait archive B can be merged: A is mapped to B to form a single archive, and the first mapping relationship (A->B) is saved in B for tracing.
As a possible mapping manner, the second portrait archive is mapped to the first portrait archive to obtain a second mapping relationship, and the second mapping relationship is stored in the first portrait archive. For example, in the embodiment of the present application, a first portrait archive A and a second portrait archive B can be merged: B is mapped to A to form a single archive, and the second mapping relationship (B->A) is saved in A for tracing.
As a possible mapping manner, the first portrait file is merged with the second portrait file, and the second portrait file is merged with a third portrait file: the first portrait file is mapped to the second portrait file to obtain the first mapping relationship, the third portrait file is mapped to the second portrait file to obtain a third mapping relationship, and both mapping relationships are saved in the second portrait file. For example, in the embodiment of the present application, a first portrait archive A is mapped to a second portrait archive B to form a single archive, and (A->B) is saved in B for tracing; a third portrait archive C is likewise mapped to B, and (C->B) is saved in B. In this way the first portrait archive A, the second portrait archive B and the third portrait archive C are linked through B, i.e., the three portrait archives are merged into one archive, as sketched below.
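A minimal sketch of mapping-based merging follows, with hypothetical in-memory archive records; a real system would persist archives and mapping relationships differently, so the data structures below are assumptions of this example.

```python
archives = {
    "A": {"pictures": ["a1", "a2"], "mappings": []},
    "B": {"pictures": ["b1"], "mappings": []},
    "C": {"pictures": ["c1"], "mappings": []},
}

def merge_into(source_id, target_id):
    """Map `source_id` into `target_id` and record the mapping on the target for tracing."""
    archives[target_id]["pictures"] += archives[source_id]["pictures"]
    archives[target_id]["mappings"].append(f"{source_id}->{target_id}")
    del archives[source_id]

merge_into("A", "B")  # first mapping relationship (A->B)
merge_into("C", "B")  # third mapping relationship (C->B)
print(archives["B"]["mappings"])  # ['A->B', 'C->B']
```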
On the one hand, the scheme exploits the similarity between portrait archive tracks and vehicle tracks to obtain target archives from the massive portrait archives, which allows the archive similarity threshold to be lowered while still mitigating the one-person-multiple-files problem of the portrait archives.
On the other hand, the scheme provides a way to compute the distance between track sequences of different lengths: the distance between two track sequences is calculated through sequence padding and a sliding window, and the similarity of the two track sequences is judged from that distance.
Based on the same inventive concept, the present application further provides a multi-file merging apparatus, configured to merge the multiple files of one person so as to reduce the one-person-multiple-files ratio. Referring to fig. 5, the apparatus includes:
the acquiring module 51, configured to acquire M portrait track sequences generated based on the first portrait archive; wherein M is an integer greater than 0;
the first calculation module 52 is configured to calculate, for the M portrait track sequences, track distance values between each portrait track sequence and the reference track sequence to obtain a plurality of track distance values; the reference track sequence is a vehicle track sequence matched with the corresponding portrait track sequence in the acquisition time period;
the determining module 53, configured to select a plurality of track distance values meeting a preset condition from the plurality of track distance values, and determine the vehicle identifiers of the reference track sequences corresponding to the selected track distance values;
a second calculating module 54, configured to calculate a profile similarity value between the first portrait profile and the second portrait profile if the vehicle identifier is associated with a second portrait profile;
a merging module 55, configured to merge the first portrait file and the second portrait file in response to the file similarity value being greater than a preset threshold.
In a possible implementation, when acquiring the M portrait track sequences generated based on the first portrait archive, the acquiring module 51 is specifically configured to perform:
acquiring a first portrait file of a first target object; wherein the first portrait file includes at least: a plurality of vehicle window face images containing the first target object;
sequencing the multiple vehicle window face images in the first portrait file according to image acquisition time to obtain a first portrait track sequence;
in the first portrait track sequence, performing deduplication processing on the vehicle window face images continuously located at the same spatial position to obtain a second portrait track sequence;
and dividing the vehicle window face images in the second portrait track sequence based on M preset segmentation time periods to obtain the M portrait track sequences. A sketch of these steps follows.
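The sketch below covers the sorting, deduplication, and segmentation steps, assuming each capture is a (timestamp, spatial_position) tuple; the position encoding, the segmentation periods, and all names are assumptions of this example.

```python
def build_portrait_sequences(captures, periods):
    """captures: (timestamp, spatial_position) tuples; periods: (start, end) pairs."""
    # Step 1: sort by acquisition time -> first portrait track sequence.
    first_seq = sorted(captures, key=lambda c: c[0])
    # Step 2: drop consecutive captures at the same position -> second sequence.
    second_seq = [c for i, c in enumerate(first_seq)
                  if i == 0 or c[1] != first_seq[i - 1][1]]
    # Step 3: split by the M preset segmentation periods -> M sequences.
    return [[c for c in second_seq if start <= c[0] < end]
            for start, end in periods]

captures = [(3, "P2"), (1, "P1"), (2, "P1"), (5, "P3")]
print(build_portrait_sequences(captures, [(0, 4), (4, 8)]))
# [[(1, 'P1'), (3, 'P2')], [(5, 'P3')]]
```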
In a possible implementation manner, the first calculating module 52 is specifically configured to calculate a track distance value between each portrait track sequence and a reference track sequence:
for a single portrait track sequence in the M portrait track sequences, executing the following operations:
determining a reference track sequence matched with the single portrait track sequence in an acquisition time period from at least one vehicle track sequence;
in response to the sequence length of the reference track sequence being greater than that of the single portrait track sequence, for each of the n-m+1 window positions over the n elements of the reference track sequence, calculating a track distance value by multiplying the spatial positions of the m elements of the single portrait track sequence, in order, by the spatial positions of the m consecutive reference elements in that window and summing the products, thereby obtaining n-m+1 track distance values; wherein m and n are integers greater than 0, and m is less than n;
and selecting the minimum track distance value from the n-m +1 track distance values as the track distance value between the single portrait track sequence and the reference track sequence.
In a possible embodiment, when obtaining the n-m+1 track distance values by multiplying the spatial positions of the m elements of the single portrait track sequence against consecutive elements of the reference track sequence and summing the products, the first calculation module 52 is further configured to perform:
for a single track distance value of the n-m +1 track distance values, calculating in the following manner:
determining a first row matrix formed by transversely arranging the spatial positions of the m elements in the single portrait track sequence;
determining a first column matrix formed by vertically arranging the spatial positions of the n elements in the reference track sequence;
determining second column matrices, each formed by sequentially selecting m elements from the first column matrix; the number of the second column matrices is n-m+1;
calculating the product of the first row matrix and each second column matrix as the corresponding single track distance value. A sketch of this sliding-window computation follows.
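The sketch below assumes scalar position codes: the row matrix of the m portrait positions is multiplied against each of the n-m+1 second column matrices taken from the reference sequence, and the minimum product is kept. All names are assumptions of this example.

```python
import numpy as np

def sliding_window_distance(portrait_seq, reference_seq):
    """Minimum of the n-m+1 window products when the reference sequence is longer."""
    m, n = len(portrait_seq), len(reference_seq)
    assert 0 < m < n
    row = np.asarray(portrait_seq)  # first row matrix (1 x m)
    # Each window of m consecutive reference elements acts as a second column matrix.
    distances = [float(row @ np.asarray(reference_seq[i:i + m]))
                 for i in range(n - m + 1)]
    return min(distances)

print(sliding_window_distance([1.0, 2.0], [3.0, 1.0, 0.5, 2.0]))  # 2.0
```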
In a possible implementation manner, when calculating the track distance value between each portrait track sequence and the reference track sequence, the first calculating module 52 is specifically configured to:
for a single portrait track sequence in the M portrait track sequences, executing the following operations:
determining a reference track sequence matched with the single portrait track sequence in an acquisition time period from at least one vehicle track sequence;
in response to the sequence length of the reference track sequence being smaller than that of the single portrait track sequence, performing a padding operation on the reference track sequence based on the head element or the tail element of its n elements, where n is an integer greater than 0, until the sequence length of the reference track sequence is equal to or greater than that of the single portrait track sequence;
and then calculating the track distance value between the portrait track sequence and the padded reference track sequence. A sketch of the padding follows.
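A minimal sketch of the padding operation follows; after padding, the equal-length or sliding-window computation above applies. The helper name and scalar encoding are assumptions of this example.

```python
def pad_reference(reference_seq, target_len, side="tail"):
    """Repeat the head or tail element until the reference reaches target_len."""
    padded = list(reference_seq)
    while len(padded) < target_len:
        if side == "head":
            padded.insert(0, padded[0])
        else:
            padded.append(padded[-1])
    return padded

print(pad_reference([1.0, 2.0], 4))          # [1.0, 2.0, 2.0, 2.0]
print(pad_reference([1.0, 2.0], 4, "head"))  # [1.0, 1.0, 1.0, 2.0]
```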
In a possible implementation manner, the first calculating module 52 is specifically configured to calculate a track distance value between each portrait track sequence and a reference track sequence:
for a single portrait track sequence of the M portrait track sequences, performing the following operations:
determining a reference track sequence matched with the single portrait track sequence in an acquisition time period from at least one vehicle track sequence;
in response to that the sequence length of the reference track sequence is equal to the sequence length of the single portrait track sequence, sequentially multiplying m elements of the single portrait track sequence by m elements in the reference track sequence according to the element arrangement order to obtain m products;
and calculating the sum value of the m products, and taking the sum value as the track distance value between the single portrait track sequence and the reference track sequence.
In a possible implementation manner, the first calculating module 52 is specifically configured to calculate a track distance value between each portrait track sequence and a reference track sequence:
for a single portrait track sequence in the M portrait track sequences, executing the following operations:
if no reference track sequence matching the single portrait track sequence in the acquisition time period exists in the at least one vehicle track sequence, determining that the single portrait track sequence has no track distance value, or taking a specified numerical value as the track distance value corresponding to the single portrait track sequence; wherein the specified numerical value includes at least positive infinity.
In a possible implementation manner, the determining module 53 is specifically configured to select a plurality of trajectory distance values that satisfy a preset condition from the plurality of trajectory distance values:
sorting the plurality of track distance values in ascending order to obtain their ranking;
and selecting a plurality of track distance values from front to back based on that ranking, as in the sketch below.
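A minimal sketch of this selection step follows; k (or, alternatively, a proportion) is a tunable assumption of this example, as are all names.

```python
def select_candidates(distances, k=3):
    """distances: (track_distance, vehicle_id) pairs; returns the k smallest."""
    ranked = sorted(distances, key=lambda d: d[0])  # ascending order
    return ranked[:k]                               # front-to-back selection

pairs = [(2.0, "X"), (0.5, "Y"), (9.1, "Z"), (1.2, "X")]
print(select_candidates(pairs, k=2))  # [(0.5, 'Y'), (1.2, 'X')]
```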
In a possible implementation, after the determining the vehicle identifier corresponding to each of the plurality of track distance values, the determining module 53 is further configured to:
associating the first portrait profile with the vehicle identification.
In a possible implementation, before calculating the file similarity value between the first portrait file and the second portrait file when the vehicle identifier is associated with a second portrait file, the second calculating module 54 is further configured to:
if the vehicle identification is associated with a plurality of portrait files, calculating the file similarity values between the first portrait file and the plurality of portrait files pairwise to obtain a plurality of file similarity values;
and screening the archive similarity values larger than a preset threshold value from the plurality of archive similarity values to serve as target similarity values, and combining portrait archives corresponding to the target similarity values.
In a possible implementation manner, the second calculating module 54 is specifically configured to:
calculating a first file characteristic value of the first portrait file based on the image characteristic value of each element in the first portrait file;
calculating a second file characteristic value of the second portrait file based on the image characteristic value of each element in the second portrait file;
calculating a similarity value between the first file characteristic value and the second file characteristic value as a file similarity value between the first portrait file and the second portrait file.
In a possible implementation manner, when merging the first portrait file and the second portrait file, the merging module 55 is specifically configured to perform:
mapping the first portrait archive to the second portrait archive to obtain a first mapping relationship, and storing the first mapping relationship in the second portrait archive; or
mapping the second portrait archive to the first portrait archive to obtain a second mapping relationship, and storing the second mapping relationship in the first portrait archive.
Based on this apparatus, by introducing the reference track sequence (i.e., the vehicle track sequence matched with the corresponding portrait track sequence in the acquisition time period), multi-file merging based on the portrait track sequences and the reference track sequences is realized, which mitigates the one-person-multiple-files problem of portrait archives and improves the accuracy of multi-file merging.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device, which can implement the functions of the foregoing multi-file merging apparatus. Referring to fig. 6, the electronic device includes:
at least one processor 61, and a memory 62 connected to the at least one processor 61. The specific connection medium between the processor 61 and the memory 62 is not limited in this application; fig. 6 takes the case where the processor 61 and the memory 62 are connected through a bus 60 as an example. The bus 60 is shown as a thick line in fig. 6; the manner of connection between other components is merely illustrative and not limiting. The bus 60 may be divided into an address bus, a data bus, a control bus, and so on; it is drawn as a single thick line in fig. 6 for ease of illustration, but this does not mean that there is only one bus or one type of bus. Optionally, the processor 61 may also be referred to as a controller; the name is not limiting.
In the embodiment of the present application, the memory 62 stores instructions executable by the at least one processor 61, and by executing the instructions stored in the memory 62, the at least one processor 61 can perform the multi-file merging method discussed above. The processor 61 may implement the functions of the various modules in the apparatus/system shown in fig. 5.
The processor 61 is the control center of the apparatus/system. It connects the various parts of the entire control device through various interfaces and lines, and performs the various functions of the apparatus/system and processes its data by running or executing the instructions stored in the memory 62 and invoking the data stored in the memory 62, thereby monitoring the apparatus/system as a whole.
In one possible design, the processor 61 may include one or more processing units. The processor 61 may integrate an application processor, which mainly handles the operating system, user interface, application programs and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 61. In some embodiments, the processor 61 and the memory 62 may be implemented on the same chip; in other embodiments, they may be implemented separately on their own chips.
The processor 61 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, that may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the multi-file merging method disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
The memory 62, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 62 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 62 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 61, the code corresponding to the multi-file merging method described in the foregoing embodiments can be solidified into the chip, so that the chip can execute the steps of the multi-file merging method of the embodiment shown in fig. 2 at run time. How to program the processor 61 is a technique well known to those skilled in the art and is not described in detail here.
Based on the same inventive concept, the present application also provides a storage medium storing computer instructions, which when executed on a computer, cause the computer to execute the aforementioned multi-file merging method.
In some possible embodiments, the aspects of the multi-file merging method provided by the present application may also be implemented in the form of a program product comprising program code for causing the control apparatus to perform the steps of the multi-file merging method according to various exemplary embodiments of the present application described above in this specification, when the program product is run on a device.
It should be apparent to one skilled in the art that embodiments of the present application may be provided as a method, apparatus/system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (15)

1. A multi-file merging method, the method comprising:
acquiring M portrait track sequences generated based on a first portrait archive; wherein M is an integer greater than 0;
calculating a track distance value between each portrait track sequence and a reference track sequence aiming at the M portrait track sequences to obtain a plurality of track distance values; the reference track sequence is a vehicle track sequence matched with the corresponding portrait track sequence in the acquisition time period;
selecting a plurality of track distance values meeting a preset condition from the plurality of track distance values, and determining the vehicle identifiers of the reference track sequences to which the selected track distance values respectively correspond;
if the vehicle identification is associated with a second portrait file, calculating a file similarity value between the first portrait file and the second portrait file;
and in response to the file similarity value being larger than a preset threshold value, combining the first portrait file and the second portrait file.
2. The method of claim 1, wherein the obtaining M portrait track sequences generated based on a first portrait archive comprises:
acquiring a first portrait file of a first target object; wherein the first portrait profile includes at least: a plurality of window face images including the first target object;
sequencing the multiple vehicle window face images in the first portrait archive according to image acquisition time to obtain a first portrait track sequence;
in the first portrait track sequence, performing deduplication processing on the vehicle window face images continuously located at the same spatial position to obtain a second portrait track sequence;
and dividing the car window face images in the second portrait track sequence based on M preset segmentation time periods to obtain M portrait track sequences.
3. The method of claim 1, wherein calculating a trajectory distance value between each sequence of portrait trajectories and a sequence of reference trajectories comprises:
for a single portrait track sequence of the M portrait track sequences, performing the following operations:
determining a reference track sequence matched with the single portrait track sequence in an acquisition time period from at least one vehicle track sequence;
in response to the sequence length of the reference track sequence being greater than that of the single portrait track sequence, for each of the n-m+1 window positions over the n elements of the reference track sequence, calculating a track distance value by multiplying the spatial positions of the m elements of the single portrait track sequence, in order, by the spatial positions of the m consecutive reference elements in that window and summing the products, thereby obtaining n-m+1 track distance values; wherein m and n are integers greater than 0, and m is less than n;
and selecting the minimum track distance value from the n-m +1 track distance values as the track distance value between the single portrait track sequence and the reference track sequence.
4. The method of claim 3, wherein the obtaining n-m+1 track distance values by multiplying the spatial positions of the m elements of the single portrait track sequence by the spatial positions of consecutive elements of the reference track sequence and summing the products comprises:
for a single track distance value of the n-m +1 track distance values, calculating in the following manner:
determining a first row matrix formed by transversely arranging the spatial positions of the m elements in the single portrait track sequence;
determining a first column matrix formed by vertically arranging the spatial positions of n elements in the single reference track sequence;
determining second column matrices, each formed by sequentially selecting m elements from the first column matrix; the number of the second column matrices is n-m+1;
calculating the product of the first row matrix and each second column matrix as the corresponding single track distance value.
5. The method of claim 1, wherein calculating a trajectory distance value between each portrait trajectory sequence and a reference trajectory sequence comprises:
for a single portrait track sequence in the M portrait track sequences, executing the following operations:
determining a reference track sequence matched with the single portrait track sequence in an acquisition time period from at least one vehicle track sequence;
in response to the sequence length of the reference track sequence being smaller than that of the single portrait track sequence, performing a padding operation on the reference track sequence based on the head element or the tail element of its n elements, where n is an integer greater than 0, until the sequence length of the reference track sequence is equal to or greater than that of the single portrait track sequence;
and calculating the track distance value between the portrait track sequence and the padded reference track sequence.
6. The method of claim 1, wherein calculating a trajectory distance value between each portrait trajectory sequence and a reference trajectory sequence comprises:
for a single portrait track sequence of the M portrait track sequences, performing the following operations:
determining a reference track sequence matched with the single portrait track sequence in an acquisition time period from at least one vehicle track sequence;
in response to that the sequence length of the reference track sequence is equal to the sequence length of the single portrait track sequence, sequentially multiplying m elements of the single portrait track sequence by m elements in the reference track sequence according to the element arrangement order to obtain m products;
and calculating the sum value of the m products, and taking the sum value as the track distance value between the single portrait track sequence and the reference track sequence.
7. The method of claim 1, wherein calculating a trajectory distance value between each portrait trajectory sequence and a reference trajectory sequence comprises:
for a single portrait track sequence in the M portrait track sequences, executing the following operations:
if no reference track sequence matching the single portrait track sequence in the acquisition time period exists in the at least one vehicle track sequence, determining that the single portrait track sequence has no track distance value, or taking a specified numerical value as the track distance value corresponding to the single portrait track sequence; wherein the specified numerical value includes at least positive infinity.
8. The method of claim 1, wherein the selecting a plurality of track distance values from the plurality of track distance values that satisfy a predetermined condition comprises:
sorting the plurality of track distance values in ascending order to obtain their ranking;
and selecting a plurality of track distance values from front to back based on the ranking.
9. The method of claim 1, after said determining the vehicle identification to which each of the plurality of trajectory distance values corresponds, further comprising:
associating the first portrait profile with the vehicle identification.
10. The method of claim 1 or 9, further comprising, prior to said calculating a profile similarity value between said first portrait profile and said second portrait profile if said vehicle identification is associated with a second portrait profile:
if the vehicle identification is associated with a plurality of portrait files, calculating file similarity values between the first portrait file and the plurality of portrait files pairwise to obtain a plurality of file similarity values;
and screening the archive similarity values larger than a preset threshold value from the plurality of archive similarity values as target similarity values, and combining portrait archives corresponding to the target similarity values.
11. The method of claim 1, wherein said calculating a dossier similarity value between the first portrait dossier and the second portrait dossier comprises:
calculating a first file characteristic value of the first portrait file based on the image characteristic value of each element in the first portrait file;
calculating a second file characteristic value of the second portrait file based on the image characteristic value of each element in the second portrait file;
and calculating the similarity value between the first file characteristic value and the second file characteristic value as the file similarity value between the first portrait file and the second portrait file.
12. The method of claim 1, wherein said merging the first portrait profile and the second portrait profile comprises:
mapping the first portrait archive to the second portrait archive to obtain a first mapping relationship, and storing the first mapping relationship in the second portrait archive; or
mapping the second portrait archive to the first portrait archive to obtain a second mapping relationship, and storing the second mapping relationship in the first portrait archive.
13. A multi-file merging apparatus, the apparatus comprising:
the acquisition module is used for acquiring M portrait track sequences generated based on the first portrait archive; wherein M is an integer greater than 0;
the first calculation module is used for calculating track distance values between each portrait track sequence and the reference track sequence aiming at the M portrait track sequences to obtain a plurality of track distance values; the reference track sequence is a vehicle track sequence matched with the corresponding portrait track sequence in the acquisition time period;
the determining module is used for selecting a plurality of track distance values meeting a preset condition from the plurality of track distance values, and determining the vehicle identifiers of the reference track sequences corresponding to the selected track distance values;
the second calculation module is used for calculating the file similarity value between the first portrait file and the second portrait file if the vehicle identifier is associated with the second portrait file;
and the merging module is used for merging the first portrait file and the second portrait file in response to the file similarity value being larger than a preset threshold value.
14. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-12 when executing the computer program stored on the memory.
15. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1-12.
CN202211561496.0A 2022-12-07 2022-12-07 Multi-gear combination method and device and electronic equipment Pending CN115880754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211561496.0A CN115880754A (en) 2022-12-07 2022-12-07 Multi-gear combination method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115880754A 2023-03-31

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117056551A (en) * 2023-07-07 2023-11-14 北京瑞莱智慧科技有限公司 File aggregation method and device for driving path, computer equipment and storage medium
CN117056551B (en) * 2023-07-07 2024-04-02 北京瑞莱智慧科技有限公司 File aggregation method and device for driving path, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination