CN116401392A - Image retrieval method, electronic equipment and storage medium - Google Patents

Image retrieval method, electronic equipment and storage medium

Info

Publication number
CN116401392A
CN116401392A (application CN202211730300.6A)
Authority
CN
China
Prior art keywords
image
feature vector
list
target
key
Prior art date
Legal status
Granted
Application number
CN202211730300.6A
Other languages
Chinese (zh)
Other versions
CN116401392B (en)
Inventor
谢士俊
李凡平
石柱国
Current Assignee
ISSA Technology Co Ltd
Original Assignee
ISSA Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by ISSA Technology Co Ltd filed Critical ISSA Technology Co Ltd
Priority to CN202211730300.6A priority Critical patent/CN116401392B/en
Publication of CN116401392A publication Critical patent/CN116401392A/en
Application granted granted Critical
Publication of CN116401392B publication Critical patent/CN116401392B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/535 Filtering based on additional data, e.g. user or group profiles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an image retrieval method which not only retrieves images from an original image database according to retrieval conditions input by a user and displays the original images conforming to those conditions to the user, but also lets the user select, by dragging, an intermediate image containing the person to be retrieved, and then retrieves associated images from the original image database according to that intermediate image.

Description

Image retrieval method, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image retrieval technologies, and in particular, to an image retrieval method, an electronic device, and a storage medium.
Background
With the continuous development of retrieval technology, the demand for image retrieval is gradually increasing. Existing image retrieval technology generally searches the database to be retrieved only once, using only the retrieval conditions input or selected by the user; the retrieval conditions are therefore few, the retrieval accuracy is low, and the user experience is poor. An image retrieval method that achieves higher retrieval accuracy and improves the user experience is therefore urgently needed.
Disclosure of Invention
Aiming at the technical problems, the invention adopts the following technical scheme:
the image retrieval method is applied to an image retrieval system, and the image retrieval system comprises an original image database and a display interface, wherein the original image database is used for storing a plurality of original images and original image information corresponding to the original images, and the original image information comprises event types corresponding to the original images;
the method comprises the following steps:
s100, acquiring image retrieval conditions input by a user; the image retrieval condition includes at least one of: a shooting time period corresponding to the image, a shooting place corresponding to the image, an event type corresponding to the image and a shooting subject corresponding to the image;
s200, performing image retrieval in an original image database according to image retrieval conditions input by a user to obtain a plurality of original images conforming to the image retrieval conditions;
s300, displaying a plurality of original images meeting the image retrieval conditions in an image display area in a display interface;
s400, responding to the dragging of the intermediate image by a user, and displaying an image receiving area at a preset position of a display interface; wherein the intermediate image is any one of a plurality of original images which meet the image retrieval condition;
S500, in response to the user dragging the intermediate image into the image receiving area, performing image retrieval in the original image database according to the intermediate image to determine a plurality of associated images; an associated image is an original image in which the person shown is the same person as in the intermediate image.
and S600, displaying the plurality of associated images in an image display area of the display interface.
The invention has at least the following beneficial effects: the image retrieval conditions input by the user are acquired; image retrieval is performed in the original image database according to those conditions to obtain a plurality of original images conforming to them; the plurality of original images conforming to the image retrieval conditions are displayed in the image display area of the display interface; in response to the user dragging an intermediate image, an image receiving area is displayed at a preset position of the display interface; in response to the user dragging the intermediate image into the image receiving area, image retrieval is performed in the original image database according to the intermediate image to determine a plurality of associated images; and the plurality of associated images are displayed in the image display area of the display interface. The invention thus not only retrieves images from the original image database according to the retrieval conditions input by the user and displays the original images conforming to those conditions to the user, but also lets the user select, by dragging, an intermediate image containing the person to be retrieved, and then retrieves the associated images from the original image database according to that intermediate image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for image retrieval according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The image retrieval method is characterized by being applied to an image retrieval system, wherein the image retrieval system comprises an original image database and a display interface, the original image database is used for storing a plurality of original images and original image information corresponding to the original images, and the original image information comprises event types corresponding to the original images.
In embodiments of the present invention, the event types include, but are not limited to: not wearing a helmet, illegally carrying passengers, running a red light, driving in reverse, occupying a motor vehicle lane, and illegal parking.
As shown in fig. 1, the method comprises the steps of:
s100, acquiring image retrieval conditions input by a user; the image retrieval condition includes at least one of: the method comprises the steps of shooting time periods corresponding to images, shooting places corresponding to the images, event types corresponding to the images and shooting subjects corresponding to the images.
S200, performing image retrieval in an original image database according to the image retrieval conditions input by the user, and obtaining a plurality of original images conforming to the image retrieval conditions.
Specifically, according to the image retrieval conditions input by the user, generating feature vectors corresponding to the image retrieval conditions, and carrying out image retrieval in an original image database according to the feature vectors corresponding to the image retrieval conditions so as to obtain a plurality of original images conforming to the image retrieval conditions.
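The patent does not specify how feature vectors are matched in S200. A minimal sketch, assuming cosine similarity between the condition's feature vector and each stored original-image vector, with a hypothetical score threshold (the function names, the database layout, and the 0.8 threshold are illustrative assumptions, not from the patent):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(db, query_vec, threshold=0.8):
    # db: list of (image_id, feature_vector) pairs; returns the ids of
    # images whose stored vector matches the query above the threshold.
    return [img_id for img_id, vec in db
            if cosine_similarity(vec, query_vec) >= threshold]
```

In practice such a scan would be replaced by an approximate nearest-neighbour index over the original image database, but the matching criterion is the same.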
S300, displaying a plurality of original images meeting the image retrieval conditions in an image display area in a display interface.
Specifically, the original images meeting the image retrieval conditions are arranged side by side in the image display area without overlapping; when the image display area cannot display all of the original images meeting the image retrieval conditions at once, the user can view the remaining ones by scrolling with the scroll control.
S400, responding to the dragging of the intermediate image by a user, and displaying an image receiving area at a preset position of a display interface; wherein the intermediate image is any one of a plurality of original images conforming to the image retrieval condition.
Specifically, when a user is sensed to drag any original image which meets the image retrieval condition, an image receiving area is displayed at a preset position of a display interface.
Further, the preset position of the display interface is the upper left corner of the display interface, and the area of the image receiving area is smaller than that of the image display area, so that the image receiving area does not excessively occlude the image display area.
S500, responding to the fact that a user drags the intermediate image into an image receiving area, and performing image retrieval in an original image database according to the intermediate image to determine a plurality of associated images; the related image is an original image in which the person corresponding to the intermediate image is the same person.
Specifically, when the image receiving area receives the intermediate image, the intermediate image is displayed in the image receiving area and its size is adaptively adjusted to fit the image receiving area; it can be understood that the intermediate image does not extend beyond the image receiving area and that its resolution does not change.
Further, the step S500 specifically includes the following steps:
S511, extracting image features of the intermediate image and determining a target image feature vector list corresponding to the intermediate image; the feature type corresponding to any target image feature vector is either a key feature or a general feature, and the feature dimension of any target image feature vector whose feature type is a key feature is larger than the feature dimension of any target image feature vector whose feature type is a general feature.
In the embodiment of the present invention, the target image feature vector list may further include feature vectors corresponding to image features input by a user.
Specifically, while the intermediate image is acquired, an input field and the image retrieval conditions input by the user are displayed in an image retrieval area of the display interface. The user can enter image features in the input field or click image retrieval conditions via the selection control; the image features entered in the input field and the clicked image retrieval conditions are converted into image feature vectors, which are added to the target image feature vector list.
Further, the general features include: the vehicles used by the target person, the gender corresponding to the target person and the color of the helmet worn by the target person; the key features include: facial features corresponding to the target person, body type features corresponding to the target person, and clothing features corresponding to the target person.
S512, obtaining the target image feature vectors whose feature type is a general feature from the target image feature vector list to obtain a general image feature vector list YB=(YB_1, YB_2, …, YB_i, …, YB_m), i=1, 2, …, m; wherein m is the number of general image feature vectors in the target image feature vector list and YB_i is the i-th general image feature vector in the target image feature vector list.
S513, searching in the original image database according to the general image feature vectors to determine a first image list set TY=(TY_1, TY_2, …, TY_i, …, TY_m); wherein the first image list corresponding to YB_i is TY_i=(TY_i1, TY_i2, …, TY_ij, …, TY_in(i)), j=1, 2, …, n(i), n(i) is the number of first images corresponding to YB_i, and TY_ij is the j-th first image retrieved from the original image database according to YB_i.
S514, performing intersection processing on TY to determine a second image list TE=(TE_1, TE_2, …, TE_f, …, TE_F), f=1, 2, …, F; wherein F is the number of second images in the second image list, TE_f is the f-th second image in the second image list, and a second image is a first image that is present in every first image list.
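Steps S513 and S514 amount to running one search per general feature vector and then intersecting the resulting candidate lists. A sketch of the intersection step (function and variable names are illustrative, not from the patent):

```python
def intersect_image_lists(first_image_lists):
    # S514: a second image is one that appears in EVERY first image
    # list, i.e. the intersection across the m per-feature candidate
    # lists produced by S513.
    if not first_image_lists:
        return []
    common = set(first_image_lists[0])
    for lst in first_image_lists[1:]:
        common &= set(lst)
    # Keep the ordering of the first list for a deterministic result.
    return [img for img in first_image_lists[0] if img in common]
```

Because every general feature must agree, each additional general feature vector can only shrink the second image list that the key-feature search in S516 has to scan.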
S515, obtaining the target image feature vectors whose feature type is a key feature from the target image feature vector list to obtain a key image feature vector list EX=(EX_1, EX_2, …, EX_r, …, EX_R), r=1, 2, …, R; wherein R is the number of key image feature vectors in the target image feature vector list and EX_r is the r-th key image feature vector in the target image feature vector list.
S516, searching in the second image list according to the key image feature vector list to determine a third image list TH=(TH_1, TH_2, …, TH_t, …, TH_T), t=1, 2, …, T; wherein T is the number of third images in the third image list and TH_t is the t-th third image retrieved from the second image list according to the key image feature vectors.
S517, acquiring a sharpness list QX=(QX_1, QX_2, …, QX_t, …, QX_T) corresponding to TH; wherein QX_t is the sharpness corresponding to TH_t.
S518, acquiring a third image feature vector list set XT=(XT_1, XT_2, …, XT_t, …, XT_T) corresponding to TH; wherein the third image feature vector list corresponding to TH_t is XT_t=(XT_t1, XT_t2, …, XT_tr, …, XT_tR), XT_tr is the r-th third image feature vector corresponding to TH_t, and the feature type of each third image feature vector is a key feature.
S519, acquiring an intermediate image feature vector list GX=(GX_1, GX_2, …, GX_r, …, GX_R) according to QX, EX and XT; wherein GX_r is the r-th intermediate image feature vector and GX_r satisfies a preset fusion formula (the formula appears only as an image in the source and is not reproduced here); b1 is a preset first image feature vector weight, b2 is a preset second image feature vector weight, b1+b2=1, and b1 ≥ b2.
Specifically, a person skilled in the art can set the value of b1 according to actual requirements. Because b1 is the weight of the image features corresponding to the intermediate image, and the intermediate image is an original image meeting the image retrieval conditions that the user dragged, the target person in the intermediate image is the person the user needs to retrieve. Giving the intermediate image the larger weight therefore ensures that the fused feature vectors do not differ greatly from the feature vectors corresponding to the target person; otherwise the retrieved images would not be the images the user requires.
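The S519 fusion formula survives only as an image in the source, so the exact expression is unknown. Purely as an assumption consistent with the surrounding description (b1 weights the intermediate image's key vector EX_r, b2 weights a sharpness-weighted average of the third images' key vectors XT_tr), one plausible reading can be sketched as:

```python
def fuse_key_vectors(EX, TH_sharpness, XT, b1=0.7, b2=0.3):
    # Hypothetical reconstruction of S519 (the actual formula is not
    # reproduced in the source): each fused vector GX_r blends the
    # intermediate image's key vector EX_r (weight b1) with the
    # sharpness-weighted average of the third images' r-th key vectors
    # XT[t][r] (weight b2), with b1 + b2 = 1 and b1 >= b2.
    assert abs(b1 + b2 - 1.0) < 1e-9 and b1 >= b2
    total_q = sum(TH_sharpness)
    GX = []
    for r, ex_r in enumerate(EX):
        dim = len(ex_r)
        avg = [0.0] * dim
        for t, q in enumerate(TH_sharpness):
            for d in range(dim):
                avg[d] += q / total_q * XT[t][r][d]
        GX.append([b1 * ex_r[d] + b2 * avg[d] for d in range(dim)])
    return GX
```

Whatever the exact formula, the stated constraints (sharper third images get larger weights, and the intermediate image's own key vectors get the largest weight) are preserved by this shape.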
S520, searching in an original image database according to the GX, and determining a plurality of associated images corresponding to the GX.
The above steps extract image features from the intermediate image to determine a target image feature vector list. The general image feature vectors, whose feature dimension is smaller than that of the key image feature vectors, are obtained from that list and used to search the original image database, determining the first image lists; intersection processing on the first image lists determines the second images. The second image list is then searched according to the key image feature vector list to determine the third images, and the sharpness values and feature vector lists corresponding to the third images are acquired. The intermediate image feature vector list is obtained from the key image feature vector list corresponding to the intermediate image together with the sharpness values and feature vector lists of the third images, and the original image database is searched according to the intermediate image feature vector list to determine the plurality of associated images.
The general image feature vectors and the key image feature vectors are distinguished and used in two successive retrievals. The first retrieval uses the general feature vectors to search the original image database: the feature dimension of the vectors used is small, but the amount of data searched is large. The second retrieval uses the key image feature vectors to search the second image list obtained through intersection processing: the feature dimension of the vectors used is large, but the amount of data searched is small. The time consumed by the two retrievals is thus balanced, avoiding an overly long retrieval time. Weights are then set for the third image feature vectors according to the sharpness of the corresponding third images obtained from the two retrievals: the weight of a third image feature vector is larger when the corresponding third image is sharper, and the weight of the key image feature vectors corresponding to the user-selected intermediate image is the largest. This improves the accuracy of the key image feature vectors, so that the selected associated images have the strongest degree of association with the intermediate image.
In another embodiment of the present invention, the step S500 specifically includes the following steps:
S551, performing similarity calculation between the target person corresponding to the intermediate image and the target person corresponding to each original image meeting the image retrieval conditions, and determining a target person similarity list.
S552, determining a key person similarity list RX=(RX_1, RX_2, …, RX_g, …, RX_G) according to the target person similarity list, g=1, 2, …, G; wherein G is the number of key person similarities and RX_g is the g-th key person similarity, namely a target person similarity greater than or equal to a preset similarity threshold x_0.
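Step S552 reduces to a threshold filter over the similarity list. A sketch (x0 = 0.75 is an assumed default; the patent leaves the threshold value unspecified):

```python
def key_person_similarities(target_similarities, x0=0.75):
    # S552: keep only the target-person similarities that reach the
    # preset threshold x0; these become the key person similarities RX.
    return [s for s in target_similarities if s >= x0]
```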
S553, when a preset condition on RX holds (the condition appears only as an image in the source and is not reproduced here), acquiring a fourth image list TF=(TF_1, TF_2, …, TF_g, …, TF_G) corresponding to RX; wherein TF_g is the fourth image corresponding to RX_g.
In particular, when the condition does not hold (this complementary condition likewise appears only as an image in the source), step S511 is performed.
S554, acquiring a sharpness list QF=(QF_1, QF_2, …, QF_g, …, QF_G) corresponding to TF; wherein QF_g is the sharpness corresponding to TF_g.
S555, extracting image features of the intermediate image and determining a target image feature vector list corresponding to the intermediate image; the feature type corresponding to any target image feature vector is either a key feature or a general feature, and the feature dimension of any target image feature vector whose feature type is a key feature is larger than the feature dimension of any target image feature vector whose feature type is a general feature.
S556, obtaining the target image feature vectors whose feature type is a key feature from the target image feature vector list to obtain a key image feature vector list EX=(EX_1, EX_2, …, EX_r, …, EX_R), r=1, 2, …, R; wherein R is the number of key image feature vectors in the target image feature vector list and EX_r is the r-th key image feature vector in the target image feature vector list.
S557, acquiring a fourth image feature vector list set XF=(XF_1, XF_2, …, XF_g, …, XF_G) corresponding to TF; wherein the fourth image feature vector list corresponding to TF_g is XF_g=(XF_g1, XF_g2, …, XF_gr, …, XF_gR), r=1, 2, …, R, R is the number of fourth image feature vectors corresponding to TF_g, XF_gr is the r-th fourth image feature vector corresponding to TF_g, and the feature type of each fourth image feature vector is a key feature.
S558, acquiring a designated image feature vector list ZX=(ZX_1, ZX_2, …, ZX_r, …, ZX_R) according to QF, EX and XF; wherein ZX_r is the r-th designated image feature vector and ZX_r satisfies a preset fusion formula (the formula appears only as an image in the source and is not reproduced here); b1 is a preset first image feature vector weight, b2 is a preset second image feature vector weight, b1+b2=1, and b1 ≥ b2.
S559, searching in an original image database according to ZX, and determining a plurality of associated images corresponding to the ZX.
Similarity calculation is performed between the target person corresponding to the intermediate image and the target person corresponding to each original image meeting the image retrieval conditions, and a target person similarity list is determined. The target person similarities greater than or equal to the preset similarity threshold are taken as the key person similarities, and the fourth images corresponding to the key person similarities are determined. The key image feature vector list corresponding to the intermediate image, together with the sharpness values and feature vector lists corresponding to the fourth images, is then used to obtain the designated image feature vector list, and the original image database is searched according to the designated image feature vector list to determine the corresponding associated images.
And S600, displaying the plurality of associated images in an image display area of the display interface.
According to the method, the image retrieval conditions input by the user are acquired; image retrieval is performed in the original image database according to those conditions to obtain a plurality of original images conforming to them; the plurality of original images conforming to the image retrieval conditions are displayed in the image display area of the display interface; in response to the user dragging an intermediate image, an image receiving area is displayed at a preset position of the display interface; in response to the user dragging the intermediate image into the image receiving area, image retrieval is performed in the original image database according to the intermediate image to determine a plurality of associated images; and the plurality of associated images are displayed in the image display area of the display interface. The method thus not only retrieves images from the original image database according to the retrieval conditions input by the user and displays the original images conforming to those conditions to the user, but also lets the user select, by dragging, an intermediate image containing the person to be retrieved, and then retrieves the associated images from the original image database according to that intermediate image.
In the embodiment of the invention, the original image database is acquired through the following steps:
acquiring target detection task information corresponding to a target camera; the target detection task information includes: target task type list d2= (d 2) 1 ,d2 2 ,……,d2 ω ,……,d2 ψ ) Target mark information list set d3= (d 3) corresponding to d2 1 ,d3 2 ,……,d3 ω ,……,d3 ψ ) D3-corresponding target determination auxiliary parameter list set d4= (d 4) 1 ,d4 2 ,……,d4 ω ,……,d4 ψ ), ω =1, 2, … … ψ, which is the number of target task types in the target detection task, d2 ω Is the first ω Target task types, d2 ω Marking information list d3 of corresponding target area ω =(d3 ω1 ,d3 ω2 ,……,d3 ωw ,……,d3 ωW(ω) ) W=1, 2, … …, W (ω) is d2 ω Number of corresponding target areas, d3 ωw Is d3 ω W-th tag information of d3 ω Corresponding target judgment auxiliary parameter list d4 ω =(d4 ω1 ,d4 ω2 ,……,d4 ωw ,……,d4 ωw(ω) ),d4 ωw Is d3 ωw Corresponding target judgment auxiliary parameters, wherein the target area is an area formed by corresponding a corner coordinates of the target mark in the target image, and the target area is a region formed by corresponding a corner coordinates of the target mark in the target imageThe target image is an image shot by the target camera.
Preferably, a=4.
Further, the target tasks include, but are not limited to: not wearing a helmet, illegally carrying passengers, running a red light, driving in reverse, occupying a motor vehicle lane, and illegal parking.
Further, the target image is any video frame of historical shooting corresponding to the target camera.
Further, when a user sends a task establishment request for the target camera, the target image corresponding to the target camera is acquired.
Acquiring a first type identifier list set D=(D_1, D_2, …, D_ω, …, D_ψ) marked on the target image by the user; wherein the first type identifier list corresponding to d2_ω is D_ω=(D_ω1, D_ω2, …, D_ωw, …, D_ωW(ω)), D_ωw is the w-th first type identifier corresponding to d2_ω, and D_ωw marks the w-th target area corresponding to the ω-th target task type on the target image.
Specifically, a first type identifier can be understood as an identifier corresponding to one of the a corner points of the target area; further, the target area can be understood as the enclosed area formed by connecting the a corner points.
According to D, the target marking information set d3 of the target areas corresponding to D is determined.
Specifically, this step includes:
will D ωw The pixel coordinates of each corner of (a) in the target image are determined as D ωw Corresponding marking information d3 ωw To obtain the target mark information set d3.
According to D, acquiring a second type identifier list set E=(E_1, E_2, …, E_ω, …, E_ψ) marked on the target image by the user; wherein the second type identifier list corresponding to D_ω is E_ω=(E_ω1, E_ω2, …, E_ωw, …, E_ωW(ω)), and E_ωw is the second type task identifier marked in the target area corresponding to D_ωw.
Specifically, the second type of identification includes, but is not limited to: arrow form designations and indicator line form designations.
According to E, the target determination auxiliary parameter list set d4 corresponding to E is determined.
A first type identifier list set marked on the target image by the user is acquired to generate the target marking information set of the corresponding target areas, and a second type identifier list set marked on the target image by the user is acquired to obtain the corresponding target determination auxiliary parameter list set. Instead of detecting every area of the video frames acquired by the target camera, the corresponding identification information is generated according to the user's marks, so that the detection area is more accurate and better meets the user's needs.
Specifically, according to the direction corresponding to the first sub-mark in the second type identifier, an included angle W between the first sub-mark and the y axis of the target image is determined; the first sub-mark is the arrow-shaped identifier among the second type identifiers.
In particular, the y-axis is understood to be the y-axis in a planar rectangular coordinate system.
Acquiring a pixel point coordinate list K=(K_1, K_2, …, K_q, …, K_Q) corresponding to the first sub-mark, q=1, 2, …, Q; wherein Q is the number of pixel point coordinates corresponding to the first sub-mark, K_q is the coordinate of the q-th pixel point of the first sub-mark on the target image, and K_q=(HK_q, ZK_q), HK_q being the abscissa corresponding to K_q and ZK_q the ordinate corresponding to K_q.
According to K_q, the maximum abscissa difference HK_max and the maximum ordinate difference ZK_max corresponding to the first sub-mark are acquired; the maximum abscissa difference is the maximum value of the difference between the abscissas of any two pixel points corresponding to the first sub-mark, and the maximum ordinate difference is the maximum value of the difference between the ordinates of any two pixel points corresponding to the first sub-mark.
When W is less than or equal to 45 degrees, the number F1 of pixel point coordinates corresponding to the first area of the first sub-mark and the number F2 of pixel point coordinates corresponding to the second area are acquired; the first area is the area above the ordinate corresponding to ZK_max/2 in the rectangular coordinate system, and the second area is the area below the ordinate corresponding to ZK_max/2.
When F1 > F2, the direction of the first sub-mark is the upward direction along the y-axis in the rectangular coordinate system.
When F1 is smaller than F2, the direction of the first sub-mark is the downward direction along the y axis in the rectangular coordinate system.
When W is larger than 45 degrees, the number F3 of pixel point coordinates corresponding to the third area of the first sub-mark and the number F4 of pixel point coordinates corresponding to the fourth area are acquired; the third area is the area to the left of the abscissa corresponding to HK_max/2 in the rectangular coordinate system, and the fourth area is the area to the right of the abscissa corresponding to HK_max/2.
When F3 > F4, the direction of the first sub-mark is the left direction along the x-axis in the rectangular coordinate system.
And when F3 is smaller than F4, the direction of the first sub-mark is the right direction along the x-axis in the rectangular coordinate system.
The included angle between the first sub-mark and the y-axis of the target image is acquired, the coordinates of each pixel point corresponding to the first sub-mark are acquired, and the maximum abscissa difference and the maximum ordinate difference are obtained from these coordinates; the placement direction of the first sub-mark in the target image (transverse or longitudinal) is determined according to the included angle, and the pointing direction of the first sub-mark is obtained according to the number of pixel points in each area delimited by the maximum abscissa difference and the maximum ordinate difference.
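The pointing-direction logic above can be sketched as follows; this is a minimal illustration assuming the ZK_max/2 and HK_max/2 thresholds are taken at the midpoint of the sub-mark's ordinate/abscissa range, and all function and variable names are illustrative rather than taken from the patent.

```python
# Determine the pointing direction of an arrow-shaped sub-mark from its
# pixel coordinates and its included angle W with the image's y-axis.

def arrow_direction(points, angle_w):
    """points: (x, y) pixel coordinates of the first sub-mark;
    angle_w: included angle W with the y-axis, in degrees."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x_mid = (min(xs) + max(xs)) / 2   # midpoint of the HK_max span
    y_mid = (min(ys) + max(ys)) / 2   # midpoint of the ZK_max span
    if angle_w <= 45:                 # longitudinally placed arrow
        f1 = sum(1 for _, y in points if y > y_mid)   # pixels in the upper area
        f2 = sum(1 for _, y in points if y < y_mid)   # pixels in the lower area
        return "up" if f1 > f2 else "down"
    f3 = sum(1 for x, _ in points if x < x_mid)       # pixels in the left area
    f4 = sum(1 for x, _ in points if x > x_mid)       # pixels in the right area
    return "left" if f3 > f4 else "right"
```

An arrowhead has more pixels on the side it points toward, which is what the F1/F2 and F3/F4 comparisons exploit.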
A video frame list Z = (Z_1, Z_2, ……, Z_v, ……, Z_V) corresponding to the target camera is obtained, v = 1, 2, ……, V; wherein V is the number of video frames in Z, and Z_v is the v-th video frame corresponding to the target camera.
Specifically, Z is the list of video frames obtained by converting the video captured by the target camera in real time.
According to d3, image extraction is performed on Z_v to obtain an extracted region image list set Y_v = (Y_v1, Y_v2, ……, Y_vω, ……, Y_vψ) corresponding to Z_v; wherein the extracted region image list corresponding to the ω-th target task type is Y_vω = (Y_vω1, Y_vω2, ……, Y_vωw, ……, Y_vωw(ω)), and Y_vωw is the extracted region image obtained by extracting Z_v according to d3_ωw.
Specifically, those of ordinary skill in the art know that any method of extracting the extracted region image from Z_v according to d3_ωw falls within the protection scope of the present invention, which is not described herein.
Further, in the embodiments of the present invention, only the processing of Z_v is shown as an example; in practical applications, every video frame in Z is processed in the same way.
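The extraction of Y_vωw can be illustrated with a minimal sketch, assuming the mark information d3_ωw consists of rectangular corner coordinates and a frame is a 2-D array of pixel values (names are illustrative):

```python
# Crop the axis-aligned bounding box of the marked corner points from a frame.
# This is only one possible extraction method; as the text notes, any method
# of extracting the region according to the mark information applies.

def extract_region(frame, corners):
    """frame: 2-D list of pixel values; corners: (x, y) corner coordinates."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return [row[min(xs):max(xs) + 1] for row in frame[min(ys):max(ys) + 1]]
```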
According to d4, Y_v is detected to obtain a judgment result list set S_v = (S_v1, S_v2, ……, S_vω, ……, S_vψ) corresponding to Y_v; wherein the judgment result list corresponding to Y_vω is S_vω = (S_vω1, S_vω2, ……, S_vωw, ……, S_vωw(ω)), and S_vωw is the judgment result corresponding to Y_vωw when Y_vωw is detected under the judgment rule generated according to d4_ωw, the judgment result being a result that violates the judgment rule.
Specifically, those of ordinary skill in the art know that any method of generating the judgment rule according to d4_ωw falls within the protection scope of the present invention, which is not described herein.
Further, the judgment rule may include: whether the vehicle driving direction accords with the specified driving direction and whether the vehicle driving speed accords with the vehicle-mounted standard.
According to S_v, a judgment result information list set X_v = (X_v1, X_v2, ……, X_vω, ……, X_vψ) corresponding to S_v is obtained; wherein the judgment result information list corresponding to S_vω is X_vω = (X_vω1, X_vω2, ……, X_vωφ, ……, X_vωσ(ω)), φ = 1, 2, ……, σ(ω), σ(ω) is the number of judgment result information corresponding to S_vω, the φ-th judgment result information in X_vω is X_vωφ = (d2_ω, QX_ωφ), and QX_ωφ is the marking information of the extracted region image corresponding to X_vωφ.
The image corresponding to X_vωφ is acquired as an original image, and an original image database is constructed.
Specifically, σ(ω) satisfies the following condition: σ(ω) is the sum of the number of judgment results corresponding to S_vω1, the number of judgment results corresponding to S_vω2, ……, the number of judgment results corresponding to S_vωw, ……, and the number of judgment results corresponding to S_vωw(ω).
Through the various marks made by the user on the target image, the target mark information and the target judgment auxiliary parameters are acquired, and the judgment rules are generated from the target judgment auxiliary parameters; image extraction is performed on the video frames corresponding to the target camera according to the target mark information to obtain a region image list set, each region image in the region image list is judged against its corresponding judgment rule to obtain a judgment result list set, and the judgment result information corresponding to each judgment result is obtained; therefore, instead of detecting all areas of the video frames acquired by the target camera, corresponding identification information is generated according to the user's marks, so that the detection area is more accurate and the detection result better meets the user's needs, and each extracted region image is detected simultaneously during detection, which improves the time efficiency.
In the embodiment of the present invention, the identification task request further includes a target task type identifier list d5 = (d5_1, d5_2, ……, d5_ω, ……, d5_ψ) corresponding to d2 and a target task detection period list d6 = (d6_1, d6_2, ……, d6_ω, ……, d6_ψ) corresponding to d5, wherein d5_ω is the target task type identifier corresponding to d2_ω, and d6_ω is the target task detection period corresponding to d5_ω.
In the embodiment of the invention, the task detection time period corresponding to each task type can be set, and the task detection time periods corresponding to different task types can be the same or different; therefore, each task type does not need to be detected at every moment, which saves the computing resources of the server, shortens the detection time, and improves the time efficiency.
Specifically, according to d6, X_v is detected to obtain a key result information list set G_v = (G_v1, G_v2, ……, G_vω, ……, G_vψ) corresponding to X_v that meets a preset time condition; wherein the key result information list corresponding to X_vω is G_vω = (G_vω1, G_vω2, ……, G_vωb, ……, G_vωB(ω)), b = 1, 2, ……, B(ω), B(ω) is the number of key result information corresponding to X_vω that meets the preset time condition, the b-th key result information corresponding to X_vω is G_vωb = (d2_ω, d5_ω, QG_vωb, TG_vωb), QG_vωb is the marking information of the extracted region image corresponding to G_vωb, TG_vωb is the shooting time corresponding to Z_v, and the preset time condition is TG_vωb ∈ d6_ω.
Specifically, according to X_v, d5, and d6, an intermediate result information list set F_v = (F_v1, F_v2, ……, F_vω, ……, F_vψ) corresponding to X_v is obtained; wherein the intermediate result information list corresponding to X_vω is F_vω = (F_vω1, F_vω2, ……, F_vωφ, ……, F_vωσ(ω)), the φ-th intermediate result information corresponding to X_vω is F_vωφ = (d2_ω, d5_ω, QF_vωφ, TF_vωφ, d6^0_φ, d6^1_φ), QF_vωφ is the marking information of the extracted region image corresponding to F_vωφ, TF_vωφ is the shooting time corresponding to Z_v, d6^0_φ is the task detection initial time point corresponding to d6_ω, and d6^1_φ is the task detection end time point corresponding to d6_ω;
If d6^0_φ ≤ TF_vωφ ≤ d6^1_φ, S630 is performed; if TF_vωφ < d6^0_φ or TF_vωφ > d6^1_φ, S640 is performed;
S630, determining the intermediate result information corresponding to TF_vωφ as key result information;
S640, deleting the intermediate result information corresponding to TF_vωφ.
A task execution time period list corresponding to the task type identifiers is acquired, the task execution initial time point and the task execution end time point are obtained from each task execution time period, and the judgment result information is screened against these time points, so that the judgment result information whose shooting time falls within the corresponding task execution time period is obtained as key result information; only the judgment result information corresponding to the designated task execution time periods is retained, and the judgment result information outside the task execution time periods designated by the user is deleted, which improves the utilization efficiency of the server's storage space.
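This screening step can be sketched minimally, assuming each judgment result is paired with its shooting time (the pair layout and names are illustrative assumptions):

```python
# Keep only the judgment result information whose shooting time lies inside
# the task execution time period [t_start, t_end]; everything else is dropped.

def filter_key_results(results, t_start, t_end):
    """results: list of (info, shooting_time) pairs."""
    return [(info, t) for info, t in results if t_start <= t <= t_end]
```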
d6 is detected once at preset time intervals to obtain a first task type list d2^0 = (d2^0_1, d2^0_2, ……, d2^0_h, ……, d2^0_H), a first tag information list set d3^0 = (d3^0_1, d3^0_2, ……, d3^0_h, ……, d3^0_H) corresponding to d2^0, and a first judgment auxiliary parameter list set d4^0 = (d4^0_1, d4^0_2, ……, d4^0_h, ……, d4^0_H) corresponding to d3^0, h = 1, 2, ……, H; wherein H is the number of first task types, H ≤ ψ, d2^0_h is the h-th first task type, d3^0_h is the first tag information list corresponding to d2^0_h, d4^0_h is the first judgment auxiliary parameter list corresponding to d3^0_h, and the first task type is a task type whose task detection time period contains the current detection time.
Specifically, the preset time interval can be set by those skilled in the art according to actual needs, and is not described herein.
The task detection time periods are detected once at preset time intervals to obtain the target task types whose detection time periods contain the current detection time, together with the first tag information list set and the first judgment auxiliary parameter list set; tasks that do not need to be detected are closed, so that they do not occupy the detection resources of the server, and the data processing amount in the server is reduced.
When a change identification task request sent by the target terminal is received, a second task type list d2^1 = (d2^1_1, d2^1_2, ……, d2^1_u, ……, d2^1_U), a second tag information list set d3^1 = (d3^1_1, d3^1_2, ……, d3^1_u, ……, d3^1_U) corresponding to d2^1, and a second judgment auxiliary parameter list set d4^1 = (d4^1_1, d4^1_2, ……, d4^1_u, ……, d4^1_U) corresponding to d3^1 are acquired, u = 1, 2, ……, U; wherein U is the number of second task types, d2^1_u is the u-th second task type, d3^1_u is the second tag information list corresponding to d2^1_u, d4^1_u is the second judgment auxiliary parameter list corresponding to d3^1_u, and the second task type is the task type corresponding to the changed identification task request;
Because the user may change the target detection task information while the target detection task is running, the changed target task detection information is updated immediately, and the second task type list, the second tag information list set, and the second judgment auxiliary parameter list set are acquired.
When the object to be identified is a specified image corresponding to the specified camera, acquiring judgment result information through the following steps:
acquiring specified image information ZP= (XP, TP) corresponding to a specified camera, wherein XP is a specified image corresponding to the specified camera, and TP is shooting time corresponding to XP.
Specified detection task information corresponding to ZP is acquired; the specified detection task information includes: a specified task type list p2 = (p2_1, p2_2, ……, p2_α, ……, p2_β), a specified mark information list set p3 = (p3_1, p3_2, ……, p3_α, ……, p3_β) corresponding to p2, a specified judgment auxiliary parameter list set p4 = (p4_1, p4_2, ……, p4_α, ……, p4_β) corresponding to p3, a specified task type identifier list p5 = (p5_1, p5_2, ……, p5_α, ……, p5_β) corresponding to p2, and a specified task detection period list p6 = (p6_1, p6_2, ……, p6_α, ……, p6_β) corresponding to p5, α = 1, 2, ……, β; wherein β is the number of specified task types of the specified detection task, p2_α is the α-th specified task type, p5_α is the specified task type identifier corresponding to p2_α, p6_α is the specified task detection period corresponding to p5_α, the specified-area mark information list corresponding to p2_α is p3_α = (p3_α1, p3_α2, ……, p3_αγ, ……, p3_αδ(α)), γ = 1, 2, ……, δ(α), δ(α) is the number of specified areas corresponding to p2_α, p3_αγ is the γ-th mark information in p3_α, the specified judgment auxiliary parameter list corresponding to p3_α is p4_α = (p4_α1, p4_α2, ……, p4_αγ, ……, p4_αδ(α)), and p4_αγ is the specified judgment auxiliary parameter corresponding to p3_αγ; the specified task type is a task type that meets a first preset time condition, the first preset time condition being: TP ∈ p6_α; the specified area is the area formed by the A corner coordinates corresponding to the specified mark in the specified image, and the specified image is an image shot by the specified camera.
Preferably, A = 4.
Further, the specified tasks include, but are not limited to: not wearing a helmet, illegally carrying passengers, running a red light, driving in the wrong direction, motor vehicles occupying a lane, and illegal parking.
Further, the specified image is any image shot by the specified camera.
In the embodiment of the invention, preset detection task information corresponding to the specified camera is acquired, and the preset detection task information includes: a preset task type list p2′ = (p2′_1, p2′_2, ……, p2′_c, ……, p2′_u), a preset mark information list set p3′ = (p3′_1, p3′_2, ……, p3′_c, ……, p3′_u) corresponding to p2′, a preset judgment auxiliary parameter list set p4′ = (p4′_1, p4′_2, ……, p4′_c, ……, p4′_u) corresponding to p3′, a preset task type identifier list p5′ = (p5′_1, p5′_2, ……, p5′_c, ……, p5′_u) corresponding to p2′, and a preset task detection period list p6′ = (p6′_1, p6′_2, ……, p6′_c, ……, p6′_u) corresponding to p5′, c = 1, 2, ……, u; wherein u is the number of preset task types in the preset detection task, p2′_c is the c-th preset task type, p5′_c is the preset task type identifier corresponding to p2′_c, p6′_c is the preset task detection period corresponding to p5′_c, the preset-area mark information list corresponding to p2′_c is p3′_c = (p3′_c1, p3′_c2, ……, p3′_cy, ……, p3′_cY(c)), y = 1, 2, ……, Y(c), Y(c) is the number of preset areas corresponding to p2′_c, p3′_cy is the y-th mark information in p3′_c, the preset judgment auxiliary parameter list corresponding to p3′_c is p4′_c = (p4′_c1, p4′_c2, ……, p4′_cy, ……, p4′_cY(c)), and p4′_cy is the preset judgment auxiliary parameter corresponding to p3′_cy;
in the embodiment of the invention, the appointed image corresponding to the appointed camera is acquired in response to the task establishment request aiming at the appointed camera sent by the user.
A third type identification list set D′ = (D′_1, D′_2, ……, D′_c, ……, D′_u) marked on the specified image by the user is acquired; wherein the third type identification list corresponding to p2′_c is D′_c = (D′_c1, D′_c2, ……, D′_cy, ……, D′_cY(c)), and D′_cy is the y-th third type identification corresponding to p2′_c, marked in the y-th preset area corresponding to the c-th preset task type on the specified image.
Specifically, the third type of identifier may be understood as the identifiers corresponding to the A corner points of the preset area; further, the preset area may be understood as the area enclosed by connecting the A corner points.
According to D′, the preset mark information set p3′ of the corresponding preset areas is determined.
In the embodiment of the invention, the pixel coordinates of each corner point of D′_cy in the specified image are determined as the mark information p3′_cy corresponding to D′_cy, so as to obtain the preset mark information set p3′.
According to D′, a fourth type identification list set E′ = (E′_1, E′_2, ……, E′_c, ……, E′_u) marked on the specified image by the user is obtained; wherein the fourth type identification list corresponding to D′_c is E′_c = (E′_c1, E′_c2, ……, E′_cy, ……, E′_cY(c)), and E′_cy is the fourth type task identification marked in the preset area corresponding to D′_cy.
Specifically, the fourth type of identification includes, but is not limited to: arrow-form identifications and indicator-line-form identifications.
A preset judgment auxiliary parameter list set p4′ corresponding to E′ is determined according to E′.
A preset mark information set of the corresponding preset areas is generated by acquiring the third type identification list set marked on the specified image by the user, and a corresponding preset judgment auxiliary parameter list set is acquired by acquiring the fourth type identification list set marked on the specified image by the user; therefore, instead of detecting all areas of the specified image acquired by the specified camera, corresponding identification information is generated according to the user's marks, so that the detection area is more accurate and better meets the user's needs.
A preset task detection period information list Lp6′ = (Lp6′_1, Lp6′_2, ……, Lp6′_c, ……, Lp6′_u) is acquired; wherein the preset task detection period information corresponding to p6′_c is Lp6′_c = (Cp6′_c, Zp6′_c), Cp6′_c is the preset task detection initial time point corresponding to p6′_c, and Zp6′_c is the preset task detection end time point corresponding to p6′_c;
If Cp6′_c ≤ TP ≤ Zp6′_c, the intermediate task detection sub-information corresponding to p6′_c is added to the specified task detection information, wherein the intermediate task detection sub-information includes: p2′_c, p3′_c, p4′_c, p5′_c, and p6′_c.
The intermediate task detection sub-information is acquired from the preset detection task information, thereby generating the specified task detection information; instead of detecting all preset task types corresponding to the specified image, only the specified detection task types required by the user at the time point when the specified image was shot are detected, which keeps the detection result accurate while saving the computing resources of the server, shortens the detection time, and improves the time efficiency.
When a change identification task request sent by the target terminal is received, a third task type list p2^1 = (p2^1_1, p2^1_2, ……, p2^1_ε, ……, p2^1_η), a third tag information list set p3^1 = (p3^1_1, p3^1_2, ……, p3^1_ε, ……, p3^1_η) corresponding to p2^1, and a third judgment auxiliary parameter list set p4^1 = (p4^1_1, p4^1_2, ……, p4^1_ε, ……, p4^1_η) corresponding to p3^1 are acquired, ε = 1, 2, ……, η; wherein η is the number of third task types, p2^1_ε is the ε-th third task type, p3^1_ε is the third tag information list corresponding to p2^1_ε, p4^1_ε is the third judgment auxiliary parameter list corresponding to p3^1_ε, and the third task type is the task type corresponding to the changed identification task request.
Image extraction is carried out on XP according to p3 to obtain an extracted region image list set HP = (HP_1, HP_2, ……, HP_α, ……, HP_β) corresponding to XP; wherein the extracted region image list corresponding to the α-th specified task type is HP_α = (HP_α1, HP_α2, ……, HP_αγ, ……, HP_αδ(α)), and HP_αγ is the extracted region image obtained by extracting XP according to p3_αγ.
Specifically, those skilled in the art know that any method for extracting an image of an extracted region obtained by extracting an image of XP according to p3 falls into the protection scope of the present invention, and will not be described herein.
HP is detected according to p4 to obtain a judgment result list set DP = (DP_1, DP_2, ……, DP_α, ……, DP_β) corresponding to HP; wherein the judgment result list corresponding to HP_α is DP_α = (DP_α1, DP_α2, ……, DP_αγ, ……, DP_αδ(α)), and DP_αγ is the judgment result corresponding to HP_αγ when HP_αγ is detected under the judgment rule generated according to p4_αγ, the judgment result being a result that violates the judgment rule.
Specifically, those of ordinary skill in the art know that any method of generating the judgment rule according to p4_αγ falls within the protection scope of the present invention, which is not described herein.
Further, the judgment rule may include: whether the vehicle driving direction accords with the specified driving direction and whether the vehicle driving speed accords with the vehicle-mounted standard.
According to DP, a judgment result information list set JP = (JP_1, JP_2, ……, JP_α, ……, JP_β) corresponding to DP is obtained; wherein the judgment result information list corresponding to DP_α is JP_α = (JP_α1, JP_α2, ……, JP_αv, ……, JP_αw(α)), v = 1, 2, ……, w(α), w(α) is the number of judgment result information corresponding to DP_α, the v-th judgment result information in JP_α is JP_αv = (p2_α, p5_α, TP, QP_αv), and QP_αv is the marking information of the extracted region image corresponding to JP_αv.
The image corresponding to JP_αv is acquired as an original image, and an original image database is constructed.
The specified image information corresponding to the specified camera is acquired, the specified detection task information is acquired according to the specified image information, the specified mark information and the specified judgment auxiliary parameters are acquired, and the judgment rules are generated from the specified judgment auxiliary parameters; image extraction is performed on the specified image corresponding to the specified camera according to the specified mark information to obtain a region image list set, each region image in the region image list is judged against its corresponding judgment rule to obtain a judgment result list set, and the judgment result information corresponding to each judgment result is obtained; therefore, instead of detecting all areas of the specified image, corresponding identification information is generated according to the user's marks, so that the detection area is more accurate and the detection result better meets the user's needs, and each extracted region image is detected simultaneously during detection, which improves the time efficiency.
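The specified-image flow can be condensed into a toy sketch: regions are cropped according to the mark information, each region is checked against the judgment rule derived from its auxiliary parameter, and only violations become judgment result information. The mark layout, the rule predicates, and all names are illustrative assumptions, not the patent's implementation.

```python
def detect_specified_image(image, marks, rules, task_type, task_id, shot_time):
    """image: 2-D list of pixels; marks: {region: (x0, y0, x1, y1)};
    rules: {region: predicate returning True when the rule is violated}."""
    results = []
    for name, (x0, y0, x1, y1) in marks.items():
        region = [row[x0:x1] for row in image[y0:y1]]   # extract the marked area
        if rules[name](region):                          # judgment rule violated?
            results.append((task_type, task_id, name, shot_time))
    return results
```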
Embodiments of the present invention also provide a non-transitory computer-readable storage medium that may be disposed in an electronic device to store at least one instruction or at least one program for implementing the above method embodiments, the at least one instruction or the at least one program being loaded and executed by a processor to implement the methods provided by the embodiments described above.
Embodiments of the present invention also provide an electronic device comprising a processor and the aforementioned non-transitory computer-readable storage medium.
Embodiments of the present invention also provide a computer program product comprising program code for causing an electronic device to carry out the steps of the method according to the various exemplary embodiments of the invention as described in the specification, when said program product is run on the electronic device.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. Those skilled in the art will also appreciate that many modifications may be made to the embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. An image retrieval method, characterized in that the method is applied to an image retrieval system, wherein the image retrieval system comprises an original image database and a display interface, the original image database is used for storing a plurality of original images and original image information corresponding to the original images, and the original image information comprises event types corresponding to the original images;
the method comprises the following steps:
S100, acquiring image retrieval conditions input by a user; the image retrieval condition includes at least one of: a shooting time period corresponding to the image, a shooting place corresponding to the image, an event type corresponding to the image, and a shooting subject corresponding to the image;
S200, performing image retrieval in the original image database according to the image retrieval conditions input by the user to obtain a plurality of original images conforming to the image retrieval conditions;
S300, displaying the plurality of original images meeting the image retrieval conditions in an image display area of the display interface;
S400, in response to the user dragging an intermediate image, displaying an image receiving area at a preset position of the display interface; wherein the intermediate image is any one of the plurality of original images meeting the image retrieval condition;
S500, in response to the user dragging the intermediate image into the image receiving area, performing image retrieval in the original image database according to the intermediate image to determine a plurality of associated images; wherein an associated image is an original image in which the person is the same person as the person corresponding to the intermediate image;
and S600, displaying the plurality of associated images in an image display area of the display interface.
2. The method according to claim 1, wherein S500 comprises the steps of:
S511, extracting image features of the intermediate image, and determining a target image feature vector list corresponding to the intermediate image features; wherein the feature type corresponding to any target image feature vector is a key feature or a general feature, and the feature dimension of a target image feature vector whose feature type is a key feature is larger than the feature dimension of a target image feature vector whose feature type is a general feature;
S512, acquiring the target image feature vectors whose feature type is the general feature in the target image feature vector list to obtain a general image feature vector list YB = (YB_1, YB_2, ……, YB_i, ……, YB_m), i = 1, 2, ……, m; wherein m is the number of general image feature vectors in the target image feature vector list, and YB_i is the i-th general image feature vector in the target image feature vector list;
S513, searching in the original image database according to the general image feature vectors to determine a first image list set TY = (TY_1, TY_2, ……, TY_i, ……, TY_m); wherein the first image list corresponding to YB_i is TY_i = (TY_i1, TY_i2, ……, TY_ij, ……, TY_in(i)), j = 1, 2, ……, n(i), n(i) is the number of first images corresponding to YB_i, and TY_ij is the j-th first image retrieved from the original image database according to YB_i;
S514, performing intersection processing according to TY to determine a second image list TE = (TE_1, TE_2, ……, TE_f, ……, TE_F), f = 1, 2, ……, F; wherein F is the number of second images in the second image list, and TE_f is the f-th second image in the second image list;
S515, acquiring the target image feature vectors whose feature type is the key feature in the target image feature vector list to obtain a key image feature vector list EX = (EX_1, EX_2, ……, EX_r, ……, EX_R), r = 1, 2, ……, R; wherein R is the number of key image feature vectors in the target image feature vector list, and EX_r is the r-th key image feature vector in the target image feature vector list;
S516, searching in the second image list according to the key image feature vector list to determine a third image list TH = (TH_1, TH_2, ……, TH_t, ……, TH_T), t = 1, 2, ……, T; wherein T is the number of third images in the third image list, and TH_t is the t-th third image retrieved from the second image list according to the key image feature vectors;
S517, acquiring a definition list QX = (QX_1, QX_2, ……, QX_t, ……, QX_T) corresponding to TH; wherein QX_t is the definition corresponding to TH_t;
S518, acquiring a third image feature vector list set XT = (XT_1, XT_2, ……, XT_t, ……, XT_T) corresponding to TH; wherein the third image feature vector list corresponding to TH_t is XT_t = (XT_t1, XT_t2, ……, XT_tr, ……, XT_tR), XT_tr is the r-th third image feature vector corresponding to TH_t, and the feature type of the third image feature vector is the key feature;
S519, acquiring an intermediate image feature vector list GX = (GX_1, GX_2, ……, GX_r, ……, GX_R) according to QX, EX, and XT; wherein GX_r is the r-th intermediate image feature vector, and GX_r meets the following condition:
[formula image FDA0004031291010000021]
b1 is a preset first image feature vector weight, b2 is a preset second image feature vector weight, b1 + b2 = 1, and b1 ≥ b2;
S520, searching in the original image database according to GX, and determining a plurality of associated images corresponding to GX.
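Steps S514 and S519 can be sketched as follows. The second image list is the intersection of the per-feature first image lists; since the S519 condition appears only as a formula image in the source, the blend below is an assumed reading in which the query's key feature vector EX_r is combined (weights b1 ≥ b2, b1 + b2 = 1) with the definition-weighted mean of the third images' key feature vectors XT_tr. All names are illustrative.

```python
def second_image_list(ty):
    """S514: intersection of the first image lists TY_i (image IDs assumed hashable)."""
    common = set(ty[0])
    for lst in ty[1:]:
        common &= set(lst)
    return [img for img in ty[0] if img in common]   # keep the order of the first list

def intermediate_vector(ex_r, xt_r, qx, b1=0.6, b2=0.4):
    """S519 (assumed form): blend of EX_r with the QX-weighted mean of the XT_tr,
    where qx holds the definition (sharpness) of each third image."""
    total = sum(qx)
    mean = [sum(q * v[d] for q, v in zip(qx, xt_r)) / total
            for d in range(len(ex_r))]
    return [b1 * e + b2 * m for e, m in zip(ex_r, mean)]
```

Weighting by definition lets sharper third images contribute more to the refined query vector, which is one plausible reason the claim collects the definition list QX before this step.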
3. The method according to claim 1, wherein S500 comprises the steps of:
S551, performing similarity calculation on the target person corresponding to the intermediate image and the target person corresponding to each original image meeting the image retrieval conditions, and determining a target person similarity list;
S552, determining a key person similarity list RX = (RX_1, RX_2, ……, RX_g, ……, RX_G) according to the target person similarity list, g = 1, 2, ……, G; wherein G is the number of key person similarities, RX_g is the g-th key person similarity, and the key person similarity is a target person similarity greater than or equal to a preset similarity threshold x_0;
S553, when
[formula image FDA0004031291010000032]
holds, acquiring a fourth image list TF = (TF_1, TF_2, ……, TF_g, ……, TF_G) corresponding to RX; wherein TF_g is the fourth image corresponding to RX_g;
S554, acquiring a definition (sharpness) list QF=(QF_1, QF_2, ..., QF_g, ..., QF_G) corresponding to TF; wherein QF_g is the definition of TF_g;
S555, extracting image features of the intermediate image, and determining a target image feature vector list corresponding to the intermediate image; the feature type of any target image feature vector is either a key feature or a general feature, and the feature dimension of any target image feature vector whose feature type is a key feature is larger than that of any target image feature vector whose feature type is a general feature;
S556, acquiring the target image feature vectors whose feature type is a key feature from the target image feature vector list, to obtain a key image feature vector list EX=(EX_1, EX_2, ..., EX_r, ..., EX_R), r=1, 2, ..., R; wherein R is the number of key image feature vectors in the target image feature vector list, and EX_r is the r-th key image feature vector in the target image feature vector list;
S557, acquiring a fourth image feature vector list set XF=(XF_1, XF_2, ..., XF_g, ..., XF_G) corresponding to TF; wherein XF_g=(XF_g1, XF_g2, ..., XF_gr, ..., XF_gR) is the fourth image feature vector list corresponding to TF_g, r=1, 2, ..., R, R is the number of fourth image feature vectors corresponding to TF_g, XF_gr is the r-th fourth image feature vector corresponding to TF_g, and the feature type of each fourth image feature vector is a key feature;
S558, acquiring a designated image feature vector list ZX=(ZX_1, ZX_2, ..., ZX_r, ..., ZX_R) according to QF, EX and XF; wherein ZX_r is the r-th designated image feature vector, and ZX_r satisfies:
[formula rendered only as image FDA0004031291010000031 in the source]
b1 is a preset first image feature vector weight, b2 is a preset second image feature vector weight, b1+b2=1, and b1 ≥ b2;
S559, searching in an original image database according to ZX, and determining a plurality of associated images corresponding to ZX.
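Steps S551–S553 reduce to a threshold filter over the per-image similarity scores. A minimal sketch under that reading (the pairing of each similarity with its source image, and all names, are illustrative assumptions, not from the patent text):

```python
def select_key_similarities(target_sims, x0=0.9):
    """Sketch of S552/S553: keep the target person similarities that are
    at least the preset threshold x0, together with their source images.

    target_sims: list of (similarity, image) pairs produced by S551.
    Returns (RX, TF): the key person similarity list and the matching
    fourth image list.
    """
    kept = [(s, img) for s, img in target_sims if s >= x0]
    RX = [s for s, _ in kept]       # key person similarity list (S552)
    TF = [img for _, img in kept]   # corresponding fourth image list (S553)
    return RX, TF
```

Only images whose person passes the threshold then contribute their key feature vectors and sharpness scores to the fusion in S558.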
4. A method according to claim 3, characterized in that x_0 ∈ [0.9, 0.95].
5. The method of claim 4, wherein x_0 = 0.95.
6. The method of claim 1, wherein the event type comprises: not wearing a helmet, illegally carrying passengers, running a red light, driving in the wrong direction, motor-vehicle lane occupation, and illegal parking.
7. The method of claim 2, wherein the target image feature vector list further comprises feature vectors corresponding to image features entered by a user.
8. The method of claim 2, wherein the general features comprise: the vehicle used by the target person, the gender of the target person, and the color of the helmet worn by the target person; and the key features comprise: the facial features, body type features, and clothing features of the target person.
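Claim 8 enumerates the general and key feature types, and claim 2 requires key-feature vectors to have a larger dimension than general-feature vectors. A minimal sketch of that distinction (the concrete dimensions, names, and validation helper are illustrative assumptions; the patent only fixes the ordering, not the values):

```python
from dataclasses import dataclass, field
from typing import List

# Assumed dimensions: the claims only require key > general.
KEY_DIM, GENERAL_DIM = 512, 64

@dataclass
class TargetImageFeature:
    name: str                       # e.g. "face", "helmet_color" (illustrative)
    kind: str                       # "key" or "general"
    vector: List[float] = field(default_factory=list)

def satisfies_dimension_rule(features):
    """Check the claim-2 constraint: every key-feature vector is strictly
    longer than every general-feature vector in the list."""
    key_dims = [len(f.vector) for f in features if f.kind == "key"]
    gen_dims = [len(f.vector) for f in features if f.kind == "general"]
    return all(k > g for k in key_dims for g in gen_dims)
```

Giving identity-bearing attributes (face, body type, clothing) higher-dimensional embeddings than coarse attributes (gender, helmet color) concentrates retrieval capacity where it discriminates most.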
9. A non-transitory computer readable storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement the method of any one of claims 1-8.
10. An electronic device comprising a processor and the non-transitory computer readable storage medium of claim 9.
CN202211730300.6A 2022-12-30 2022-12-30 Image retrieval method, electronic equipment and storage medium Active CN116401392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211730300.6A CN116401392B (en) 2022-12-30 2022-12-30 Image retrieval method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116401392A true CN116401392A (en) 2023-07-07
CN116401392B CN116401392B (en) 2023-10-27

Family

ID=87008145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211730300.6A Active CN116401392B (en) 2022-12-30 2022-12-30 Image retrieval method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116401392B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001160062A (en) * 1999-12-03 2001-06-12 Mitsubishi Electric Corp Device for retrieving image data
JP2010072749A (en) * 2008-09-16 2010-04-02 Olympus Imaging Corp Image search device, digital camera, image search method, and image search program
CN102436491A (en) * 2011-11-08 2012-05-02 张三明 System and method used for searching huge amount of pictures and based on BigBase
CN106547744A (en) * 2015-09-16 2017-03-29 杭州海康威视数字技术股份有限公司 A kind of image search method and system
CN105426529A (en) * 2015-12-15 2016-03-23 中南大学 Image retrieval method and system based on user search intention positioning
CN105677868A (en) * 2016-01-08 2016-06-15 上海律巢网络科技有限公司 Method and system for displaying search interface
CN107657008A (en) * 2017-09-25 2018-02-02 中国科学院计算技术研究所 Across media training and search method based on depth discrimination sequence study
CN108563792A (en) * 2018-05-02 2018-09-21 百度在线网络技术(北京)有限公司 Image retrieval processing method, server, client and storage medium
CN111209446A (en) * 2018-11-22 2020-05-29 深圳云天励飞技术有限公司 Method and device for presenting personnel retrieval information and electronic equipment
CN110209866A (en) * 2019-05-30 2019-09-06 苏州浪潮智能科技有限公司 A kind of image search method, device, equipment and computer readable storage medium
CN111177440A (en) * 2019-12-20 2020-05-19 北京旷视科技有限公司 Target image retrieval method and device, computer equipment and storage medium
CN111209331A (en) * 2020-01-06 2020-05-29 北京旷视科技有限公司 Target object retrieval method and device and electronic equipment
CN111476319A (en) * 2020-05-08 2020-07-31 网易(杭州)网络有限公司 Commodity recommendation method and device, storage medium and computing equipment
CN111581423A (en) * 2020-05-29 2020-08-25 上海依图网络科技有限公司 Target retrieval method and device
WO2021237967A1 (en) * 2020-05-29 2021-12-02 上海依图网络科技有限公司 Target retrieval method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO Baohui; HUANG Wenzhun; WANG Harry Haoxiang; LIU Zhe: "Image Retrieval based on Color Features and Information Entropy", IEEE, pp. 1211-1214 *
WANG Haibo; ASKAR Hamdulla: "Research on Fast Image Retrieval Based on Color and Edge", Communications Technology (通信技术), no. 03, pp. 60-65 *

Also Published As

Publication number Publication date
CN116401392B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
CN110781350B (en) Pedestrian retrieval method and system oriented to full-picture monitoring scene
EP2290560A1 (en) Image search device and image search method
EP3164811B1 (en) Method for adding images for navigating through a set of images
JP6134168B2 (en) Information processing apparatus, information processing method, and system
US20140205186A1 (en) Techniques for Ground-Level Photo Geolocation Using Digital Elevation
JP6992883B2 (en) Model delivery system, method and program
US10810466B2 (en) Method for location inference from map images
WO2018041475A1 (en) Driver assistance system for determining a position of a vehicle
CN105716567A (en) Method for determining the distance between an object and a motor vehicle by means of a monocular imaging device
CN109074757B (en) Method, terminal and computer readable storage medium for establishing map
US11544926B2 (en) Image processing apparatus, method of processing image, and storage medium
WO2017046838A1 (en) Specific person detection system and specific person detection method
CN113052008A (en) Vehicle weight recognition method and device
US10140555B2 (en) Processing system, processing method, and recording medium
CN116401392B (en) Image retrieval method, electronic equipment and storage medium
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
WO2021196551A1 (en) Image retrieval method and apparatus, computer device, and storage medium
CN112926426A (en) Ship identification method, system, equipment and storage medium based on monitoring video
CN116069801B (en) Traffic video structured data generation method, device and medium
CN111986299B (en) Point cloud data processing method, device, equipment and storage medium
CN115131826B (en) Article detection and identification method, and network model training method and device
US20200003574A1 (en) Method and device for fast detection of repetitive structures in the image of a road scene
JP2021033494A (en) Annotation support method, annotation support device, and annotation support program
US20210042933A1 (en) Image processing apparatus and image processing method
JP2018124740A (en) Image retrieval system, image retrieval method and image retrieval program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant