CN115344738A - Retrieval method and system based on artificial intelligence - Google Patents

Retrieval method and system based on artificial intelligence

Info

Publication number
CN115344738A
CN115344738A (application CN202211269591.3A)
Authority
CN
China
Prior art keywords
image
character
information
pixel
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211269591.3A
Other languages
Chinese (zh)
Other versions
CN115344738B (en)
Inventor
邹洋 (Zou Yang)
刘思思 (Liu Sisi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhiguo Technology Co ltd
Original Assignee
Nantong Zhiguo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong Zhiguo Technology Co ltd filed Critical Nantong Zhiguo Technology Co ltd
Priority to CN202211269591.3A priority Critical patent/CN115344738B/en
Publication of CN115344738A publication Critical patent/CN115344738A/en
Application granted granted Critical
Publication of CN115344738B publication Critical patent/CN115344738B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/5846 — Retrieval of still image data characterised by using metadata automatically derived from the content, using extracted text
    • G06F 16/5866 — Retrieval of still image data characterised by using metadata manually generated, e.g. tags, keywords, comments, location and time information
    • G06F 3/04845 — GUI interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0486 — Drag-and-drop
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/28 — Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/761 — Proximity, similarity or dissimilarity measures
    • G06V 10/80 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06V 30/1456 — Selective acquisition, locating or processing of specific regions based on user interactions
    • G06V 30/18 — Extraction of features or characteristics of the image
    • G06V 30/19093 — Proximity measures, i.e. similarity or distance measures
    • G06V 30/1918 — Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Character Discrimination (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an artificial-intelligence-based retrieval method and system comprising the following steps: acquiring a first image of a trademark to be retrieved, extracting characters from the first image using a character recognition technique to obtain first character information, and adding first remark information to the first image based on that character information; after determining that the first character information has been extracted, deleting it from the first image to obtain a second image, and binarizing the second image according to the pixels in it that meet a preset requirement to obtain a third image; acquiring binarized fourth images pre-configured in a trademark image database, together with the second remark information corresponding to each fourth image, and computing an image similarity coefficient between the first image and each fourth image from the third image, the fourth image, the first remark information, and the second remark information; and selecting the trademarks corresponding to a preset number of fourth images according to the image similarity coefficients to generate an approximate-trademark retrieval list.

Description

Retrieval method and system based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an artificial-intelligence-based retrieval method and system.
Background
Trademark search (trademark inquiry) refers to a trademark applicant or agent querying the trademark office to determine whether a trademark applied for registration is identical or similar to a previously registered trademark, i.e. whether the mark the applicant wants to register conflicts with one already registered by another party.
Trademarks fall into at least three categories: character (word) trademarks, graphic trademarks, and composite trademarks combining graphics and characters. Most current search engines search only on the characters of the trademark a user wants to register and return identical or similar word trademarks. Users therefore cannot search effectively for graphic trademarks or composite graphic-and-character trademarks, which greatly limits trademark search scenarios.
Disclosure of Invention
The embodiments of the invention provide an artificial-intelligence-based retrieval method and system that can efficiently retrieve trademarks combining images and characters.
In a first aspect of the embodiments of the present invention, a retrieval method based on artificial intelligence is provided, including:
acquiring a first image of a trademark to be retrieved, extracting characters from the first image using a character recognition technique to obtain first character information, and adding first remark information to the first image based on the first character information;
after determining that the first character information in the first image has been extracted, deleting the corresponding first character information from the first image to obtain a second image, and binarizing the second image according to the pixels in the second image that meet a preset requirement to obtain a third image;
acquiring binarized fourth images pre-configured in a trademark image database and the second remark information corresponding to each fourth image, and computing an image similarity coefficient between the first image and each fourth image from the third image, the fourth image, the first remark information, and the second remark information;
and selecting the trademarks corresponding to a preset number of fourth images according to the image similarity coefficients to generate an approximate-trademark retrieval list.
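Taken together, the four claimed steps can be sketched as the pipeline below; every callable passed in (`ocr`, `delete_chars`, `binarize`, `similarity`) is a hypothetical interface standing in for a step the text describes, not the patent's actual implementation.

```python
def retrieve_similar_trademarks(first_image, ocr, database, top_n,
                                delete_chars, binarize, similarity):
    """Sketch of the claimed four-step pipeline; all callable arguments are
    hypothetical interfaces standing in for steps described in the text."""
    text, coords = ocr(first_image)                    # step 1: character extraction
    first_remark = text                                # remark built from the characters
    second_image = delete_chars(first_image, coords)   # step 2: delete the characters...
    third_image = binarize(second_image)               # ...and binarize the remainder
    scored = []                                        # step 3: score every fourth image
    for fourth_image, second_remark in database:
        coef = similarity(third_image, fourth_image, first_remark, second_remark)
        scored.append((coef, second_remark))
    scored.sort(key=lambda t: t[0], reverse=True)      # step 4: rank by similarity
    return scored[:top_n]                              # approximate retrieval list
```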
Optionally, in a possible implementation of the first aspect, acquiring the first image of the trademark to be retrieved, extracting characters from it using a character recognition technique to obtain the first character information, and adding the first remark information based on that information includes:
photographing the trademark to be retrieved to obtain a photographed image; if the resolution of the photographed image is determined to be greater than a preset resolution, generating a first capture frame and initially positioning it so that it lies at a first position on the photographed image;
if confirmation from the user is received, taking the image inside the first capture frame at the first position as the first image;
if capture-frame dragging information from the user is received, dragging the first capture frame from the first position to a second position according to that information, automatically correcting the second position, and taking the image inside the first capture frame at the second position as the first image;
and if first character information is recognized in the first image by the character recognition technique, adding first remark information to the first image based on it and acquiring the set of character pixel coordinates corresponding to the first character information.
Optionally, in a possible implementation of the first aspect, photographing the image to be retrieved, generating the first capture frame when the photographed image's resolution exceeds the preset resolution, and initially positioning the frame at the first position includes:
establishing a coordinate system on the photographed image with its central pixel as the origin, retrieving a preset first capture frame, and determining the frame's center point;
and aligning the frame's center point with the central pixel of the photographed image, completing the initial positioning so that the first capture frame lies at the first position on the photographed image.
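The initial positioning amounts to aligning the capture frame's center with the image's central pixel. A minimal sketch, with the frame and image sizes as hypothetical parameters:

```python
def initial_position(image_w, image_h, frame_w, frame_h):
    """Place the capture frame so its center coincides with the image center.
    Returns the frame's (left, top, right, bottom) in image pixel coordinates."""
    cx, cy = image_w // 2, image_h // 2   # central pixel of the photographed image
    left = cx - frame_w // 2              # top-left corner of the aligned frame
    top = cy - frame_h // 2
    return left, top, left + frame_w, top + frame_h
```

Because only one alignment of two center points is computed, this first pass needs almost no data processing, which matches the speed advantage claimed later in the text.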
Optionally, in a possible implementation of the first aspect, dragging the first capture frame from the first position to the second position according to the user's dragging information, automatically correcting the second position, and taking the image inside the frame at the second position as the first image includes:
moving the first capture frame according to the dragging information until it reaches a second position, and acquiring the first pixel values of all pixels of the photographed image inside the frame at that position;
determining the first pixels whose first pixel values fall within a first preset pixel interval to obtain a first pixel set, and extracting the extreme pixel coordinates from that set;
computing a corrected coordinate point from the extreme X-axis maximum coordinate, extreme X-axis minimum coordinate, extreme Y-axis maximum coordinate, and extreme Y-axis minimum coordinate contained in the extreme pixel coordinates;
and making the center point of the first capture frame coincide with the corrected coordinate point so as to correct and adjust the frame, taking the corrected position of the dragged frame as the final second position.
Optionally, in a possible implementation of the first aspect, computing the corrected coordinate point from the extreme X-axis maximum and minimum coordinates and the extreme Y-axis maximum and minimum coordinates includes:
computing a first X-axis coordinate of the corrected point from the extreme X-axis maximum and minimum coordinates, and a first Y-axis coordinate from the extreme Y-axis maximum and minimum coordinates;
counting all pixels with coordinates smaller than the first X-axis coordinate to obtain a first pixel count, and all pixels with coordinates larger than it to obtain a second pixel count;
counting all pixels with coordinates smaller than the first Y-axis coordinate to obtain a third pixel count, and all pixels with coordinates larger than it to obtain a fourth pixel count;
adjusting the first X-axis coordinate according to the first and second pixel counts to obtain a second X-axis coordinate, and adjusting the first Y-axis coordinate according to the third and fourth pixel counts to obtain a second Y-axis coordinate;
the second X-axis coordinate and second Y-axis coordinate of the corrected coordinate point are obtained by formulas that are reproduced in the original publication only as images and are not recoverable from this text. The quantities entering the formulas are: the first pixel count, the second pixel count, the second X-axis coordinate, the extreme X-axis maximum and minimum coordinates, a count normalization value, an X-axis coordinate weight, the second Y-axis coordinate, the extreme Y-axis maximum and minimum coordinates, the third pixel count, the fourth pixel count, and a Y-axis coordinate weight.
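Since the patent's correction formulas survive only as image placeholders, the sketch below is one plausible reading of the surrounding description: take the midpoint of the extreme coordinates, then nudge it by the imbalance of pixel counts on either side, scaled by a normalization value and a weight. The parameter values and the exact form of the shift are assumptions, not the patent's formulas.

```python
def corrected_point(xs, ys, weight=0.5, norm=100):
    """Hypothetical correction step: midpoint of the extreme coordinates,
    shifted toward the side holding more of the in-interval pixels.
    `xs`/`ys` are the coordinates of the first pixel set."""
    x_mid = (max(xs) + min(xs)) / 2         # first X-axis coordinate
    y_mid = (max(ys) + min(ys)) / 2         # first Y-axis coordinate
    n1 = sum(1 for x in xs if x < x_mid)    # first pixel count (left of midpoint)
    n2 = sum(1 for x in xs if x > x_mid)    # second pixel count (right of midpoint)
    n3 = sum(1 for y in ys if y < y_mid)    # third pixel count (below midpoint)
    n4 = sum(1 for y in ys if y > y_mid)    # fourth pixel count (above midpoint)
    x2 = x_mid + weight * (n2 - n1) / norm  # second X-axis coordinate
    y2 = y_mid + weight * (n4 - n3) / norm  # second Y-axis coordinate
    return x2, y2
```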
Optionally, in a possible implementation of the first aspect, after determining that the first character information has been extracted, deleting it from the first image to obtain the second image and binarizing the second image according to the pixels that meet the requirement to obtain the third image includes:
acquiring the character pixels of the first image corresponding to the character pixel coordinate set, and replacing their current pixel values with a preset pixel value, thereby deleting the corresponding first character information from the first image to obtain the second image;
determining the pixels of the second image that lie within a preset pixel interval as the pixels meeting the requirement, and replacing the pixel values of all such pixels with the black pixel value;
and replacing the pixel values of all pixels outside the preset pixel interval with the white pixel value, so that the second image is binarized into the third image.
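The binarization step can be sketched directly: pixels inside a preset interval become black, everything else white. The interval bounds here are assumptions for illustration; the patent leaves the interval to configuration.

```python
def binarize(gray, low=0, high=128):
    """Binarize a grayscale image given as a list of rows: pixel values in
    the preset interval [low, high] become black (0), all others white (255)."""
    return [[0 if low <= p <= high else 255 for p in row] for row in gray]
```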
Optionally, in a possible implementation of the first aspect, acquiring the binarized fourth images pre-configured in the trademark image database and their corresponding second remark information, and computing the image similarity coefficient between the first image and each fourth image from the third image, the fourth image, the first remark information, and the second remark information includes:
determining a first initial pixel of the third image and reading the third image's pixel values in order starting from it to obtain the third image's image fingerprint set;
determining a second initial pixel of the fourth image at the coordinates of the first initial pixel and reading the fourth image's pixel values in order starting from it to obtain the fourth image's image fingerprint set;
determining the number of identical pixels in the image fingerprint sets of the third and fourth images to obtain an image similarity sub-coefficient;
performing character recognition on the first and second remark information respectively, removing irrelevant characters from both to obtain first relevant characters and second relevant characters, and determining the characters common to both to obtain a character similarity sub-coefficient;
and performing a fusion calculation on the image similarity sub-coefficient and the character similarity sub-coefficient to obtain the image similarity coefficient between the first image and each fourth image.
Optionally, in a possible implementation of the first aspect, determining the number of identical pixels in the image fingerprint sets of the third and fourth images to obtain the image similarity sub-coefficient includes:
counting the pixels that have both the same coordinates and the same pixel value in the two image fingerprint sets to obtain the number of identical pixels;
and computing the image similarity sub-coefficient from the number of identical pixels and the total number of pixels in the image fingerprint set.
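Treating each fingerprint as an ordered list of binarized pixel values (so equal positions imply equal coordinates), the sub-coefficient is the fraction of positions where the two sets agree. A minimal sketch:

```python
def similarity_sub(fp_a, fp_b):
    """Image similarity sub-coefficient: fraction of positions at which the
    two equally sized binarized fingerprints hold the same pixel value."""
    same = sum(1 for a, b in zip(fp_a, fp_b) if a == b)  # identical pixels
    return same / len(fp_a)                              # over the total count
```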
Optionally, in a possible implementation of the first aspect, performing character recognition on the first and second remark information respectively, removing irrelevant characters to obtain the first and second relevant characters, and determining the common characters to obtain the character similarity sub-coefficient includes:
retrieving a preset irrelevant-information table containing a number of irrelevant words, and performing word segmentation on the first and second remark information respectively to obtain corresponding first and second segmented words;
if any first or second segmented words are found to match irrelevant words in the table, deleting those segmented words from the first and second remark information;
counting a first number of first segmented words and a second number of second segmented words; taking the smaller of the two as the number to be compared, comparing the segmented words on the to-be-compared side one by one with those on the other side, and associating each segmented word on the to-be-compared side with its closest counterpart on the other side;
and counting the number of associated word pairs and the number of identical characters within each pair, then computing the character similarity sub-coefficient from the number of associated pairs, the number of identical characters, the first number, and the second number.
Optionally, in a possible implementation of the first aspect, comparing the segmented words on the to-be-compared side one by one with those on the other side and associating each with its closest counterpart includes:
obtaining each segmented word on the to-be-compared side and comparing its characters one by one with the characters of every segmented word on the other side, determining which pair shares the most identical characters;
the two segmented words sharing the most identical characters are then associated.
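The association rule — pair each word on the shorter side with the word on the other side sharing the most identical characters — can be sketched as follows. Ties are resolved to the first candidate, an assumption the text does not settle:

```python
def associate(words_a, words_b):
    """Associate each word of the shorter list with the word of the longer
    list that shares the most characters with it (first candidate on ties)."""
    short, long_ = (words_a, words_b) if len(words_a) <= len(words_b) \
        else (words_b, words_a)
    pairs = []
    for w in short:
        # count shared characters via set intersection
        best = max(long_, key=lambda v: len(set(w) & set(v)))
        pairs.append((w, best))
    return pairs
```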
Optionally, in a possible implementation of the first aspect, performing the fusion calculation on the image similarity sub-coefficient and the character similarity sub-coefficient to obtain the image similarity coefficient between the first image and each fourth image includes:
weighting the image similarity sub-coefficient by an image calculation weight to obtain an image sub-coefficient, and weighting the character similarity sub-coefficient by a character calculation weight to obtain a character sub-coefficient;
and fusing the image sub-coefficient and the character sub-coefficient to obtain the image similarity coefficient between the first image and each fourth image, the image similarity coefficient being calculated by the following formula,
which is reproduced in the original publication only as an image and is not recoverable from this text. The quantities entering the formula are: the image similarity coefficient, the number of identical pixels, the total number of pixels in the image fingerprint set, the image calculation weight, the number of associated word pairs, the first number of first segmented words, the second number of second segmented words, a word-count normalization value, the number of identical characters of each associated pair, the upper limit of associated pairs, an identical-character normalization value, and the character calculation weight.
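Because the fusion formula survives only as an image placeholder, the sketch below assumes the simplest reading consistent with the description: a weighted sum of the two sub-coefficients. The weight values are hypothetical.

```python
def fuse(image_sub, text_sub, w_img=0.6, w_txt=0.4):
    """Hypothetical fusion of the image and character similarity
    sub-coefficients into one image similarity coefficient; the weights
    are illustrative stand-ins for the patent's calculation weights."""
    return w_img * image_sub + w_txt * text_sub
```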
In a second aspect of the embodiments of the present invention, an artificial-intelligence-based retrieval system is provided, including:
an extraction module, configured to acquire a first image of the trademark to be retrieved, extract characters from it using a character recognition technique to obtain first character information, and add first remark information to the first image based on that information;
a binarization module, configured to delete the corresponding first character information from the first image after determining it has been extracted, obtaining a second image, and to binarize the second image according to the pixels meeting the requirement to obtain a third image;
a similarity calculation module, configured to acquire the binarized fourth images pre-configured in the trademark image database and their corresponding second remark information, and to compute the image similarity coefficient between the first image and each fourth image from the third image, the fourth image, the first remark information, and the second remark information;
and a generation module, configured to select the trademarks corresponding to a preset number of fourth images according to the image similarity coefficients and generate the approximate-trademark retrieval list.
A third aspect of the embodiments of the present invention provides a storage medium storing a computer program which, when executed by a processor, implements the method according to the first aspect of the present invention and its various possible designs.
Advantageous effects:
1. The scheme extracts both the character information and the image information in a trademark, analyzes the character dimension and the image dimension separately according to their respective characteristics, and then fuses the two analysis results, so that trademarks combining images and characters can be retrieved accurately and efficiently.
2. When analyzing the image dimension, the scheme extracts the required image with a capture frame positioned in two passes. The first pass aligns the capture frame with the center point of the photographed image, which requires little data processing and is fast; the second pass repositions the frame according to the numbers of pixel points on either side of the center, so that the trademark content sits closer to the center of the captured image. After the image information is processed, similarity is compared on the dimension of identical pixel-point counts, so the image-dimension similarity coefficient is obtained quickly and efficiently.
3. When analyzing the character dimension, the character information is segmented into words and irrelevant characters are removed, which reduces the subsequent comparison workload, eliminates the influence of irrelevant characters, and improves comparison accuracy. The character-dimension similarity coefficient is then computed more accurately by jointly considering the number of associated word segments and the number of identical characters within them.
4. After the image sub-coefficient and the character sub-coefficient are obtained, they are fused according to the image calculation weight and the character calculation weight. These weights can be set as required, so the calculation proportions of the two dimensions can be adjusted flexibly to better meet user needs.
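The weighted fusion described in point 4 can be sketched in a few lines of Python. The function name and the 0.6/0.4 default weights are illustrative assumptions, not values given in the specification:

```python
def fuse_similarity(image_sub, text_sub, image_weight=0.6, text_weight=0.4):
    """Fuse the image sub-coefficient and the character sub-coefficient into a
    single similarity coefficient using configurable weights."""
    # The two weights are meant to sum to 1 so the result stays in [0, 1].
    assert abs(image_weight + text_weight - 1.0) < 1e-9
    return image_weight * image_sub + text_weight * text_sub
```

Raising `image_weight` biases retrieval toward visually similar marks; raising `text_weight` biases it toward textually similar ones.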
Drawings
Fig. 1 is a schematic flow chart of an artificial intelligence-based retrieval method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a retrieval system based on artificial intelligence according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "And/or" merely describes an association between objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "Comprising A, B and C" or "comprising A, B, C" means that all three of A, B and C are comprised; "comprising A, B or C" means that one of A, B and C is comprised; "comprising A, B and/or C" means that any one, any two, or all three of A, B and C are comprised.
It should be understood that, in the present invention, "B corresponding to A", "A corresponds to B", or "B corresponds to A" means that B is associated with A, and B can be determined from A. Determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information. The matching of A and B means that the similarity between A and B is greater than or equal to a preset threshold.
As used herein, "if" may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context.
The technical solution of the present invention will be described in detail below with specific examples. These several specific embodiments may be combined with each other below, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Referring to fig. 1, which is a schematic flowchart of an artificial intelligence based retrieval method according to an embodiment of the present invention, the execution subject of the method shown in fig. 1 may be a software and/or hardware device, and may include, but is not limited to, at least one of: user equipment, network equipment, etc. User equipment includes, but is not limited to, computers, smart phones, personal digital assistants (PDAs) and similar electronic devices. Network equipment includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a cloud consisting of a large number of computers or network servers based on cloud computing, cloud computing being a form of distributed computing in which a group of loosely coupled computers acts as one super virtual computer. The present embodiment does not limit this. The artificial intelligence-based retrieval method comprises the following steps S1-S4:
s1, acquiring a first image of a trademark to be retrieved, performing character extraction on the first image based on a character recognition technology to obtain first character information, and adding first remark information to the first image based on the first character information.
Generally, a trademark is an image, characters, or a combination of the two. A first image of the trademark to be retrieved is obtained, character extraction is then performed on the first image by using a character recognition technology to obtain first character information, and after the first character information is obtained, first remark information is added to the first image by using the first character information. The character recognition technology may be an OCR technology, which is prior art and is not described herein again.
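The flow of S1 can be sketched as follows, assuming the OCR engine has already produced word-level results as a list of (text, bounding-box) tuples — a common OCR output shape, but an assumption here, as are the helper name and the remark format:

```python
def build_text_info(ocr_results):
    """Combine per-word OCR results into the first character information and
    a remark string to attach to the first image.

    `ocr_results` is assumed to be a list of (text, (x, y, w, h)) tuples as a
    typical OCR engine would return.
    """
    words = [text for text, _box in ocr_results]
    first_text_info = "".join(words)
    # The remark simply records the recognised text alongside the image.
    first_remark = f"text:{first_text_info}" if first_text_info else "text:none"
    return first_text_info, first_remark
```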
On the basis of the above embodiment, S1 (obtaining a first image of a trademark to be retrieved, performing text extraction on the first image based on a text recognition technology to obtain first text information, and adding first remark information to the first image based on the first text information) includes S11 to S14:
s11, photographing the trademark to be retrieved to obtain a photographed image, and if the resolution of the photographed image is judged to be greater than a preset resolution, generating a first capture frame and initially positioning the first capture frame so that the first capture frame is located at a first position of the photographed image;
in the scheme, in order to process the image of the trademark to be retrieved, the image of the trademark to be retrieved is photographed to obtain a photographed image, and it can be understood that the photographed image corresponds to the whole image of the trademark to be retrieved.
According to the scheme, the resolution of the photographed image can be judged, if the resolution of the photographed image is judged to be larger than the preset resolution, the first capturing frame can be generated, and meanwhile the first capturing frame can be initially positioned to enable the first capturing frame to be located at the first position of the photographed image. The first capture frame is used for capturing part of images in the photographed images, so that the resolution of the photographed images meets the requirement of the preset resolution.
In some embodiments, S11 (the step of photographing the image to be retrieved to obtain the photographed image, and if the resolution of the photographed image is determined to be greater than the preset resolution, generating a first capturing frame, and initially positioning the first capturing frame so that the first capturing frame is located at the first position of the photographed image) includes S111-S112:
and S111, performing coordinate processing on the photographed image by taking the central pixel point of the photographed image as an original point, calling a preset first capturing frame, and determining the capturing frame central point of the first capturing frame.
In order to position the first capturing frame, the scheme can perform coordinate processing on the photographed image by taking the central pixel point as the origin, and can call the preset first capturing frame and determine the capturing frame central point of the first capturing frame.
And S112, aligning and setting the center point of the capturing frame with the center pixel point of the photographed image, and finishing initial positioning of the first capturing frame to enable the first capturing frame to be located at the first position of the photographed image.
After the center point of the capture frame and the center pixel point of the photographed image are determined, the two are aligned, at which point the initial positioning of the first capture frame is completed and the first capture frame is located at the first position of the photographed image.
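The center-alignment step can be sketched as follows. For simplicity the sketch uses the usual top-left image origin rather than the center origin of the scheme (which only shifts the numbers, not the idea); the function and parameter names are illustrative:

```python
def initial_position(image_w, image_h, frame_w, frame_h):
    """Place the capture frame so its center coincides with the center pixel
    of the photographed image; return the frame's top-left corner."""
    cx, cy = image_w // 2, image_h // 2   # center pixel of the photographed image
    left = cx - frame_w // 2              # align the frame center on that pixel
    top = cy - frame_h // 2
    return left, top
```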
And S12, if the confirmation information of the user is received, taking the image in the first capture frame at the first position as a first image.
After the initial positioning, if confirmation information of the user is received, which indicates that the current image is considered to be appropriate by the staff member, the scheme takes the image in the first capture frame at the first position as the first image.
S13, if capture frame dragging information of a user is received, dragging the first capture frame at the first position to a second position according to the dragging information, automatically correcting the second position, and taking the image in the first capture frame at the second position as the first image;
After the initial positioning, if the currently obtained first image does not meet the user's requirement, the user can drag the first capture frame. When the dragging information is detected, the scheme drags the first capture frame at the first position to the second position accordingly, automatically corrects the second position, and takes the image in the first capture frame at the second position as the first image.
S14, if the first character information is identified to be in the first image based on the character identification technology, adding first remark information to the first image based on the first character information, and acquiring a character pixel coordinate set corresponding to the first character information.
It can be understood that when it is judged that the first image has the first character information based on the character recognition technology, the first character information is utilized to add the first remark information to the first image, and meanwhile, the scheme locates the first character information to obtain a character pixel coordinate set corresponding to the first character information.
And S2, after judging that the first character information in the first image is extracted, deleting the corresponding first character information in the first image to obtain a second image, and carrying out binarization processing on the second image according to pixel points meeting requirements in the second image to obtain a third image.
It can be understood that after the first text information in the first image is extracted, the text information in the first image does not need to be processed, at this time, the scheme deletes the corresponding first text information in the first image to obtain a second image, then determines pixel points meeting requirements in the second image, and performs binarization processing on the second image to obtain a third image.
In some embodiments, S2 (after determining to extract the first text information in the first image, deleting the corresponding first text information in the first image to obtain a second image, and performing binarization processing on the second image according to pixel points that meet requirements in the second image to obtain a third image) includes S21 to S23:
s21, character pixel points corresponding to the first image and the character pixel coordinate set are obtained, and the current pixel values of the character pixel points are replaced by preset pixel values, so that the corresponding first character information in the first image is deleted, and a second image is obtained.
According to the scheme, the character pixel points in the first image are positioned in a coordinate mode, and then the current pixel values of the character pixel points are replaced by the preset pixel values, so that the corresponding first character information in the first image is deleted to obtain the second image.
The preset pixel value can be a pixel value corresponding to white, and it should be noted that the background of the first image in the scheme is white, and the pixel value corresponding to the text pixel point is adjusted to the pixel value corresponding to white, so that the text can be hidden, and the first text information can be deleted.
And S22, determining pixel points in the preset pixel interval in the second image as pixel points meeting the requirements, and replacing the pixel values of all the pixel points meeting the requirements with black pixel values.
According to the scheme, the pixel points in the preset pixel interval in the second image are determined to be the pixel points meeting the requirements, and then the pixel values of all the pixel points meeting the requirements are replaced by the black pixel values.
It can be understood that, in the second image, all colors other than the white background belong to the trademark image and may be red, yellow, green, and so on. The scheme provides preset pixel intervals corresponding to these colors, and all non-white colors such as red, yellow and green are uniformly converted into black.
And S23, replacing the pixel values of all the pixel points which are not positioned in the preset pixel interval with white pixel values so as to enable the second image to realize binarization processing to obtain a third image.
Meanwhile, the scheme replaces the pixel values of all the pixel points which are not located in the preset pixel interval with the white pixel values so as to realize binarization processing of the second image and obtain a third image.
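Steps S21-S23 can be sketched on a grey-scale image stored as a list of rows. The "qualifying" interval below (values under 200) is an assumed stand-in for the preset pixel intervals, which the specification leaves to the implementer:

```python
def binarize(image, text_coords, white=255, black=0):
    """Delete the text pixels, then two-level quantise the rest (S21-S23)."""
    # Assumed stand-in for the preset pixel intervals: any value below 200
    # counts as a qualifying (non-background) trademark colour.
    qualifying = range(0, 200)
    out = [row[:] for row in image]
    for x, y in text_coords:            # S21: overwrite text pixels with white
        out[y][x] = white
    for y, row in enumerate(out):       # S22/S23: qualifying -> black, rest -> white
        for x, value in enumerate(row):
            out[y][x] = black if value in qualifying else white
    return out
```

The result is the third image: a strict black/white rendering of the trademark's graphic content with the characters removed.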
Based on the foregoing embodiment, S13 (when the information about dragging the capture frame by the user is received, the first capture frame at the first position is dragged to the second position according to the information about dragging the capture frame, the second position is automatically corrected, and the image in the first capture frame at the second position is used as the first image) includes:
s131, the first capture frame is moved according to the capture frame dragging information, the first capture frame is dragged to a second position, and first pixel values of all pixel points of the photographed image in the first capture frame are obtained when the first capture frame is dragged to the second position.
According to the scheme, the first capture frame is moved according to the capture frame dragging information, the first capture frame is dragged to the second position, and then first pixel values of all pixel points of the photographed image in the first capture frame are obtained when the first capture frame is dragged to the second position.
S132, determining first pixel points of which the first pixel values are in a first preset pixel interval to obtain a first pixel point set, and extracting extreme value pixel point coordinates in the first pixel point set.
The scheme is provided with a first preset pixel interval which can be an interval corresponding to a black pixel point, then first pixel points with first pixel values in the first preset pixel interval are determined, all the first pixel points are counted to obtain a first pixel point set, and then extreme value pixel point coordinates in the first pixel point set are extracted. It can be understood that the pixel point corresponding to the first pixel point set is the pixel point corresponding to the trademark image.
After the first pixel point set is obtained, the extreme value pixel point coordinates in the first pixel point set are extracted, where these may be the coordinates of the leftmost, rightmost, uppermost and lowermost pixel points in the set.
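Extracting the extreme value coordinates from the first pixel point set is a simple scan; a minimal sketch (function name assumed):

```python
def extreme_coordinates(points):
    """Return (Xmax, Xmin, Ymax, Ymin) over the first pixel point set,
    i.e. the rightmost, leftmost, uppermost and lowermost coordinates."""
    xs = [x for x, _y in points]
    ys = [y for _x, y in points]
    return max(xs), min(xs), max(ys), min(ys)
```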
And S133, calculating according to the extreme value X-axis maximum coordinate, the extreme value X-axis minimum coordinate, the extreme value Y-axis maximum coordinate and the extreme value Y-axis minimum coordinate included in the extreme value pixel point coordinates to obtain a corrected coordinate point.
After the extreme value pixel point coordinates are obtained, the extreme value X-axis maximum coordinates, the extreme value X-axis minimum coordinates, the extreme value Y-axis maximum coordinates and the extreme value Y-axis minimum coordinates which are included in the extreme value pixel point coordinates are used for calculation, and correction coordinate points are obtained.
In some embodiments, the step S133 (obtaining the corrected coordinate point by performing the calculation according to the maximum extreme value X-axis coordinate, the minimum extreme value X-axis coordinate, the maximum extreme value Y-axis coordinate, and the minimum extreme value Y-axis coordinate included in the extreme value pixel point coordinates) includes steps S1331 to S1334:
and S1331, calculating according to the extreme value X-axis maximum coordinate and the extreme value X-axis minimum coordinate included in the extreme value pixel point coordinates to obtain a first X-axis coordinate of the correction coordinate point, and calculating according to the extreme value Y-axis maximum coordinate and the extreme value Y-axis minimum coordinate to obtain a first Y-axis coordinate of the correction coordinate point.
For example, the corrected coordinate point may be obtained by using half of the sum of the extreme value X-axis maximum coordinate and the extreme value X-axis minimum coordinate as the X coordinate of the corrected coordinate point, and using half of the sum of the extreme value Y-axis maximum coordinate and the extreme value Y-axis minimum coordinate as the Y coordinate of the corrected coordinate point.
And S1332, determining all pixel points smaller than the first X-axis coordinate to obtain a first pixel point number, and determining all pixel points larger than the first X-axis coordinate to obtain a second pixel point number.
According to the scheme, all the pixel points smaller than the first X-axis coordinate are counted to obtain the first pixel point quantity, and all the pixel points larger than the first X-axis coordinate are counted to obtain the second pixel point quantity. It can be understood that the larger the number of the pixel points is, the larger the area corresponding to the trademark image is.
And S1333, determining all pixel points smaller than the first Y-axis coordinate to obtain the third pixel point quantity, and determining all pixel points larger than the first Y-axis coordinate to obtain the fourth pixel point quantity.
According to the scheme, all the pixel points smaller than the first Y-axis coordinate are counted to obtain the number of the third pixel points, and all the pixel points larger than the first Y-axis coordinate are counted to obtain the number of the fourth pixel points. It can be understood that the larger the number of the pixel points is, the larger the area corresponding to the trademark image is.
S1334, adjusting the first X-axis coordinate according to the number of the first pixel points and the number of the second pixel points to obtain a second X-axis coordinate, and adjusting the first Y-axis coordinate according to the number of the third pixel points and the number of the fourth pixel points to obtain a second Y-axis coordinate.
According to the scheme, the first X-axis coordinate can be adjusted according to the number of the first pixel points and the number of the second pixel points to obtain the second X-axis coordinate, and the first Y-axis coordinate can be adjusted according to the number of the third pixel points and the number of the fourth pixel points to obtain the second Y-axis coordinate.
The second X-axis coordinate and the second Y-axis coordinate of the correction coordinate point are obtained by the following formulas (the formulas appear only as images in the original publication and are reproduced here in plain notation from the accompanying description):

X2 = (Xmax + Xmin)/2 + α·(N2 − N1)/γ

Y2 = (Ymax + Ymin)/2 + β·(N4 − N3)/γ

wherein N1 is the number of the first pixel points, N2 is the number of the second pixel points, X2 is the second X-axis coordinate, Xmax is the extreme value X-axis maximum coordinate, Xmin is the extreme value X-axis minimum coordinate, γ is the quantity normalization value, α is the X-axis coordinate weight value, Y2 is the second Y-axis coordinate, Ymax is the extreme value Y-axis maximum coordinate, Ymin is the extreme value Y-axis minimum coordinate, N3 is the number of the third pixel points, N4 is the number of the fourth pixel points, and β is the Y-axis coordinate weight value.

In the above formulas, (Xmax + Xmin)/2 is the first X-axis coordinate. When N1 > N2, the number of pixel points on the left side of the image is larger than that on the right side, and the first X-axis coordinate needs to be adjusted to the left, i.e. reduced; α·(N1 − N2)/γ is the magnitude of the reduction, and the larger the difference N1 − N2 between the first pixel point number and the second pixel point number, the larger the corresponding adjustment magnitude, wherein the X-axis coordinate weight value α can be preset by a worker. When N1 < N2, the number of pixel points on the left side of the image is smaller than that on the right side, and the first X-axis coordinate needs to be adjusted to the right, i.e. increased, with α·(N2 − N1)/γ representing the magnitude of the increase.

Likewise, (Ymax + Ymin)/2 is the first Y-axis coordinate. When N3 > N4, the number of pixel points in the lower part of the image is larger than that in the upper part, and the first Y-axis coordinate needs to be adjusted downwards, i.e. reduced, with β·(N3 − N4)/γ representing the magnitude of the reduction; the larger the difference N3 − N4 between the third pixel point number and the fourth pixel point number, the larger the corresponding adjustment magnitude, wherein the Y-axis coordinate weight value β can be preset by a worker. When N3 < N4, the number of pixel points in the lower part of the image is smaller than that in the upper part, and the first Y-axis coordinate needs to be adjusted upwards, i.e. increased, with β·(N4 − N3)/γ representing the magnitude of the increase.
It should be noted that, according to the scheme, the center of the content of the trademark image can be located, so that the central point (the correction coordinate point) is located at the center of the content of the trademark image, thereby achieving secondary location of the first capturing frame, and enabling the captured image content to be centered.
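A minimal Python sketch of steps S1331-S1334, consistent with the adjustment just described. The weight values and the normalization value are operator-set in the scheme; the defaults used here are purely illustrative:

```python
def corrected_point(points, alpha=0.5, beta=0.5, gamma=1000):
    """Locate the correction coordinate point: midpoint of the extreme
    coordinates, nudged toward the side with more trademark pixels."""
    xs = [x for x, _y in points]
    ys = [y for _x, y in points]
    x1 = (max(xs) + min(xs)) / 2          # first X-axis coordinate
    y1 = (max(ys) + min(ys)) / 2          # first Y-axis coordinate
    n1 = sum(1 for x in xs if x < x1)     # first pixel point number (left)
    n2 = sum(1 for x in xs if x > x1)     # second pixel point number (right)
    n3 = sum(1 for y in ys if y < y1)     # third pixel point number (lower)
    n4 = sum(1 for y in ys if y > y1)     # fourth pixel point number (upper)
    x2 = x1 + alpha * (n2 - n1) / gamma   # shift toward the denser side
    y2 = y1 + beta * (n4 - n3) / gamma
    return x2, y2
```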
And S134, overlapping the center point of the first capture frame with the correction coordinate point to correct and adjust the first capture frame, and using the position corresponding to the dragged first capture frame after correction and adjustment as a final second position.
After the center of the image content (the correction coordinate point) is determined, the scheme performs a secondary positioning adjustment on the first capture frame so that the center point of the first capture frame coincides with the correction coordinate point.
And S3, acquiring a binarized fourth image which is configured in advance in an image database of the trademark and second remark information corresponding to the fourth image, and calculating according to the third image, the fourth image, the first remark information and the second remark information to obtain an image similarity coefficient between the first image and each fourth image.
According to the scheme, the image database of the trademark is pre-configured with the binarized fourth image and the second remark information corresponding to the fourth image, and it can be understood that there may be a plurality of fourth images, and the scheme is not limited thereto.
After the third image is processed, the third image, the fourth image, the first remark information and the second remark information can be calculated to obtain the image similarity coefficient between the first image and each fourth image.
In some embodiments, S3 (obtaining, according to the third image, the fourth image, the first remark information, and the second remark information, an image similarity coefficient between the first image and each fourth image is obtained) includes S31 to S35:
and S31, determining first initial pixel points of the third image, and sequentially acquiring pixel values of the third image according to the first initial pixel points to obtain an image fingerprint set of the third image.
When similarity comparison is carried out, the scheme firstly determines a first initial pixel point of the third image, wherein the first initial pixel point can be a pixel point of the third image at the leftmost position, and then sequentially obtains the pixel values of the third image according to the first initial pixel point to obtain the image fingerprint set of the third image.
It will be appreciated that the image fingerprint set of the third image represents the pixel values of all pixel points of the third image.
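One simple realisation of the fingerprint set is a mapping from pixel coordinate to pixel value, scanned in a fixed order from the initial pixel (taken here as the top-left pixel — an assumption, as the specification only says it may be the leftmost one):

```python
def image_fingerprint(image):
    """Build the image fingerprint set of a binarized image (list of rows):
    every pixel value keyed by its coordinate, in a fixed scan order."""
    return {(x, y): value
            for y, row in enumerate(image)
            for x, value in enumerate(row)}
```

Because the third and fourth images are scanned from corresponding initial pixels, the two fingerprint sets share the same coordinate keys and can be compared entry by entry.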
And S32, determining second initial pixel points corresponding to the first initial pixel point coordinates in the fourth image, and sequentially obtaining pixel values of the fourth image according to the second initial pixel points to obtain an image fingerprint set of the fourth image.
According to the scheme, a second initial pixel point corresponding to the first initial pixel point coordinate in the fourth image is also determined, and then the second initial pixel point is used for sequentially obtaining the pixel value of the fourth image to obtain the image fingerprint set of the fourth image.
It can be understood that the image fingerprint set of the fourth image represents pixel values of all pixel points of the fourth image, and the image fingerprint set of the third image and the coordinates in the image fingerprint set of the fourth image are in one-to-one correspondence.
S33, determining the number of identical pixel points in the image fingerprint set of the third image and the image fingerprint set of the fourth image to obtain an image similarity sub-coefficient;
according to the scheme, the number of the same pixel points in the image fingerprint set of the third image and the number of the same pixel points in the image fingerprint set of the fourth image are counted to obtain the image similarity sub-coefficient, and it can be understood that the more the number of the same pixel points is, the larger the image similarity sub-coefficient is.
In some embodiments, S33 (determining the number of identical pixel points in the image fingerprint set of the third image and the image fingerprint set of the fourth image to obtain the image similarity sub-coefficient) includes S331-S332:
and S331, counting pixel points with the same coordinate and the same pixel value in the image fingerprint set of the third image and the image fingerprint set of the fourth image to obtain the number of the same pixel points.
When the number of the same pixel points is determined, the pixel points with the same coordinate and the same pixel value in the image fingerprint set of the third image and the image fingerprint set of the fourth image are counted, that is, the coordinate needs to be the same, and the pixel values also need to be the same pixel points in the scheme.
S332, calculating according to the number of identical pixel points and the total number of pixel points in the image fingerprint set to obtain the image similarity sub-coefficient.
It can be understood that the greater the number of identical pixel points, the larger the image similarity sub-coefficient.
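S331–S332 can be sketched as follows, assuming each fingerprint set is a mapping from pixel coordinates to pixel values; the function name and data layout are illustrative assumptions.

```python
def image_similarity_subcoefficient(fp_third, fp_fourth):
    """Count pixel points that match in both coordinate and pixel value
    (S331), then normalize by the total pixel count (S332)."""
    same = sum(1 for coord, value in fp_third.items()
               if fp_fourth.get(coord) == value)
    total = len(fp_third)  # the two sets cover the same coordinates
    return same / total if total else 0.0
```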
S34, respectively performing character recognition on the first remark information and the second remark information, eliminating the irrelevant characters in both to obtain first associated characters and second associated characters, and determining the identical characters in the first associated characters and the second associated characters to obtain a character similarity sub-coefficient.
To compare the characters, the scheme first performs character recognition on the first remark information and the second remark information respectively, and then eliminates the irrelevant characters in each to obtain the first associated characters and the second associated characters.
Illustratively, if the first remark information is "Happy Ocean Technology Co., Ltd." and the second remark information is "Ocean Sheep Culture Co., Ltd.", the irrelevant characters in the first remark information may be "Technology Co., Ltd." and those in the second remark information may be "Culture Co., Ltd.". The first associated characters are then "Happy" and "Ocean", and the second associated characters are "Ocean Sheep".
In some embodiments, S34 (the step of performing character recognition on the first remark information and the second remark information respectively, removing the irrelevant characters in both to obtain first associated characters and second associated characters, and determining the identical characters in the first and second associated characters to obtain a character similarity sub-coefficient) includes S341-S344:
S341, calling a preset irrelevant-information correspondence table, wherein the irrelevant-information correspondence table includes a plurality of irrelevant character strings, and performing word segmentation on the first remark information and the second remark information respectively to obtain corresponding first participles and second participles.
The scheme provides an irrelevant-information correspondence table containing a plurality of irrelevant character strings, for example "Technology Co., Ltd.", "Culture Co., Ltd.", "Co., Ltd." and the like, and performs word segmentation on the first remark information and the second remark information respectively to obtain the corresponding first participles and second participles.
For example, the first participles obtained by segmenting the first remark information may be "Happy", "Ocean" and "Technology Co., Ltd.", and the second participles obtained by segmenting the second remark information may be "Ocean Sheep" and "Culture Co., Ltd.".
S342, if it is determined that the first remark information and the second remark information contain first and second participles corresponding to irrelevant characters in the irrelevant-information correspondence table, deleting those participles from the first remark information and the second remark information.
For example, the scheme deletes "Technology Co., Ltd." from the first participles and "Culture Co., Ltd." from the second participles.
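S341–S342 can be sketched as a lookup against a preset irrelevant-information table; the table entries, the pre-segmented input, and the names below are assumptions made for illustration.

```python
# Hypothetical irrelevant-information correspondence table (S341); real
# entries would come from a preset configuration.
IRRELEVANT_WORDS = {"Technology Co., Ltd.", "Culture Co., Ltd.", "Co., Ltd."}

def remove_irrelevant(participles):
    """Delete any participle that matches an entry in the table (S342)."""
    return [p for p in participles if p not in IRRELEVANT_WORDS]
```

For instance, `remove_irrelevant(["Happy", "Ocean", "Technology Co., Ltd."])` keeps only "Happy" and "Ocean".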
S343, counting the first number of first participles and the second number of second participles, taking the smaller of the two as the number to be compared, sequentially obtaining each participle on the to-be-compared side and comparing it one by one with the participles on the other side, and associating each to-be-compared participle with the closest participle on the other side.
The scheme counts the first number and the second number and takes the smaller of them as the number to be compared; for example, if the number of first participles is 2 and the number of second participles is 1, the number to be compared is 1.
After the number to be compared is determined, the scheme sequentially takes each participle on the to-be-compared side, compares it one by one with the participles on the other side, and associates it with the closest participle on the other side.
In some embodiments, S343 (the step of sequentially obtaining each participle on the to-be-compared side, comparing it one by one with the participles on the other side, and associating it with the closest participle on the other side) includes:
acquiring a participle on the to-be-compared side, comparing its characters one by one with the characters of the participles on the other side, and if a pairing is determined after comparison to share the most identical characters, associating those two participles.
Illustratively, the participle acquired on the to-be-compared side is "Ocean Sheep"; it is compared with "Happy" and "Ocean", of which "Ocean" shares the most identical characters with "Ocean Sheep", so "Ocean Sheep" is associated with "Ocean".
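One way to realize this association rule is to pair each participle from the smaller side with the participle on the other side sharing the most characters; this greedy sketch is an assumption about how "closest" is decided, and the names are illustrative.

```python
def associate_participles(to_compare, others):
    """Pair each participle on the to-be-compared (smaller) side with the
    participle on the other side sharing the most characters (S343)."""
    pairs = []
    for seg in to_compare:
        # Character-overlap size decides which candidate is "closest".
        best = max(others, key=lambda other: len(set(seg) & set(other)))
        pairs.append((seg, best))
    return pairs
```

On ties, `max` keeps the earliest candidate; a real implementation would need a rule for that case too.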
S344, counting the number of associated participle pairs and the number of identical characters in each pair, and calculating according to the number of associated pairs, the number of identical characters, the first number and the second number to obtain the character similarity sub-coefficient.
The scheme counts the number of associated participle pairs and, at the same time, the number of identical characters in each pair, and then uses these together with the first number and the second number to calculate the character similarity sub-coefficient. It can be understood that the more identical characters each associated pair contains, the higher the corresponding character similarity sub-coefficient.
S35, performing fusion calculation according to the image similarity sub-coefficient and the character similarity sub-coefficient to obtain the image similarity coefficient of the first image and each fourth image.
After the image similarity sub-coefficient and the character similarity sub-coefficient are obtained, the two are comprehensively calculated to obtain the image similarity coefficient between the first image and each fourth image.
In some embodiments, S35 (the performing fusion calculation according to the image similarity sub-coefficient and the character similarity sub-coefficient to obtain the image similarity coefficient of the first image and each fourth image) includes S351-S352:
S351, weighting the image similarity sub-coefficient according to the image calculation weight to obtain an image sub-coefficient, and weighting the character similarity sub-coefficient according to the character calculation weight to obtain a character sub-coefficient.
The scheme provides an image calculation weight and a character calculation weight: the image calculation weight is used to weight the image similarity sub-coefficient to obtain the image sub-coefficient, and the character calculation weight is used to weight the character similarity sub-coefficient to obtain the character sub-coefficient. In practical application, the image calculation weight can be set larger than the character calculation weight to increase the proportion of the image dimension in the calculation.
S352, the image sub-coefficient and the character sub-coefficient are fused to obtain the image similarity coefficient of the first image and each fourth image; the image similarity coefficient is calculated by the following formula,

$$P=\alpha\cdot\frac{S}{N}+\beta\cdot\left(\frac{2Q}{(M_1+M_2)\,q_1}+\frac{\sum_{i=1}^{U}W_i}{q_2}\right)$$

wherein $P$ is the image similarity coefficient, $S$ is the number of identical pixel points, $N$ is the total number of pixel points in the image fingerprint set, $\alpha$ is the image calculation weight, $Q$ is the number of associated participle pairs, $M_1$ is the first number of first participles, $M_2$ is the second number of second participles, $q_1$ is the participle-quantity normalization value, $W_i$ is the number of identical characters of the $i$-th associated participle pair, $U$ is the upper limit value of associated participle pairs, $q_2$ is the identical-character normalization value, and $\beta$ is the character calculation weight.
In the above formula, $S/N$ represents the image sub-coefficient: the more identical pixel points $S$, the larger the corresponding image sub-coefficient. $\frac{2Q}{(M_1+M_2)q_1}$ represents the sub-coefficient of the associated-participle quantity dimension: the larger the number of associated participle pairs $Q$, the larger this sub-coefficient. $\frac{\sum_{i=1}^{U}W_i}{q_2}$ represents the sub-coefficient of the identical-character quantity dimension: the larger the total number of identical characters across the associated participle pairs, the larger this sub-coefficient. Together these two terms form the character sub-coefficient, and the image calculation weight $\alpha$ and the character calculation weight $\beta$ may be preset by the operator.
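The fusion in S351–S352 can be sketched as a weighted sum of an image term and two text terms. Since the patent publishes its formula only as an image, the exact combination, the default weights, and the normalization values below are assumptions for illustration.

```python
def image_similarity_coefficient(same_pixels, total_pixels,
                                 n_associated, first_count, second_count,
                                 same_char_counts,
                                 image_weight=0.6, text_weight=0.4,
                                 participle_norm=1.0, char_norm=1.0):
    """Fuse the image sub-coefficient with the character sub-coefficient.
    `same_char_counts` lists the identical-character count of each
    associated participle pair; weights and norms are illustrative."""
    image_sub = same_pixels / total_pixels
    quantity_sub = (2 * n_associated) / ((first_count + second_count)
                                         * participle_norm)
    char_sub = sum(same_char_counts) / char_norm
    return image_weight * image_sub + text_weight * (quantity_sub + char_sub)
```

Setting `image_weight` above `text_weight`, as the description suggests, tilts the result toward the image dimension.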
It should be noted that, after the image sub-coefficient and the character sub-coefficient are obtained, the scheme fuses them according to the image calculation weight and the character calculation weight; since these two weights can be set as required during fusion, the calculation proportions of the different dimensions can be adjusted flexibly to better meet user needs.
S4, counting the trademarks corresponding to a preset number of fourth images according to the image similarity coefficients, and generating a trademark approximate-retrieval list.
In the scheme, after the image similarity coefficients are obtained through the comprehensive calculation of S1-S3, the corresponding trademarks are screened out according to a preset number (for example, 100) and presented to the user.
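Step S4 then reduces to ranking by similarity coefficient and keeping the top entries; a minimal sketch, where the `(trademark, coefficient)` pair layout is an assumption:

```python
def trademark_retrieval_list(scored_trademarks, preset_number=100):
    """scored_trademarks: iterable of (trademark, similarity_coefficient).
    Return the `preset_number` most similar trademarks, best first (S4)."""
    ranked = sorted(scored_trademarks, key=lambda pair: pair[1], reverse=True)
    return ranked[:preset_number]
```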
Referring to fig. 2, which is a schematic structural diagram of a retrieval system based on artificial intelligence according to an embodiment of the present invention, the retrieval system based on artificial intelligence includes:
the extraction module is used for acquiring a first image of a trademark to be retrieved, performing character extraction on the first image based on a character recognition technology to obtain first character information, and adding first remark information to the first image based on the first character information;
the binarization processing module is used for, after determining that the first character information has been extracted from the first image, deleting the corresponding first character information in the first image to obtain a second image, and performing binarization processing on the second image according to the pixel points meeting the requirements in the second image to obtain a third image;
the similarity calculation module is used for acquiring a binarized fourth image which is pre-configured in an image database of the trademark and second remark information corresponding to the fourth image, and calculating according to the third image, the fourth image, the first remark information and the second remark information to obtain an image similarity coefficient between the first image and each fourth image;
and the generating module is used for counting trademarks corresponding to a preset number of fourth images according to the image similarity coefficient to generate a trademark approximate retrieval list.
The present invention also provides a storage medium having a computer program stored therein, the computer program being executable by a processor to implement the methods provided by the various embodiments described above.
The storage medium may be a computer storage medium or a communication medium. Communication media include any medium that facilitates transfer of a computer program from one place to another. Computer storage media can be any available media that can be accessed by a general-purpose or special-purpose computer. For example, a storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an Application-Specific Integrated Circuit (ASIC). Additionally, the ASIC may reside in user equipment. Of course, the processor and the storage medium may also reside as discrete components in a communication device. The storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present invention also provides a program product comprising execution instructions stored in a storage medium. The at least one processor of the device may read the execution instructions from the storage medium, and the execution of the execution instructions by the at least one processor causes the device to implement the methods provided by the various embodiments described above.
In the embodiment of the terminal or the server, it should be understood that the Processor may be a Central Processing Unit (CPU), other general-purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The artificial intelligence based retrieval method is characterized by comprising the following steps:
acquiring a first image of a trademark to be retrieved, performing character extraction on the first image based on a character recognition technology to obtain first character information, and adding first remark information to the first image based on the first character information;
after judging and extracting the first character information in the first image, deleting the corresponding first character information in the first image to obtain a second image, and carrying out binarization processing on the second image according to pixel points meeting requirements in the second image to obtain a third image;
acquiring a binarized fourth image which is pre-configured in an image database of a trademark and second remark information corresponding to the fourth image, and calculating according to the third image, the fourth image, the first remark information and the second remark information to obtain an image similarity coefficient of the first image and each fourth image;
counting trademarks corresponding to a preset number of fourth images according to the image similarity coefficient to generate a trademark approximate retrieval list;
the method for acquiring the first image of the trademark to be retrieved, performing character extraction on the first image based on a character recognition technology to obtain first character information, and adding first remark information to the first image based on the first character information includes the following steps:
the method comprises the steps of photographing an image of a trademark to be retrieved to obtain a photographed image, if the resolution of the photographed image is judged to be larger than a preset resolution, generating a first capturing frame, and initially positioning the first capturing frame to enable the first capturing frame to be located at a first position of the photographed image;
if the confirmation information of the user is received, taking the image in the first capturing frame at the first position as a first image;
if the dragging information of the intercepting frame of the user is received, dragging the first intercepting frame at the first position to the second position according to the dragging information of the intercepting frame, automatically correcting the second position, and taking the image in the first intercepting frame at the second position as a first image;
if the first image is identified to have the first character information based on the character identification technology, adding first remark information to the first image based on the first character information, and acquiring a character pixel coordinate set corresponding to the first character information;
the method includes the steps of photographing an image to be retrieved to obtain a photographed image, generating a first capturing frame if the resolution of the photographed image is judged to be greater than a preset resolution, and initially positioning the first capturing frame to enable the first capturing frame to be located at a first position of the photographed image, and includes the following steps:
performing coordinate processing on the photographed image by taking a central pixel point of the photographed image as an original point, calling a preset first intercepting frame, and determining an intercepting frame central point of the first intercepting frame;
and aligning the center point of the capturing frame with the center pixel point of the photographed image, and finishing initial positioning of the first capturing frame to enable the first capturing frame to be located at the first position of the photographed image.
2. The artificial intelligence based retrieval method according to claim 1,
if the capture frame dragging information of the user is received, dragging the first capture frame at the first position to the second position according to the capture frame dragging information, automatically correcting the second position, and taking the image in the first capture frame at the second position as the first image, wherein the method comprises the following steps:
moving the first capture frame according to the capture frame dragging information, dragging the first capture frame to a second position, and acquiring first pixel values of all pixel points of the photographed image in the first capture frame when the first capture frame is dragged to the second position;
determining a first pixel point of which the first pixel value is in a first preset pixel interval to obtain a first pixel point set, and extracting an extreme value pixel point coordinate in the first pixel point set;
calculating according to the extreme value X-axis maximum coordinate, the extreme value X-axis minimum coordinate, the extreme value Y-axis maximum coordinate and the extreme value Y-axis minimum coordinate included in the extreme value pixel point coordinates to obtain a corrected coordinate point;
and overlapping the center point of the first intercepting frame with the correction coordinate point to correct and adjust the first intercepting frame, and taking the position corresponding to the dragged first intercepting frame after correction and adjustment as a final second position.
3. The artificial intelligence based retrieval method according to claim 2,
the method for calculating the maximum coordinate of the extreme value X axis, the minimum coordinate of the extreme value X axis, the maximum coordinate of the extreme value Y axis and the minimum coordinate of the extreme value Y axis according to the extreme value pixel point coordinates comprises the following steps of:
calculating according to the extreme value X-axis maximum coordinate and the extreme value X-axis minimum coordinate included in the extreme value pixel point coordinates to obtain a first X-axis coordinate of the correction coordinate point, and calculating according to the extreme value Y-axis maximum coordinate and the extreme value Y-axis minimum coordinate to obtain a first Y-axis coordinate of the correction coordinate point;
determining all pixel points smaller than the first X-axis coordinate to obtain a first pixel point number, and determining all pixel points larger than the first X-axis coordinate to obtain a second pixel point number;
determining all pixel points smaller than the first Y-axis coordinate to obtain a third pixel point number, and determining all pixel points larger than the first Y-axis coordinate to obtain a fourth pixel point number;
adjusting the first X-axis coordinate according to the number of the first pixel points and the number of the second pixel points to obtain a second X-axis coordinate, and adjusting the first Y-axis coordinate according to the number of the third pixel points and the number of the fourth pixel points to obtain a second Y-axis coordinate;
the second X-axis coordinate and the second Y-axis coordinate of the corrected coordinate point are obtained by the following formulas,

$$X_2=\frac{X_{\max}+X_{\min}}{2}+\gamma\cdot\frac{n_2-n_1}{k},\qquad Y_2=\frac{Y_{\max}+Y_{\min}}{2}+\delta\cdot\frac{n_4-n_3}{k}$$

wherein $n_1$ is the number of the first pixel points, $n_2$ is the number of the second pixel points, $X_2$ is the second X-axis coordinate, $X_{\max}$ is the extreme-value X-axis maximum coordinate, $X_{\min}$ is the extreme-value X-axis minimum coordinate, $k$ is the quantity normalization value, $\gamma$ is the weight value of the X-axis coordinate, $Y_2$ is the second Y-axis coordinate, $Y_{\max}$ is the extreme-value Y-axis maximum coordinate, $Y_{\min}$ is the extreme-value Y-axis minimum coordinate, $n_3$ is the number of the third pixel points, $n_4$ is the number of the fourth pixel points, and $\delta$ is the weight value of the Y-axis coordinate.
4. The artificial intelligence based retrieval method according to claim 2,
after judging that the first text information in the first image is extracted, deleting the corresponding first text information in the first image to obtain a second image, and performing binarization processing on the second image according to pixel points meeting requirements in the second image to obtain a third image, wherein the method comprises the following steps of:
acquiring a first image and a character pixel point corresponding to the character pixel coordinate set, and replacing the current pixel value of the character pixel point with a preset pixel value so as to delete corresponding first character information in the first image to obtain a second image;
determining pixel points in a preset pixel interval in the second image as pixel points meeting the requirements, and replacing the pixel values of all the pixel points meeting the requirements with black pixel values;
and replacing the pixel values of all the pixel points which are not positioned in the preset pixel interval with white pixel values so as to realize binarization processing on the second image and obtain a third image.
5. The artificial intelligence based retrieval method of claim 4,
the obtaining of the binarized fourth image preconfigured in the image database of the trademark and the second remark information corresponding to the fourth image is performed according to the third image, the fourth image, the first remark information and the second remark information, so as to obtain an image similarity coefficient between the first image and each fourth image, and includes:
determining first initial pixel points of a third image, and sequentially acquiring pixel values of the third image according to the first initial pixel points to obtain an image fingerprint set of the third image;
determining second initial pixel points corresponding to the first initial pixel point coordinates in the fourth image, and sequentially acquiring pixel values of the fourth image according to the second initial pixel points to obtain an image fingerprint set of the fourth image;
determining the number of identical pixel points in the image fingerprint set of the third image and the image fingerprint set of the fourth image to obtain an image similarity sub-coefficient;
respectively performing character recognition on the first remark information and the second remark information, eliminating the irrelevant characters in both to obtain first associated characters and second associated characters, and determining the identical characters in the first and second associated characters to obtain a character similarity sub-coefficient;
and performing fusion calculation according to the image similarity sub-coefficient and the character similarity sub-coefficient to obtain an image similarity coefficient of the first image and each fourth image.
6. The artificial intelligence based retrieval method of claim 5,
the determining the number of identical pixel points in the image fingerprint set of the third image and the image fingerprint set of the fourth image to obtain the image similarity sub-coefficient comprises:
counting the pixel points with the same coordinate and the same pixel value in the image fingerprint set of the third image and the image fingerprint set of the fourth image to obtain the number of identical pixel points;
and calculating according to the number of identical pixel points and the total number of pixel points in the image fingerprint set to obtain the image similarity sub-coefficient.
7. The artificial intelligence based retrieval method of claim 6,
the respectively performing character recognition on the first remark information and the second remark information, eliminating the irrelevant characters in both to obtain first associated characters and second associated characters, and determining the identical characters in the first and second associated characters to obtain a character similarity sub-coefficient, comprises:
calling a preset irrelevant information corresponding table, wherein the irrelevant information corresponding table comprises a plurality of irrelevant characters, and performing word segmentation processing on the first remark information and the second remark information respectively to obtain corresponding first participles and second participles;
if the first remark information and the second remark information are judged to have a first word segmentation and a second word segmentation which correspond to the irrelevant words in the irrelevant information corresponding table, deleting the corresponding first word segmentation and second word segmentation from the first remark information and the second remark information;
counting a first number and a second number of the first participles and the second participles, taking the smaller number of the first number and the second number as a to-be-compared number, sequentially obtaining the participles corresponding to the to-be-compared number to be compared with the participles corresponding to the non-to-be-compared number one by one, and associating the participles corresponding to the to-be-compared number with the participles corresponding to the nearest non-to-be-compared number;
and counting the number of the associated participles and the number of the same characters in each associated participle, and calculating according to the number of the associated participles, the number of the same characters, the first number and the second number to obtain a character similarity sub-coefficient.
8. The artificial intelligence based retrieval method of claim 7,
the sequentially obtaining the participles corresponding to the quantity to be compared and the participles corresponding to the quantity not to be compared one by one, and associating the participles corresponding to the quantity to be compared with the nearest participles corresponding to the quantity not to be compared, comprises:
obtaining a participle on the to-be-compared side, comparing its characters one by one with the characters of the participles on the other side, and if two participles are determined after comparison to share the most identical characters,
associating those two participles.
9. The artificial intelligence based retrieval method of claim 8,
the obtaining of the image similarity coefficient of the first image and each fourth image by performing fusion calculation according to the image similarity sub-coefficient and the character similarity sub-coefficient comprises:
weighting the image similarity sub-coefficient according to the image calculation weight to obtain an image sub-coefficient, and weighting the character similarity sub-coefficient according to the character calculation weight to obtain a character sub-coefficient;
fusing the image sub-coefficient and the character sub-coefficient to obtain the image similarity coefficient of the first image and each fourth image, the image similarity coefficient being calculated by the following formula:

$$P = \omega_1 \cdot \frac{S}{Z} + \omega_2 \cdot \left( \frac{G}{F_1 + F_2} \cdot A_1 + A_2 \cdot \sum_{i=1}^{Y} D_i \right)$$

wherein $P$ is the image similarity coefficient, $S$ is the number of identical pixel points, $Z$ is the total number of pixel points in the image fingerprint set, $\omega_1$ is the image calculation weight, $G$ is the number of associated participles, $F_1$ is the first number of first participles, $F_2$ is the second number of second participles, $A_1$ is the normalization value for the participle count, $D_i$ is the number of identical characters of the $i$-th associated participle, $Y$ is the upper limit of associated participles, $A_2$ is the normalization value for identical characters, and $\omega_2$ is the character calculation weight.
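Based on the variable definitions in claim 9, a minimal sketch of the fusion calculation follows; the exact combination in the original formula image is not recoverable from the page, so the additive-weighted form below is an assumption, as are all parameter names:

```python
def similarity_coefficient(same_px, total_px, img_weight,
                           n_assoc, first_n, second_n, assoc_norm,
                           same_char_counts, char_norm, text_weight):
    """Fuse a weighted pixel-overlap sub-coefficient with a weighted
    character sub-coefficient into one image similarity coefficient."""
    image_sub = same_px / total_px                  # identical pixels over fingerprint size
    word_sub = n_assoc / (first_n + second_n) * assoc_norm
    char_sub = char_norm * sum(same_char_counts)    # identical characters over all pairs
    return img_weight * image_sub + text_weight * (word_sub + char_sub)
```

The two normalization values (`assoc_norm`, `char_norm`) would in practice be chosen so each term stays in a comparable range before weighting.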
10. Retrieval system based on artificial intelligence, characterized by, includes:
the extraction module is used for acquiring a first image of a trademark to be retrieved, performing character extraction on the first image based on a character recognition technology to obtain first character information, and adding first remark information to the first image based on the first character information;
the binarization processing module is used for, after determining that first character information has been extracted from the first image, deleting the corresponding first character information from the first image to obtain a second image, and performing binarization processing on the second image according to the pixel points in the second image that meet requirements, to obtain a third image;
the similarity calculation module is used for acquiring pre-configured binarized fourth images from a trademark image database and the second remark information corresponding to each fourth image, and calculating an image similarity coefficient between the first image and each fourth image according to the third image, the fourth images, the first remark information and the second remark information;
the generating module is used for counting trademarks corresponding to a preset number of fourth images according to the image similarity coefficient and generating a trademark approximate retrieval list;
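The binarization performed by the module above can be sketched as a simple threshold over grayscale pixel values; the claim does not fix the threshold or the rule for "pixel points meeting requirements", so both are assumptions here:

```python
def binarize(gray_image, threshold=128):
    """Binarize a grayscale image (rows of 0-255 values): pixels at or
    above the threshold become 1 (foreground), the rest become 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray_image]
```

In a real pipeline the threshold would typically be derived from the image itself (e.g. an adaptive or Otsu-style choice) rather than fixed.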
the acquiring a first image of a trademark to be retrieved, performing character extraction on the first image based on a character recognition technology to obtain first character information, and adding first remark information to the first image based on the first character information, comprises:
photographing the trademark to be retrieved to obtain a photographed image, and, if the resolution of the photographed image is judged to be greater than a preset resolution, generating a first capture frame and initially positioning the first capture frame so that it is located at a first position of the photographed image;
if confirmation information from the user is received, taking the image within the first capture frame at the first position as the first image;
if capture-frame dragging information from the user is received, dragging the first capture frame from the first position to a second position according to the dragging information, automatically correcting the second position, and taking the image within the first capture frame at the second position as the first image;
if first character information is recognized in the first image based on the character recognition technology, adding first remark information to the first image based on the first character information, and acquiring the character pixel coordinate set corresponding to the first character information;
the photographing the trademark to be retrieved to obtain a photographed image, and, if the resolution of the photographed image is judged to be greater than the preset resolution, generating a first capture frame and initially positioning the first capture frame so that it is located at the first position of the photographed image, comprises:
establishing coordinates on the photographed image with its central pixel point as the origin, calling a preset first capture frame, and determining the centre point of the first capture frame;
aligning the centre point of the capture frame with the central pixel point of the photographed image, thereby completing the initial positioning of the first capture frame so that it is located at the first position of the photographed image.
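The initial positioning above reduces to aligning two centre points. A minimal sketch, where the function name and the top-left-corner return convention are assumptions for illustration:

```python
def initial_position(img_w, img_h, frame_w, frame_h):
    """Return the top-left corner (x, y) that places the capture
    frame's centre on the photographed image's centre pixel."""
    return ((img_w - frame_w) // 2, (img_h - frame_h) // 2)
```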
CN202211269591.3A 2022-10-18 2022-10-18 Retrieval method and system based on artificial intelligence Active CN115344738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211269591.3A CN115344738B (en) 2022-10-18 2022-10-18 Retrieval method and system based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN115344738A true CN115344738A (en) 2022-11-15
CN115344738B CN115344738B (en) 2023-02-28

Family

ID=83957212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211269591.3A Active CN115344738B (en) 2022-10-18 2022-10-18 Retrieval method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN115344738B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108763380A (en) * 2018-05-18 2018-11-06 徐庆 Brand recognition search method, device, computer equipment and storage medium
US20190114505A1 (en) * 2016-04-14 2019-04-18 Ader Bilgisayar Hizmetleri Ve Ticaret A.S. Content based search and retrieval of trademark images
CN109670072A (en) * 2018-11-01 2019-04-23 广州企图腾科技有限公司 A kind of trade mark similarity-rough set method extracted based on interval
CN110378350A (en) * 2019-07-23 2019-10-25 中国工商银行股份有限公司 A kind of method, apparatus and system of Text region
CN112347284A (en) * 2020-09-16 2021-02-09 华南师范大学 Combined trademark image retrieval method

Also Published As

Publication number Publication date
CN115344738B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
CN109697416B (en) Video data processing method and related device
CN110569731B (en) Face recognition method and device and electronic equipment
CN109635686B (en) Two-stage pedestrian searching method combining human face and appearance
CN110866466B (en) Face recognition method, device, storage medium and server
EP1530157A1 (en) Image matching system using 3-dimensional object model, image matching method, and image matching program
CN107423306B (en) Image retrieval method and device
CN110008943B (en) Image processing method and device, computing equipment and storage medium
CN111930985A (en) Image retrieval method and device, electronic equipment and readable storage medium
Timmerman et al. Video camera identification from sensor pattern noise with a constrained convnet
CN111459922A (en) User identification method, device, equipment and storage medium
CN110245573A (en) A kind of register method, apparatus and terminal device based on recognition of face
CN110781195B (en) System, method and device for updating point of interest information
Feng et al. A novel saliency detection method for wild animal monitoring images with WMSN
CN111259792A (en) Face living body detection method based on DWT-LBP-DCT characteristics
CN111476070A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115344738B (en) Retrieval method and system based on artificial intelligence
JP4967045B2 (en) Background discriminating apparatus, method and program
CN116958795A (en) Method and device for identifying flip image, electronic equipment and storage medium
CN112214639B (en) Video screening method, video screening device and terminal equipment
CN113065559B (en) Image comparison method and device, electronic equipment and storage medium
CN111402281B (en) Book edge detection method and device
CN112101479B (en) Hair style identification method and device
WO2017143979A1 (en) Image search method and device
CN112434547B (en) User identity auditing method and device
CN113780424A (en) Real-time online photo clustering method and system based on background similarity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240411

Address after: Room 505, 5th Floor, Building 4, No. 38 Linfeng Second Road, Haidian District, Beijing, 100089

Patentee after: BEIJING ZHIGUO TECHNOLOGY CO.,LTD.

Country or region after: China

Address before: 226010 908, Building C, Entrepreneurship Outsourcing Center, Nantong Economic and Technological Development Zone, Jiangsu Province

Patentee before: Nantong Zhiguo Technology Co.,Ltd.

Country or region before: China
