CN110929057A - Image processing method, device and system, storage medium and electronic device - Google Patents

Image processing method, device and system, storage medium and electronic device Download PDF

Info

Publication number
CN110929057A
Authority
CN
China
Prior art keywords
target
image
target image
image object
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811005314.5A
Other languages
Chinese (zh)
Inventor
蔡健
张晓泉
程昊
詹焯扬
袁子斌
李文文
邬龙
江涛
乔宝琛
杨妤卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Blue lantern fish Intelligent Technology Co.,Ltd.
Original Assignee
Shenzhen Blue Lantern Fish Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Blue Lantern Fish Intelligent Technology Co ltd filed Critical Shenzhen Blue Lantern Fish Intelligent Technology Co ltd
Priority to CN201811005314.5A priority Critical patent/CN110929057A/en
Publication of CN110929057A publication Critical patent/CN110929057A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method, apparatus and system, a storage medium and an electronic apparatus. The method includes: acquiring a reference image for retrieval; acquiring image objects contained in the reference image, where the image objects are identified using an image object identification model obtained by machine training on sample images; acquiring a selected target image object from the identified image objects; and retrieving a target image that matches the target image object. The invention solves the technical problem of low retrieval accuracy in the similar-image retrieval approaches provided in the related art.

Description

Image processing method, device and system, storage medium and electronic device
Technical Field
The present invention relates to the field of computers, and in particular, to an image processing method, apparatus and system, a storage medium, and an electronic apparatus.
Background
At present, the following method is generally adopted for similar image retrieval: extracting a characteristic vector of a reference image from the reference image; and searching a feature vector similar to the feature vector in an image database by using the extracted feature vector, and further retrieving a target image similar to the reference image.
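For context, the following is a minimal sketch of this conventional global-feature retrieval; the feature extractor, the in-memory database and all names are illustrative assumptions rather than part of this disclosure.

    # Minimal sketch of global-feature similar-image retrieval (illustrative only).
    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two global feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def retrieve_by_global_feature(query_vec, db_vecs, db_ids, top_k=10):
        # Rank every database image by the similarity of its single global vector.
        scores = [cosine_similarity(query_vec, v) for v in db_vecs]
        order = np.argsort(scores)[::-1][:top_k]
        return [(db_ids[i], scores[i]) for i in order]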
However, because the extracted feature vector has limited capability to characterize the reference image, it often happens that the feature vector of a retrieved target image is close in distance to the extracted feature vector while the target image itself is not similar to the reference image, or vice versa. That is, the similar-image retrieval approach provided in the related art has the problem of low retrieval accuracy.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device, an image processing system, a storage medium and an electronic device, which are used for at least solving the technical problem that a similar image retrieval mode provided in the related technology has low retrieval accuracy.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including: acquiring a reference image for retrieval; acquiring an image object contained in the reference image, wherein the image object is identified by using an image object identification model, and the image object identification model is obtained by using a sample image to perform machine training; acquiring a selected target image object in the identified image objects; retrieving a target image that matches the target image object.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus including: a first acquisition unit configured to acquire a reference image for retrieval; a second obtaining unit, configured to obtain an image object included in the reference image, where the image object is identified by using an image object identification model, and the image object identification model is obtained by performing machine training using a sample image; a third acquiring unit, configured to acquire a selected target image object from the identified image objects; and the retrieval unit is used for retrieving the target image matched with the target image object.
According to still another aspect of the embodiments of the present invention, there is also provided an image processing system including: the image processing device comprises a target terminal and a server, wherein the target terminal comprises the image processing device of any one of the above items, and the server is used for identifying the image object contained in the reference image by using an image object identification model.
According to a further aspect of the embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is configured to perform the above method when executed.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the method by the computer program.
In the embodiments of the invention, a reference image for retrieval is acquired; image objects contained in the reference image are acquired, where the image objects are identified using an image object identification model obtained by machine training on sample images; a selected target image object is acquired from the identified image objects; and a target image matching the target image object is retrieved. Because the target image is retrieved based on the target image object in the reference image rather than on the global image similarity of the reference image, target images designed with image objects similar to those of the reference image can be retrieved accurately, which improves the accuracy of the retrieval results and solves the technical problem of low retrieval accuracy in the similar-image retrieval approaches provided in the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of an application environment of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating an alternative image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative image processing method according to an embodiment of the invention;
FIG. 4 is a schematic flow diagram of an alternative image processing method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an alternative image processing apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of an alternative image processing system according to an embodiment of the present invention; and
fig. 8 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present invention, there is provided an image processing method. Alternatively, the above-described image processing method may be applied, but not limited to, in an application environment as shown in fig. 1. As shown in fig. 1, a client in a terminal 102 acquires a reference image for retrieval, and transmits the acquired reference image to a first server 104 via a network, where the first server 104 identifies an image object included in the reference image using an image object identification model, the image object identification model being obtained by performing machine training using a sample image, and transmits the identified image object to the terminal 102 via the network. After acquiring the image objects contained in the reference image, the terminal 102 acquires the selected target image object from the identified image objects, and retrieves the target image matched with the target image object.
For example, the terminal 102 may retrieve a target image matching the target image object from an image database stored in the terminal 102, or the terminal 102 may transmit the target image object to the second server 106 via a network, and the second server 106 may retrieve the target image matching the target image object from the image database 108 using the target image object and transmit the retrieved target image to the terminal 102 via the network.
Optionally, in this embodiment, the terminal 102 may include, but is not limited to, at least one of the following: mobile phones, tablet computers, notebook computers, desktop computers, and the like. The network may include, but is not limited to, a wired network, a wireless network, wherein the wired network includes: local area network, metropolitan area network, wide area network, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communication. The first server 104 and the second server 106 may be the same server or different servers, and may include but are not limited to at least one of the following: PCs and other devices for providing image object recognition services and/or image retrieval services. The above is only an example, and this is not limited in this embodiment.
Optionally, in this embodiment, as an optional implementation manner, as shown in fig. 2, the image processing method may include:
s202, acquiring a reference image for retrieval;
s204, acquiring an image object contained in the reference image, wherein the image object is identified by using an image object identification model, and the image object identification model is obtained by using a sample image to perform machine training;
s206, acquiring a selected target image object in the identified image objects;
s208, searching the target image matched with the target image object.
Optionally, the above image processing method may be applied, but is not limited, to the retrieval of similar images, for example, the retrieval of similar trademark images.
In the related art, similar image retrieval is usually implemented based on global similarity: a feature vector of the reference image is extracted from the reference image, and the extracted feature vector is used to search an image database for similar feature vectors, so as to retrieve target images similar to the reference image. Because the extracted feature vector has insufficient capability to represent the reference image, this approach suffers from low retrieval accuracy.
In the present application, by contrast, a reference image for retrieval is acquired; image objects contained in the reference image are acquired, where the image objects are identified using an image object identification model obtained by machine training on sample images; a selected target image object is acquired from the identified image objects; and a target image matching the target image object is retrieved. Because the target image is retrieved based on the target image object in the reference image, target images designed with image objects similar to those of the reference image can be retrieved accurately, which improves the accuracy of the retrieval results and solves the technical problem of low retrieval accuracy in the similar-image retrieval approaches provided in the related art.
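The following sketch shows how steps S202 to S208 fit together; the three callables stand in for the recognition model, the user selection step and the retrieval service described below and are assumptions introduced only for illustration.

    def process_reference_image(reference_image, detect_objects, select_objects, search_similar):
        # S204: identify candidate image objects using the trained recognition model.
        image_objects = detect_objects(reference_image)
        # S206: obtain the object(s) the user keeps, e.g. by clicking graphic frames.
        target_objects = select_objects(image_objects)
        # S208: retrieve target images matching each selected target image object.
        return [search_similar(obj) for obj in target_objects]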
The execution subject of each step may be a target terminal, and the target terminal may include but is not limited to: mobile phones, tablet computers, notebook computers, desktop computers, and the like.
In step S202, a reference image for retrieval is acquired.
The above reference image may be an image used for similar-image (e.g., trademark image) retrieval. For example, when a user needs to know whether a certain trademark picture has an approximate trademark, the user uploads the trademark picture to a retrieval platform or retrieval tool so that retrieval can be performed on selected areas of the picture.
The above-mentioned manner of acquiring the reference image may be various, and may include but is not limited to: and obtaining the reference image through a target webpage on the target terminal or a client installed on the target terminal. The manner of acquisition may include, but is not limited to:
(1) detecting a reference image or indication information (e.g., link information, storage location information) of the reference image in a target area on a target terminal;
(2) popping up a selection window for selecting a reference image stored by a target terminal by clicking a first button (e.g., clicking an upload button), and detecting a selection operation performed on the reference image stored by the target terminal;
(3) and calling a photographing tool on the target terminal, and performing photographing operation by using the photographing tool to obtain the reference image.
When the image is processed through a web page, the target terminal may display the image using a JavaScript web page picture display tool.
When the reference image is acquired, the uploaded reference image can be displayed on a screen of the target terminal, so that a user can judge whether the acquired reference image is correct.
For example, after clicking an upload button on a web page or in a client, the user may select the picture (reference image) to be uploaded or input a picture address; the picture is uploaded to the platform or software, and the uploaded picture is displayed to the user in the front-end interface.
An example is given below. As shown in fig. 3, the interface displayed in fig. 3 may be a page displayed on the target terminal or a display interface of the client on the target terminal. The user uploads the reference image by dragging it to a designated area or by clicking it. At this time, the target terminal acquires the reference image for retrieval.
In step S204, an image object included in the reference image is acquired, where the image object is identified by using an image object identification model, and the image object identification model is obtained by performing machine training using the sample image.
The reference image may include a plurality of image objects, and each image object is a part of the reference image, which may include but is not limited to: a pattern, text, numbers, letters, or a combination of at least one of the foregoing. Different image objects may correspond to different regions in the reference image.
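As an illustration only, an image object of this kind can be represented by a simple data structure such as the following; the field names are assumptions and are not mandated by this disclosure.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class ImageObject:
        box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) region within the reference image
        object_type: str                # e.g. "pattern", "text", "numbers", "letters"
        score: float = 1.0              # confidence reported by the recognition model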
After the reference image is acquired, an image object contained in the reference image may be identified from the reference image using an image object identification model (e.g., to automatically identify a likely target region in the reference image).
The image object recognition model may be a network model constructed through Artificial Intelligence (AI). For example, an initial neural network model is constructed based on a neural-network object extraction technique; after the initial neural network model is trained on a large number of sample graphics, its parameters are optimized, and the image object recognition model is thereby obtained.
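The disclosure does not prescribe a specific network architecture. As one illustrative assumption, an off-the-shelf detector such as torchvision's Faster R-CNN could play the role of the image object recognition model after fine-tuning on sample images; the sketch below only shows how candidate regions would be extracted with such a model.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    def detect_image_objects(image_path, score_threshold=0.5):
        # Illustrative stand-in for the trained image object recognition model.
        # (pretrained=True is the legacy torchvision flag; newer versions use weights=.)
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
        model.eval()
        image = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            output = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
        keep = output["scores"] >= score_threshold
        # Each kept box is a candidate image object (key region) of the reference image.
        return output["boxes"][keep].tolist(), output["scores"][keep].tolist()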
For example, the neural network may be used to locate key regions in the trademark image, automatically extract design elements in the trademark image, and extract a plurality of candidate regions (i.e., image objects) via the AI.
The image object contained in the reference image may be acquired in various ways. May include, but is not limited to: and local acquisition is carried out through the target terminal, and acquisition is carried out through the interaction of the target terminal and the server.
As an alternative implementation, the image object contained in the reference image may be identified by an image object identification model contained in the target terminal. The image object recognition model may be embedded in a client installed in the target terminal.
After the target terminal acquires the reference image, an image object identification model in the target terminal can be called so as to identify the image object contained in the reference image from the reference image.
As another optional implementation, the target terminal may send the acquired reference image to the server through a web page or a client; after receiving the reference image, the server recognizes an image object included in the reference image using an image object recognition model, and transmits the recognized image object to the target terminal through a target message (e.g., returns a candidate region box in a brand picture detected by an AI algorithm to the front end).
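A hypothetical server endpoint for this terminal-server interaction is sketched below; Flask, the route name and the stub model are assumptions introduced for illustration and are not part of the disclosure.

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def run_recognition_model(image_file):
        # Placeholder for the trained image object recognition model on the server.
        return [[10, 10, 100, 100]], [0.9]

    @app.route("/identify_objects", methods=["POST"])
    def identify_objects():
        image_file = request.files["reference_image"]   # reference image uploaded by the terminal
        boxes, scores = run_recognition_model(image_file)
        # Target message carrying the identified image objects (candidate region boxes).
        return jsonify({"image_objects": [{"box": b, "score": s} for b, s in zip(boxes, scores)]})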
In step S206, the selected target image object among the recognized image objects is acquired.
Optionally, after the image objects contained in the reference image are acquired, the acquired image objects may be displayed on the screen of the target terminal, for example framed by graphic frames.
For the mode of image processing through the web page, the acquired image object may be displayed on the upper layer of the corresponding region of the reference image (drawn in the corresponding region of the original input picture) through web page processing.
For the mode of image processing by the client, the image processing tool in the client can be used to display the acquired image object on the upper layer of the corresponding area of the reference image (drawn on the corresponding area of the original input picture) in the display area of the reference image of the client.
Alternatively, after the image objects are displayed on the screen of the target terminal, the target terminal may detect a selection operation performed on one or more of the image objects, where the selected one or more image objects are the target image objects.
The target terminal may invoke a picture object selection tool (for example, in the web page or the client) through human-computer interaction on a graphical interface to acquire the target image object. From the user's perspective, one or more graphic elements to be retained can be clicked through this interaction, and unneeded elements can be removed, so as to retrieve similar graphics.
Optionally, after the image objects are displayed on the screen of the target terminal in the form of graphic frames, the user may click on the graphic frames to retain or remove them and directly select the areas used for subsequent target image retrieval, without going to other tools, platforms or software for cutting or editing, without regenerating a new image, and without uploading a new image again.
When the target image object is selected, the area to be searched can be selected by clicking with a mouse (for example, clicking once selects an area, and clicking it again deselects it), and a plurality of areas can be selected at the same time. The selected graphic frame (identifying the target image object) may be marked with a different color; for example, the selected graphic frame may change color and return to the original color when deselected.
Optionally, the target terminal may further obtain the target image object according to a predetermined policy, where the predetermined policy is used to select the target image object, and the policy may include, but is not limited to: 1 or N image objects are randomly selected, and image objects having a target type (e.g., graphics, text, etc.) are selected.
In step S208, a target image matching the target image object is retrieved.
After the target image object is acquired, similar graph retrieval may be performed on the target image object to acquire a target image. For example, a graphical-like search of the selected area within the graphical frame may be initiated by clicking a search button.
The retrieval may be performed locally by the target terminal, or the target terminal may transmit the target image object to the server and the server may perform the retrieval in a graphic database. The graphic database may also be located in another device, separate from the server, with which the server can interact.
The retrieval policy may be that the similarity between the target image and the target image object is greater than or equal to a target threshold, or that the similarity between an image object contained in the target image and the target image object is greater than or equal to the target threshold. In the case where the target image contains a plurality of image objects, the retrieval policy may be that the maximum of the similarities between the plurality of image objects contained in the target image and the target image object is greater than or equal to the target threshold, or that the sum of these similarities is greater than or equal to the target threshold. The specific retrieval policy may be set flexibly as needed, which is not specifically limited in this embodiment.
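A sketch of these retrieval policies follows, assuming per-object feature vectors are already available; the max/sum switch mirrors the two aggregation alternatives described above, and all names are illustrative.

    import numpy as np

    def object_similarity(a, b):
        # Cosine similarity between two per-object feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def image_matches(candidate_object_vecs, target_object_vec, threshold=0.8, aggregate="max"):
        # candidate_object_vecs: feature vectors of the image objects in one candidate target image.
        sims = [object_similarity(v, target_object_vec) for v in candidate_object_vecs]
        score = max(sims) if aggregate == "max" else sum(sims)
        return score >= threshold, score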
The number of target image objects may be one or more. When there is one target image object, retrieval may be performed directly with that target image object to obtain a target image matching it. When there are a plurality of target image objects, the target image can be retrieved in various ways.
As an alternative embodiment, target images that match each of the plurality of target image objects may be retrieved separately. The background service may perform a similarity lookup for a plurality of features (corresponding to respective target image objects).
As another optional implementation, at least two target image objects in the plurality of target image objects may be combined to obtain at least one combined image object; a target image matching the at least one combined image object is retrieved.
The number of the above combinations may be one or more. A target image object may belong to at least one combination (different combinations may contain target image objects with intersections) or may not belong to any combination.
The combining step may be performed by the target terminal. The target terminal may detect indication information indicating at least two target image objects to be combined. The target terminal may also automatically perform the combining operation according to a predetermined policy (e.g., for defining the number of combinations, defining the types of combinations, etc.).
The combining step may be performed by a server. The server may automatically perform the combining operation according to a predetermined policy (e.g., for defining the number of combinations, defining the type of combination, etc.).
Optionally, in this embodiment, after retrieving the target images matching the target image object, if there are a plurality of target images, the target images are sorted in order of similarity with the target image object from high to low; and displaying the sequenced target images on a screen of the target terminal.
After the search is completed, the search results may be presented to the user in a picture-displayed manner, with the more similar target images being located further forward.
The above sorting step may be performed by the target terminal, or may be performed by the server. In the case of target image retrieval by a target terminal, the target terminal may rank the plurality of target images according to their similarity to the target image object. In the case of performing the target image retrieval by the server, the server may sort the plurality of target images according to the similarity between the target images and the target image objects and transmit the results of the sorting to the target terminal, or may transmit each target image and the similarity between the target image and the target image object to the target terminal, and the target terminal sorts the plurality of target images according to the similarity between the target images and the target image objects.
The above-described similarity may be a similarity between feature vectors. In calculating the similarity between the target image and the target image object, the similarity between the target image as a whole and the target image object may be calculated, or the similarity between the image object included in the target image and the target image object may be calculated, and the highest similarity may be selected as the similarity between the target image and the target image object, or the weighted similarity between each image object included in the target image and the target image object may be calculated as the similarity between the target image and the target image object.
The number of target image objects may be one or more. When there are a plurality of target image objects, the target similarity between each of the plurality of target images and each of the plurality of target image objects may be calculated; for each target image, the target similarities between that target image and the plurality of target image objects are weighted and summed to obtain the weighted similarity corresponding to that target image; and the plurality of target images are sorted by weighted similarity from high to low.
The same or different weights may be set for different target image objects. In the sorting, the weighted similarity between each target image and the target image objects may be calculated first: for one target image, the target similarity between that target image and each of the plurality of target image objects is calculated, and the target similarities are weighted and summed according to the weight of each target image object to obtain the weighted similarity corresponding to that target image; the plurality of target images are then sorted according to their weighted similarities.
For pictures that appear multiple times in the retrieval result, a weighted score can be computed according to their similarity to each retrieval feature (target image object); pictures that appear only once are scored by their similarity to the retrieval feature. The results are then presented to the user in descending order of score.
For example, suppose there are 4 target image objects, each with the same weight (25%), and the similarities between a target image and the target image objects are 0.8, 0.5, 0.4 and 0.2, respectively. The weighted similarity of the target image is then: (0.8 + 0.5 + 0.4 + 0.2) × 25% = 0.475.
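The same calculation, expressed as code for clarity (the numbers are taken from the example above):

    weights = [0.25, 0.25, 0.25, 0.25]   # equal weight (25%) for each target image object
    similarities = [0.8, 0.5, 0.4, 0.2]  # similarity of one target image to each target image object

    weighted_similarity = sum(w * s for w, s in zip(weights, similarities))
    print(weighted_similarity)           # 0.475, matching the value in the text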
Alternatively, in this embodiment, when there are a plurality of target image objects, the target image may be displayed on the target terminal in a plurality of ways. The target images may be displayed in a combined manner (without distinguishing the target image objects) according to the similarity (e.g., weighted similarity) between the target images and the target image objects, or the target images may be displayed on a page basis according to the target image objects, with one target image matching the target image object being displayed in each page. Or, a matching item list may be displayed according to the target image object, where one matching item in the matching item list corresponds to one target image object.
As an alternative implementation, the target images may be sorted according to their similarity (for a plurality of target image objects, the weighted similarity) and displayed on the screen of the target terminal. All the target images may be displayed at once, or a predetermined number of target images may be displayed at a time; when an operation for switching the display is detected (for example, clicking a "next page" button, or a switching operation automatically triggered when the page is slid to the bottom), the next predetermined number of target images is displayed.
As another alternative embodiment, a target image matching the first image object may be displayed in a first page on a target screen of the target terminal; in a case where a target operation performed on a target button on a target screen is detected, switching to a second page and displaying a target image matching a second image object in the second page, wherein the plurality of target image objects include a first image object and a second image object.
Different pages can be identified by different tabs. In each page, all the target images may be displayed at once, or a predetermined number of target images may be displayed at a time; when an operation for switching the display is detected (for example, clicking a "next page" button, or a switching operation automatically triggered when the page is slid to the bottom), the next predetermined number of target images is displayed.
As a further alternative, a list of matching items may be displayed on a target screen of the target terminal, where one matching item in the list of matching items includes object indication information and a sub-target image, the object indication information is used to indicate a target image object corresponding to the matching item, and the sub-target image is a target image matching the target image object indicated by the object indication information.
All the target images can be integrated into one page to be displayed, the integration mode can be a matching item list, one matching item in the matching item list comprises object indication information and sub-target images, the object indication information is used for indicating a target image object corresponding to the matching item, and the sub-target images are target images matched with the target image object indicated by the object indication information. Each matching entry may include a display control button that may be clicked to control the display or hiding of the sub-target image.
The image processing method in this embodiment is explained below with reference to the following example. In the example, an artificial intelligence algorithm is adopted to extract image description vectors and locate image feature objects, so that suitable areas are automatically framed for the user to perform a multi-feature matching query. The image processing method is described below with a trademark image as the reference image.
As shown in fig. 4, the image processing method includes the steps of:
step S402, positioning the characteristic object and extracting the characteristic.
When a user needs to know whether a certain trademark picture has an approximate trademark, the user uploads the trademark picture on a retrieval platform/tool. After clicking the upload button, the user selects the picture to be uploaded or inputs the picture address, the picture is uploaded to the platform/tool, and the picture is displayed to the user on the front-end interface.
The platform background can extract the image object of the trademark picture through the neural network model. For the neural network model, after being trained by a large number of figures, the neural network model can have the capability of key region (image object) positioning and feature extraction on the trademark picture. The detection result of the key area may include a graphic box of the key area (as shown in fig. 5).
And S404, displaying the result and screening the user.
The platform background returns the detected graphic frame to the front end, and the user can perform retention or removal operation on the graphic frame. Subsequent searches will be performed according to the characteristics of the graphics within the box.
Step S406, multi-feature matching query.
Since the user is allowed to select one or more boxes, the background service performs a similarity lookup for multiple features, and the final result is consolidated across those features. First, pictures that appear multiple times in the result are weighted and scored according to their similarity to each retrieval feature; then, pictures that appear only once are scored by their similarity to the retrieval feature. Finally, the scores are ranked from high to low and presented to the user.
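The merging rule just described can be sketched as follows; the data layout and function names are assumptions for illustration only.

    from collections import defaultdict

    def merge_results(per_feature_hits, feature_weights):
        # per_feature_hits: one dict per retrieval feature, mapping picture id -> similarity.
        hits_per_picture = defaultdict(list)
        for hits, weight in zip(per_feature_hits, feature_weights):
            for pic_id, sim in hits.items():
                hits_per_picture[pic_id].append((sim, weight))
        scores = {}
        for pic_id, entries in hits_per_picture.items():
            if len(entries) > 1:
                # Picture appears for several retrieval features: weighted score.
                scores[pic_id] = sum(sim * w for sim, w in entries)
            else:
                # Picture appears only once: scored by its single similarity.
                scores[pic_id] = entries[0][0]
        # Rank from high to low for presentation to the user.
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)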
In this example, retrieval is based on local areas of the trademark picture, and these areas are the most distinctive individual objects obtained through the neural network. Compared with a method based on global image similarity, the image processing method in this example can therefore find trademarks that adopt similar design elements more accurately, which greatly improves the accuracy of trademark picture retrieval while keeping the steps efficient and convenient. Allowing the user to edit the individual objects automatically identified by the AI further improves retrieval efficiency and precision.
With the present embodiment, a reference image for retrieval is acquired; acquiring an image object contained in a reference image, wherein the image object is identified by using an image object identification model, and the image object identification model is obtained by using a sample image to perform machine training; acquiring a selected target image object in the identified image objects; and searching the target image matched with the target image object, thereby accurately searching the target image designed by adopting the similar image object with the reference image and improving the accuracy of the search result.
As an alternative embodiment, acquiring the selected target image object of the identified image objects includes:
s1, displaying the image object on the screen of the target terminal;
s2, a selecting operation performed on one or more of the image objects is detected, wherein the selected one or more image objects are target image objects.
According to the embodiment, the image object is displayed on the screen of the target terminal, the selection operation of the target object is detected to obtain the target graphic object, and the target image object can be obtained in a man-machine interaction mode, so that the accuracy of target image retrieval is improved, and the user experience is improved.
As an alternative embodiment, the number of the target image objects is one or more, and in the case where there are a plurality of target image objects, retrieving the target image matching the target image object includes:
s1, respectively retrieving a target image matching each of the plurality of target image objects; or,
s2, combining at least two target image objects in the plurality of target image objects to obtain at least one combined image object; a target image matching the at least one combined image object is retrieved.
By the embodiment, the target image is searched in a single search mode or a combined search mode, and the flexibility of target image searching is improved.
As an alternative embodiment, after retrieving the target image matching the target image object, the method further comprises:
s1, when a plurality of target images exist, sorting the target images in the sequence of high-to-low similarity with the target image object;
s2, displaying the sorted plurality of target images on the screen of the target terminal.
Optionally, the sorting the plurality of target images in order of similarity to the target image object from high to low includes:
s1, when the number of the target image objects is plural, calculating the target similarity between each of the plural target images and each of the plural target image objects;
s2, respectively carrying out weighted summation on the target similarity of each target image in the target images and each target image object in the target image objects to obtain the weighted similarity corresponding to each target image in the target images;
s3, the plurality of target images are sorted according to the weighted similarity from high to low.
Through the embodiment, the target images are sequenced and displayed according to the similarity with the target image object, and the similar images can be displayed in front, so that a user can quickly locate a desired image, and the user experience is improved.
As an alternative embodiment, after retrieving the target image matching the target image object, the method further comprises:
s1, in case that there are a plurality of object image objects, displaying an object image matching the first image object in a first page on an object screen of the object terminal;
s2, in a case where a target operation performed on the target button on the target screen is detected, switching to a second page and displaying a target image matching a second image object in the second page, wherein the plurality of target image objects include the first image object and the second image object.
Through the embodiment, the target images matched with different target image objects are displayed through different pages, so that the target images can be clearly and definitely displayed, and the user experience is improved.
As an alternative embodiment, after retrieving the target image matching the target image object, the method further comprises:
and S1, in the case that the target image object is multiple, displaying a matching item list on a target screen of the target terminal, wherein one matching item in the matching item list comprises object indication information and a sub-target image, the object indication information is used for indicating the target image object corresponding to the matching item, and the sub-target image is the target image matched with the target image object indicated by the object indication information.
By the embodiment, the target images matched with different target image objects are displayed in a classified manner by displaying the matching item list, so that the target images can be clearly and definitely displayed, and the user experience is improved.
As an alternative embodiment, acquiring the image object contained in the reference image comprises:
s1, sending the obtained reference image to a server, wherein the server is used for identifying the image object contained in the reference image by using an image object identification model;
and S2, receiving a target message sent by the server, wherein the target message carries the image object.
Through the embodiment, the server identifies the image object in the reference image through interaction with the server, so that the requirements on software and hardware of the target terminal are reduced, and the development cost is reduced.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus for implementing the above-described image processing method, as shown in fig. 6, the apparatus including:
(1) a first acquisition unit 602 configured to acquire a reference image for retrieval;
(2) a second obtaining unit 604, configured to obtain an image object included in the reference image, where the image object is identified by using an image object identification model, and the image object identification model is a model obtained by performing machine training using a sample image;
(3) a third acquiring unit 606 for acquiring a selected target image object among the recognized image objects;
(4) a retrieving unit 608 for retrieving the target image matching the target image object.
Alternatively, the first obtaining unit 602 may be configured to perform the step S202, the second obtaining unit 604 may be configured to perform the step S204, the third obtaining unit 606 may be configured to perform the step S206, and the retrieving unit 608 may be configured to perform the step S208.
The execution subject of each step may be a target terminal, and the target terminal may include but is not limited to: mobile phones, tablet computers, notebook computers, desktop computers, and the like.
Optionally, the above image processing apparatus may be applied, but is not limited, to the retrieval of similar images, for example, the retrieval of similar trademark images.
In the related art, similar image retrieval is usually implemented based on global similarity: a feature vector of the reference image is extracted from the reference image, and the extracted feature vector is used to search an image database for similar feature vectors, so as to retrieve target images similar to the reference image. Because the extracted feature vector has insufficient capability to represent the reference image, this approach suffers from low retrieval accuracy.
With the present embodiment, a reference image for retrieval is acquired; acquiring an image object contained in a reference image, wherein the image object is identified by using an image object identification model, and the image object identification model is obtained by using a sample image to perform machine training; acquiring a selected target image object in the identified image objects; and searching the target image matched with the target image object, thereby accurately searching the target image designed by adopting the similar image object with the reference image and improving the accuracy of the search result.
As an alternative embodiment, the third obtaining unit 606 includes:
(1) the display module is used for displaying the image object on a screen of the target terminal;
(2) the detection module is used for detecting the selection operation executed on one or more image objects in the image objects, wherein the selected one or more image objects are target image objects.
According to the embodiment, the image object is displayed on the screen of the target terminal, the selection operation of the target object is detected to obtain the target graphic object, and the target image object can be obtained in a man-machine interaction mode, so that the accuracy of target image retrieval is improved, and the user experience is improved.
As an alternative embodiment, the retrieving unit 608 includes:
the first retrieval module is used for respectively retrieving, in the case that there are a plurality of target image objects, a target image matching each of the plurality of target image objects; or,
(1) the combination module is used for combining at least two target image objects in the target image objects under the condition that the target image objects are multiple to obtain at least one combined image object; (2) and the second retrieval module is used for retrieving the target image matched with the at least one combined image object.
By the embodiment, the target image is searched in a single search mode or a combined search mode, and the flexibility of target image searching is improved.
As an alternative embodiment, the above apparatus further comprises:
(1) the sorting unit is used for sorting the target images in the sequence from high similarity to low similarity of the target image objects under the condition that the target images are multiple after the target images matched with the target image objects are searched;
(2) and the display unit is used for displaying the sequenced target images on a screen of the target terminal.
Optionally, the sorting unit comprises:
a calculating module, configured to calculate, when the number of target image objects is multiple, a target similarity between each of the multiple target images and each of the multiple target image objects, respectively;
the summing module is used for respectively carrying out weighted summation on the target similarity of each target image in the target images and each target image object in the target image objects to obtain the weighted similarity corresponding to each target image in the target images;
and the sequencing module is used for sorting the plurality of target images according to the weighted similarity from high to low.
Through the embodiment, the target images are sequenced and displayed according to the similarity with the target image object, and the similar images can be displayed in front, so that a user can quickly locate a desired image, and the user experience is improved.
As an alternative embodiment, the above apparatus further comprises:
(1) a first display unit configured to display the target image matching a first image object in a first page on a target screen of a target terminal in a case where there are a plurality of target image objects after retrieving the target image matching the target image object; (2) a second display unit configured to switch to a second page and display a target image matching a second image object in the second page in a case where a target operation performed on a target button on the target screen is detected, wherein the plurality of target image objects include the first image object and the second image object; or,
and a third display unit, configured to display a list of matching items on a target screen of a target terminal in a case where a plurality of target image objects are present after retrieving the target image that matches the target image object, where one matching item in the list of matching items includes object indication information and a sub-target image, the object indication information is used to indicate the target image object that corresponds to the matching item, and the sub-target image is a target image that matches the target image object indicated by the object indication information.
Through the embodiment, the target images matched with different target image objects are displayed through different pages, so that the target images can be clearly and definitely displayed, and the user experience is improved.
By the embodiment, the target images matched with different target image objects are displayed in a classified manner by displaying the matching item list, so that the target images can be clearly and definitely displayed, and the user experience is improved.
As an alternative embodiment, the second obtaining unit 604 includes:
a sending module, used for sending the acquired reference image to a server, wherein the server is used for identifying an image object contained in the reference image by using an image object identification model;
and the receiving module is used for receiving a target message sent by the server, wherein the target message carries the image object.
Through the embodiment, the server identifies the image object in the reference image through interaction with the server, so that the requirements on software and hardware of the target terminal are reduced, and the development cost is reduced.
According to still another aspect of an embodiment of the present invention, there is also provided an image processing system. As shown in fig. 7, the image processing system includes: a target terminal 702 and a server 704, wherein the target terminal 702 includes any one of the image processing apparatuses described above, and the server 704 is configured to identify an image object included in a reference image using an image object identification model.
According to still another aspect of an embodiment of the present invention, there is also provided a storage medium. The storage medium has stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the above-described method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a reference image for retrieval;
s2, acquiring an image object contained in the reference image, wherein the image object is identified by using an image object identification model, and the image object identification model is obtained by using a sample image to perform machine training;
s3, acquiring a selected target image object in the identified image objects;
s4, a target image matching the target image object is retrieved.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-mentioned image processing method, as shown in fig. 8, the electronic device including: processor 802, memory 804, display 806, user interface 808, transmission device 810, and the like. The memory has stored therein a computer program, and the processor is arranged to execute the steps of any of the above method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic device may be a user terminal.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a reference image for retrieval;
s2, acquiring an image object contained in the reference image, wherein the image object is identified by using an image object identification model, and the image object identification model is obtained by using a sample image to perform machine training;
s3, acquiring a selected target image object in the identified image objects;
s4, a target image matching the target image object is retrieved.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 8 is only an illustration and does not limit the structure of the above electronic device; the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. For example, the electronic device may also include more or fewer components (e.g., a network interface, etc.) than shown in fig. 8, or have a different configuration from that shown in fig. 8.
The memory 804 may be used to store software programs and modules, such as program instructions/modules corresponding to the image processing method and apparatus in the embodiments of the present invention, and the processor 802 executes various functional applications and data processing by running the software programs and modules stored in the memory 804, so as to implement the image processing method. The memory 804 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 804 can further include memory located remotely from the processor 802, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 810 is used for receiving or transmitting data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 810 includes a network adapter (NIC) that can be connected to a router and other network devices via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 810 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display 806 is used to display the image objects and the target images, and the user interface 808 is used to acquire input operation instructions, such as a selection command for selecting a target image object or a search command for triggering a retrieval.
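As a rough sketch only, the interaction between the display 806 and the user interface 808 might look like the following; the class and method names are hypothetical and do not correspond to any actual module of the device.

from typing import Callable, Dict, List

class ObjectSelectionUI:
    def __init__(self, image_objects: List[str]) -> None:
        self.image_objects = image_objects      # labels rendered on the display
        self.selected: List[str] = []           # target image objects chosen by the user

    def on_select(self, index: int) -> None:
        # Selection command: mark the tapped image object as a target image object.
        label = self.image_objects[index]
        if label not in self.selected:
            self.selected.append(label)

    def on_search(self, search_fn: Callable[[str], List[str]]) -> Dict[str, List[str]]:
        # Search command: retrieve matching target images for every selected object.
        return {label: search_fn(label) for label in self.selected}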
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also fall within the protection scope of the present invention.

Claims (16)

1. An image processing method, comprising:
acquiring a reference image for retrieval;
acquiring an image object contained in the reference image, wherein the image object is identified by using an image object identification model, and the image object identification model is obtained by using a sample image to perform machine training;
acquiring a selected target image object in the identified image objects;
retrieving a target image that matches the target image object.
2. The method of claim 1, wherein obtaining the selected target image object of the identified image objects comprises:
displaying the image object on a screen of a target terminal;
detecting a selection operation performed on one or more image objects of the image objects, wherein the selected one or more image objects are the target image objects.
3. The method according to claim 1, wherein there are one or more target image objects, and in a case where there are a plurality of target image objects, retrieving the target image matching the target image object comprises:
retrieving the target image matching each of the plurality of target image objects, respectively; or,
combining at least two target image objects of the plurality of target image objects to obtain at least one combined image object, and retrieving the target image that matches the at least one combined image object.
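For illustration only, the two alternatives above can be sketched as follows; the search callables are hypothetical placeholders for the retrieval back end and are not defined by this disclosure.

from itertools import combinations
from typing import Callable, Dict, List, Sequence, Tuple

def retrieve_per_object(
    target_objects: Sequence[str],
    search: Callable[[str], List[str]],
) -> Dict[str, List[str]]:
    # Alternative 1: retrieve target images for each target image object separately.
    return {obj: search(obj) for obj in target_objects}

def retrieve_combined(
    target_objects: Sequence[str],
    search_combined: Callable[[Tuple[str, ...]], List[str]],
) -> Dict[Tuple[str, ...], List[str]]:
    # Alternative 2: combine at least two target image objects into combined image
    # objects and retrieve target images matching each combination.
    return {pair: search_combined(pair) for pair in combinations(target_objects, 2)}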
4. The method of claim 1, wherein after retrieving a target image that matches the target image object, the method further comprises:
in a case where there are a plurality of target images, sorting the plurality of target images in descending order of similarity to the target image object;
and displaying the sorted target images on a screen of a target terminal.
5. The method of claim 4, wherein sorting the plurality of target images in descending order of similarity to the target image object comprises:
in a case where there are a plurality of target image objects, calculating a target similarity between each target image in the plurality of target images and each target image object in the plurality of target image objects;
performing, for each target image in the plurality of target images, a weighted summation of its target similarities with the plurality of target image objects, so as to obtain a weighted similarity corresponding to each target image;
and sorting the plurality of target images in descending order of the weighted similarity.
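For illustration only, the weighted ranking recited above can be sketched as follows; the similarity function and the per-object weights are hypothetical inputs, not fixed by this disclosure.

from typing import Callable, Dict, List, Sequence, Tuple

def rank_by_weighted_similarity(
    target_images: Sequence[str],
    target_objects: Sequence[str],
    similarity: Callable[[str, str], float],   # target similarity between an image and an object
    weights: Dict[str, float],                 # one weight per target image object
) -> List[Tuple[str, float]]:
    ranked: List[Tuple[str, float]] = []
    for image in target_images:
        # Weighted summation of the target similarities of this image with every target image object.
        weighted = sum(weights[obj] * similarity(image, obj) for obj in target_objects)
        ranked.append((image, weighted))
    # Sort the target images in descending order of weighted similarity.
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked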
6. The method of claim 1, wherein after retrieving the target image that matches the target image object, the method further comprises:
in a case where there are a plurality of target image objects, displaying the target image matching a first image object in a first page on a target screen of a target terminal;
in a case where a target operation performed on a target button on the target screen is detected, switching to a second page and displaying the target image matching a second image object in the second page, wherein the plurality of target image objects include the first image object and the second image object.
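For illustration only, the page-switching behaviour above can be sketched as follows; the class and field names are hypothetical.

from typing import Dict, List

class ResultPager:
    def __init__(self, results_per_object: Dict[str, List[str]]) -> None:
        self.objects = list(results_per_object)   # e.g. the first and second image objects
        self.results = results_per_object
        self.page = 0                             # the first page is shown initially

    def current_page(self) -> List[str]:
        # Target images matching the image object bound to the current page.
        return self.results[self.objects[self.page]]

    def on_target_button(self) -> None:
        # A target operation on the target button switches to the next page.
        self.page = (self.page + 1) % len(self.objects)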
7. The method of claim 1, wherein after retrieving the target image that matches the target image object, the method further comprises:
displaying a matching item list on a target screen of a target terminal in a case where there are a plurality of target image objects, wherein one matching item in the matching item list comprises object indication information and a sub-target image, the object indication information is used for indicating the target image object corresponding to the matching item, and the sub-target image is the target image matching the target image object indicated by the object indication information.
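For illustration only, one entry of such a matching item list might be represented as follows; the field names are hypothetical.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MatchingItem:
    object_indication: str        # indicates the target image object this item corresponds to
    sub_target_images: List[str]  # target images matched to that target image object

def build_matching_list(results: Dict[str, List[str]]) -> List[MatchingItem]:
    # results maps each target image object to the target images retrieved for it.
    return [MatchingItem(obj, images) for obj, images in results.items()]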
8. The method according to any one of claims 1 to 7, wherein acquiring the image object contained in the reference image comprises:
sending the obtained reference image to a server, wherein the server is used for identifying the image object contained in the reference image by using an image object identification model;
and receiving a target message sent by the server, wherein the target message carries the image object.
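For illustration only, the client-side exchange described above might look like the following; the endpoint path and the JSON field name are hypothetical, and the requests HTTP library is assumed to be available.

import requests  # third-party HTTP client, assumed available

def request_image_objects(image_path: str, server_url: str) -> list:
    # Send the obtained reference image to the server for identification.
    with open(image_path, "rb") as f:
        response = requests.post(f"{server_url}/identify", files={"image": f}, timeout=10)
    response.raise_for_status()
    # The target message returned by the server carries the identified image objects.
    return response.json().get("image_objects", [])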
9. An image processing apparatus characterized by comprising:
a first acquisition unit configured to acquire a reference image for retrieval;
a second obtaining unit, configured to obtain an image object included in the reference image, where the image object is identified by using an image object identification model, and the image object identification model is obtained by performing machine training using a sample image;
a third acquiring unit, configured to acquire a selected target image object from the identified image objects;
and the retrieval unit is used for retrieving the target image matched with the target image object.
10. The apparatus of claim 9, wherein the third obtaining unit comprises:
the display module is used for displaying the image object on a screen of the target terminal;
a detection module, configured to detect a selection operation performed on one or more image objects of the image objects, where the selected one or more image objects are the target image objects.
11. The apparatus of claim 9, wherein the retrieving unit comprises:
a first retrieval module, configured to retrieve, in a case where there are a plurality of target image objects, the target image that matches each of the plurality of target image objects, respectively; or,
the combination module is used for combining at least two target image objects in the target image objects under the condition that the target image objects are multiple to obtain at least one combined image object; a second retrieval module for retrieving the target image matching the at least one combined image object.
12. The apparatus of claim 9, further comprising:
a first display unit, configured to display the target image matching a first image object in a first page on a target screen of a target terminal in a case where there are a plurality of target image objects after retrieving the target image matching the target image object; a second display unit, configured to switch to a second page and display the target image matching a second image object in the second page in a case where a target operation performed on a target button on the target screen is detected, wherein the plurality of target image objects include the first image object and the second image object; or,
and a third display unit, configured to display a list of matching items on a target screen of a target terminal in a case where a plurality of target image objects are present after retrieving the target image that matches the target image object, where one matching item in the list of matching items includes object indication information and a sub-target image, the object indication information is used to indicate the target image object that corresponds to the matching item, and the sub-target image is a target image that matches the target image object indicated by the object indication information.
13. The apparatus according to any one of claims 9 to 12, wherein the second obtaining unit comprises:
a sending module, configured to send the obtained reference image to a server, where the server is configured to identify the image object included in the reference image using an image object identification model;
and the receiving module is used for receiving a target message sent by the server, wherein the target message carries the image object.
14. An image processing system, comprising: a target terminal comprising the image processing apparatus of any one of claims 9 to 13, and a server for identifying an image object contained in a reference image by using an image object identification model.
15. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when executed.
16. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 8 by means of the computer program.
CN201811005314.5A 2018-08-30 2018-08-30 Image processing method, device and system, storage medium and electronic device Pending CN110929057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811005314.5A CN110929057A (en) 2018-08-30 2018-08-30 Image processing method, device and system, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811005314.5A CN110929057A (en) 2018-08-30 2018-08-30 Image processing method, device and system, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN110929057A true CN110929057A (en) 2020-03-27

Family

ID=69854897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811005314.5A Pending CN110929057A (en) 2018-08-30 2018-08-30 Image processing method, device and system, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110929057A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329797A (en) * 2020-11-13 2021-02-05 杭州海康威视数字技术股份有限公司 Target object retrieval method, device, server and storage medium
CN113420170A (en) * 2021-07-15 2021-09-21 宜宾中星技术智能***有限公司 Multithreading storage method, device, equipment and medium for big data image
CN113468353A (en) * 2021-07-20 2021-10-01 柒久园艺科技(北京)有限公司 Tourist interaction method and device based on graphics, electronic equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639858A (en) * 2009-08-21 2010-02-03 深圳创维数字技术股份有限公司 Image search method based on target area matching
CN103415868A (en) * 2011-03-11 2013-11-27 欧姆龙株式会社 Image processing device, image processing method and control program
US20150120760A1 (en) * 2013-10-31 2015-04-30 Adobe Systems Incorporated Image tagging
EP3021238A1 (en) * 2014-11-17 2016-05-18 Ricoh Company, Ltd. Information processing apparatus, information processing system, and information processing method
CN106202189A (en) * 2016-06-27 2016-12-07 乐视控股(北京)有限公司 A kind of image search method and device
US20170039417A1 (en) * 2015-08-05 2017-02-09 Canon Kabushiki Kaisha Image recognition method, image recognition apparatus, and recording medium
CN106897372A (en) * 2017-01-17 2017-06-27 腾讯科技(上海)有限公司 voice inquiry method and device

Similar Documents

Publication Publication Date Title
CN109086394B (en) Search ranking method and device, computer equipment and storage medium
CN110019896B (en) Image retrieval method and device and electronic equipment
US20150169527A1 (en) Interacting method, apparatus and server based on image
CN105160545B (en) Method and device for determining release information style
US11531882B2 (en) Method and system for automatically classifying images
CN103995889A (en) Method and device for classifying pictures
CN108228720B (en) Identify method, system, device, terminal and the storage medium of target text content and original image correlation
CN110162454B (en) Game running method and device, storage medium and electronic device
CN110929057A (en) Image processing method, device and system, storage medium and electronic device
CN108228421A (en) data monitoring method, device, computer and storage medium
CN112328823A (en) Training method and device for multi-label classification model, electronic equipment and storage medium
CN110929058B (en) Trademark picture retrieval method and device, storage medium and electronic device
CN111552767A (en) Search method, search device and computer equipment
CN112306347B (en) Image editing method, image editing device and electronic equipment
CN107341139A (en) Multimedia processing method and device, electronic equipment and storage medium
CN112394861A (en) Page jump method and device, storage medium and electronic device
CN115809371A (en) Learning demand determination method and system based on data analysis
CN114936301A (en) Intelligent household building material data management method, device, equipment and storage medium
CN112348107A (en) Image data cleaning method and apparatus, electronic device, and medium
CN105989114A (en) Collection content recommendation method and terminal
CN111126457A (en) Information acquisition method and device, storage medium and electronic device
CN113037925B (en) Information processing method, information processing apparatus, electronic device, and readable storage medium
CN112052352B (en) Video ordering method, device, server and storage medium
CN110895555B (en) Data retrieval method and device, storage medium and electronic device
CN113377970A (en) Information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210113

Address after: 17c, 14 / F, unit 3, building 3, No.48, Zhichun Road, Haidian District, Beijing 100098

Applicant after: Beijing Blue lantern fish Intelligent Technology Co.,Ltd.

Address before: 1411 Junyue Pavilion, 9 Yannan Road, Fuqiang community, Huaqiangbei street, Futian District, Shenzhen, Guangdong 518031

Applicant before: Shenzhen Blue Lantern Fish Intelligent Technology Co.,Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327