CN112200783A - Commodity image processing method and device

Info

Publication number
CN112200783A
Authority
CN
China
Prior art keywords
image
sub
commodity
similarity
images
Prior art date
Legal status
Granted
Application number
CN202011059920.2A
Other languages
Chinese (zh)
Other versions
CN112200783B (en)
Inventor
隋心
张静军
姜琳
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202011059920.2A priority Critical patent/CN112200783B/en
Publication of CN112200783A publication Critical patent/CN112200783A/en
Application granted granted Critical
Publication of CN112200783B publication Critical patent/CN112200783B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a commodity image processing method, which comprises the following steps: acquiring a first commodity image and a second commodity image, wherein the first commodity image and the second commodity image have the same size; dividing the first commodity image into at least two sub-images; and determining a first similarity between the first commodity image and the second commodity image by using the at least two sub-images. In this application, the weight of a sub-image located in the center region of the first commodity image is higher than the weight of a sub-image located in the edge region of the first commodity image. In this way, when the first similarity is calculated, the sub-images located in the central area of the first commodity image play a greater role, and the influence of the sub-images located in the edge area of the first commodity image on the first similarity is correspondingly weakened, so that the similarity between the first commodity image and the second commodity image can be determined accurately.

Description

Commodity image processing method and device
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for processing a commodity image.
Background
With the development of network shopping malls, users can purchase commodities in a network shopping mall and can learn related information about a commodity by browsing the commodity images displayed there.
In some scenarios, for example, in similar or identical product recommendation scenarios, it is necessary to determine the degree of similarity between product images. However, the similarity between the commodity images cannot be accurately determined by the current image processing method.
Therefore, a solution is urgently needed to accurately determine the similarity between the commodity images.
Disclosure of Invention
The technical problem to be solved by the present application is how to accurately determine the degree of similarity between commodity images; to this end, a commodity image processing method and device are provided.
In a first aspect, an embodiment of the present application provides a method for processing a commodity image, where the method includes:
acquiring a first commodity image and a second commodity image, wherein the sizes of the first commodity image and the second commodity image are the same;
dividing the first commodity image into at least two sub-images;
respectively determining an image area matched with each sub-image in the second commodity image aiming at each sub-image in the at least two sub-images;
acquiring the similarity of each sub-image and the image area matched with each sub-image in the second commodity image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
dividing the first commodity image into a × b sub-images in an a × b manner, wherein a and b are both natural numbers, a × b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the at least two sub-images include a first sub-image, and the determining the image area in the second commodity image that matches each sub-image includes:
determining the image area in the second commodity image that has the highest degree of matching with the first sub-image as the image area matched with the first sub-image.
In one implementation, the method further comprises:
respectively calculating the overlapping degree of each sub-image and the determined image area matched with each sub-image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image, wherein the determining comprises the following steps:
and determining the first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area in the second commodity image that matches each sub-image, the degree of overlap between each sub-image and the determined image area that matches each sub-image, and the weight of each sub-image.
In one implementation, the method further comprises:
dividing the first commodity image into c x d sub-images according to a c x d mode, wherein c x d is a natural number larger than 1, and both c and d are natural numbers; a is not equal to c, and/or, b is not equal to d;
for each sub-image in the c x d sub-images, respectively determining an image area in the second commodity image, which is matched with each sub-image;
acquiring the similarity of each sub-image in the c x d sub-images and the determined image area matched with each sub-image in the c x d sub-images in the second commodity image;
determining a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image area matched with each sub-image in the c x d sub-images in the second commodity image and the weight of each sub-image in the c x d sub-images;
and determining the third similarity of the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if c and d are both 2, or c × d is equal to 2, the weight of each sub-image in the c × d sub-images is the same; if c × d is greater than 2, and c and d are not both equal to 2, the weight of the sub-images in the c × d sub-images located in the center region of the first commodity image is higher than the weight of the sub-images in the c × d sub-images located in the edge region of the first commodity image.
In one implementation, the method further comprises:
when the first similarity of the first commodity image and the second commodity image is higher than a first threshold value, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
In a second aspect, an embodiment of the present application provides an apparatus for processing an image of a commodity, the apparatus including:
a first acquisition unit, configured to acquire a first commodity image and a second commodity image, wherein the image sizes of the first commodity image and the second commodity image are the same;
the first dividing unit is used for dividing the first commodity image into at least two sub-images;
the first determining unit is used for respectively determining an image area matched with each sub-image in the second commodity image aiming at each sub-image in the at least two sub-images;
the second acquisition unit is used for acquiring the similarity between each sub-image and the image area matched with each sub-image in the second commodity image;
a second determining unit, configured to determine a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and the image region matching each sub-image in the second commodity image, and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
In one implementation, the first dividing unit is configured to:
dividing the first commodity image into a × b sub-images in an a × b manner, wherein a and b are both natural numbers, a × b is a natural number greater than 2, and a and b are not both equal to 2.
In an implementation manner, the at least two sub-images include a first sub-image, and the first determining unit is configured to:
determining the image area in the second commodity image that has the highest degree of matching with the first sub-image as the image area matched with the first sub-image.
In one implementation, the apparatus further comprises:
a first calculating unit, configured to calculate an overlapping degree of each sub-image and the determined image region matching the each sub-image, respectively;
the second determination unit is configured to:
and determining the first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area in the second commodity image that matches each sub-image, the degree of overlap between each sub-image and the determined image area that matches each sub-image, and the weight of each sub-image.
In one implementation, the apparatus further comprises:
the second dividing unit is used for dividing the first commodity image into c x d sub-images according to a c x d mode, wherein c x d is a natural number larger than 1, and both c and d are natural numbers; a is not equal to c, and/or, b is not equal to d;
a third determining unit, configured to determine, for each sub-image in the c × d sub-images, an image area in the second product image that matches each sub-image;
a third obtaining unit, configured to obtain a similarity between each sub-image in the c × d sub-images and the determined image region in the second product image that matches with each sub-image in the c × d sub-images;
a fourth determining unit, configured to determine a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image region matched with each sub-image in the c x d sub-images in the second commodity image, and the weight of each sub-image in the c x d sub-images;
a fifth determining unit, configured to determine a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if c and d are both 2, or c × d is equal to 2, the weight of each sub-image in the c × d sub-images is the same; if c × d is greater than 2, and c and d are not both equal to 2, the weight of the sub-images in the c × d sub-images located in the center region of the first commodity image is higher than the weight of the sub-images in the c × d sub-images located in the edge region of the first commodity image.
In one implementation, the apparatus further comprises:
a sixth determining unit, configured to determine that a product included in the first product image and a product included in the second product image are the same or similar product when the first similarity between the first product image and the second product image is higher than a first threshold.
In a third aspect, an embodiment of the present application provides a device for processing an image of a commodity, including a memory, one or more processors, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
acquiring a first commodity image and a second commodity image, wherein the sizes of the first commodity image and the second commodity image are the same;
dividing the first commodity image into at least two sub-images;
respectively determining an image area matched with each sub-image in the second commodity image aiming at each sub-image in the at least two sub-images;
acquiring the similarity of each sub-image and the image area matched with each sub-image in the second commodity image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
dividing the first commodity image into a × b sub-images in an a × b manner, wherein a and b are both natural numbers, a × b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the at least two sub-images include a first sub-image, and the determining the image area in the second commodity image that matches each sub-image includes:
determining the image area in the second commodity image that has the highest degree of matching with the first sub-image as the image area matched with the first sub-image.
In one implementation, the operations further comprise:
respectively calculating the overlapping degree of each sub-image and the determined image area matched with each sub-image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image, wherein the determining comprises the following steps:
and determining the first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area in the second commodity image that matches each sub-image, the degree of overlap between each sub-image and the determined image area that matches each sub-image, and the weight of each sub-image.
In one implementation, the operations further comprise:
dividing the first commodity image into c x d sub-images according to a c x d mode, wherein c x d is a natural number larger than 1, and both c and d are natural numbers; a is not equal to c, and/or, b is not equal to d;
for each sub-image in the c x d sub-images, respectively determining an image area in the second commodity image, which is matched with each sub-image;
acquiring the similarity of each sub-image in the c x d sub-images and the determined image area matched with each sub-image in the c x d sub-images in the second commodity image;
determining a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image area matched with each sub-image in the c x d sub-images in the second commodity image and the weight of each sub-image in the c x d sub-images;
and determining the third similarity of the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if c and d are both 2, or c × d is equal to 2, the weight of each sub-image in the c × d sub-images is the same; if c × d is greater than 2, and c and d are not both equal to 2, the weight of the sub-images in the c × d sub-images located in the center region of the first commodity image is higher than the weight of the sub-images in the c × d sub-images located in the edge region of the first commodity image.
In one implementation, the operations further comprise:
when the first similarity of the first commodity image and the second commodity image is higher than a first threshold value, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
In a fourth aspect, embodiments of the present application provide a computer-readable medium having stored thereon instructions, which, when executed by one or more processors, cause an apparatus to perform the method of any of the above first aspects.
Compared with the prior art, the embodiment of the application has the following advantages:
the embodiment of the application provides a commodity image processing method, which can accurately determine the similarity between a first commodity image and a second commodity image. Specifically, a first commodity image and a second commodity image may be acquired, the image sizes of the first commodity image and the second commodity image being the same. After the first commodity image is acquired, the first commodity image may be divided into at least two sub-images, after the first commodity image is divided into at least two sub-images, each sub-image of the at least two sub-images may be respectively matched with a second commodity image, an image area in the second commodity image, which is matched with each sub-image, is determined, and the similarity between each sub-image and the determined image area matched with each sub-image is acquired. And then, determining the similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the determined image area matched with each sub-image. In this application, when determining the similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image region matched with each sub-image, a corresponding weight may be given to each sub-image, and the first similarity between the first commodity image and the second commodity image may be determined according to the similarity between each sub-image and the determined image region matched with each sub-image and the weight of each sub-image. It is considered that the article is generally located at the center of the entire image for the article image, and may be a background image or a blank area for the edge portion. Therefore, in order to improve the accuracy of determining the similarity between the first commodity image and the second commodity image, in the present application, the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image. In this way, when calculating the first similarity, the similarity between the sub-image located in the central area of the first commodity image and the image area matching with the sub-image may play a more role, and the influence of the similarity between the sub-image located in the edge area of the first commodity image and the image area matching with the sub-image on the first similarity may be weakened accordingly. Therefore, the similarity between the first commodity image and the second commodity image can be accurately determined by the scheme.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a method for processing a commodity image according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a first commodity image according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another method for processing a commodity image according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a device for processing a commodity image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a client according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventor of the application finds that the user can know the related information of the commodity by browsing the commodity image in the network mall. In some scenarios, for example, in similar or identical commodity recommendation scenarios, the same or similar commodities may be recommended for the user in combination with the similarity between commodity images. For example, when the similarity between two commodity images is greater than a preset threshold, it is determined that the commodities corresponding to the two commodity images are the same or similar commodities.
The inventor of the present application has also found, through research, that the similarity between commodity images cannot be accurately determined by current image processing methods, which in turn means that whether the commodities corresponding to two commodity images are the same or similar commodities cannot be accurately determined. When the similarity between commodity images is determined with current image processing methods, all pixel points in the whole commodity image are treated indiscriminately. For example, the semantic relationships of the pixel points in the whole commodity image are used for segmentation and computation, and the similarity between two commodity images is determined on that basis. However, a commodity image has a certain specificity compared with other types of images such as landscape images. In a commodity image, the commodity is generally located in the center area of the image, so that the user can visually obtain appearance information of the commodity, while the edge area of the commodity image is generally a blank area or an image background. Therefore, if all pixel points in the whole commodity image are treated indiscriminately, the blank area or background area affects the accuracy of the similarity result. For example, for two commodity images whose commodities are different but whose image layouts are the same, the similarity determined with current image processing methods may be high.
In order to solve the above problem, an embodiment of the present application provides a method for processing a commodity image, which can accurately determine a similarity between two commodity images.
Various non-limiting embodiments of the present application are described in detail below with reference to the accompanying drawings.
Exemplary method
Referring to fig. 1, the figure is a schematic flowchart of a processing method of a commodity image according to an embodiment of the present application.
The method for processing the commodity image according to the embodiment of the present application may be executed by a controller or a processor having a data processing function, or may be executed by a device including the controller or the processor, and the embodiment of the present application is not particularly limited. The device including the controller or the processor includes, but is not limited to, a terminal device and a server.
In the present embodiment, the processing method of the product image shown in fig. 1 may include the following steps S101 to S105, for example.
S101: acquiring a first commodity image and a second commodity image, wherein the sizes of the first commodity image and the second commodity image are the same.
The first commodity image and the second commodity image in the embodiment of the present application are both images that include commodities. When determining the similarity between two commodity images, it is considered that if the sizes of the two images are different, the accuracy of the calculated similarity may be affected. Therefore, in the embodiment of the present application, the image sizes of the first commodity image and the second commodity image are the same.
It should be noted that, if the sizes of the acquired original images of the first product image and the second product image are not the same, before executing step S101 in the present application, the original images of the first product image and/or the second product image need to be processed in advance, so that the sizes of the processed images of the first product image and the second product image are the same.
In one example of an embodiment of the present application, the first commodity image and the second commodity image may be commodity images provided by merchants in an online mall. In yet another example, the first commodity image or the second commodity image may be obtained after size processing is performed on a commodity image provided by a merchant in an online shopping mall. For example, if the image sizes of the product image provided by the first merchant and the product image provided by the second merchant are different, in one example, the first product image provided by the first merchant may be acquired, the third product image provided by the second merchant may be acquired, and the size of the third product image may be processed to obtain the second product image having the same size as the image size of the first product image. In yet another example, a second product image provided by a first merchant may be acquired, a fourth product image provided by a second merchant may be acquired, and the fourth product image may be subjected to size processing to obtain a first product image having the same size as the image of the second product image.
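By way of illustration, a minimal sketch of the size-alignment step follows. It assumes OpenCV and that the second image is simply rescaled to the first image's size; the patent text does not prescribe a particular library or resizing strategy, and the function and variable names are only illustrative.

```python
import cv2

def load_same_size(path_a, path_b):
    """Load two commodity images and resize the second to the first's size."""
    img_a = cv2.imread(path_a)
    img_b = cv2.imread(path_b)
    h, w = img_a.shape[:2]
    if img_b.shape[:2] != (h, w):
        # cv2.resize expects the target size as (width, height)
        img_b = cv2.resize(img_b, (w, h))
    return img_a, img_b
```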
S102: the first merchandise image is divided into at least two sub-images.
In this embodiment of the present application, the first product image may be divided into at least two sub-images in an average division manner, or the first product image may be divided into at least two sub-images in a non-uniform division manner, which is not specifically limited in this embodiment of the present application. The at least two divided sub-images include a sub-image located in a center area of the first commodity image and a sub-image located in an edge area of the first commodity image.
In one example, the first commodity image may be divided into a × b sub-images in an average division manner, where a and b are both natural numbers, a × b is a natural number greater than 2, and a and b are not both equal to 2. In the following embodiments, unless otherwise specified, the scheme provided by the embodiments of the present application is described by taking the division of the first commodity image into a × b sub-images as an example.
In the embodiment of the present application, it is considered that in a commodity image the commodity is generally located at the center of the whole image, while the edge portion may be a background image or a blank area. Therefore, in order to improve the accuracy of determining the similarity between the first commodity image and the second commodity image, the first commodity image may be divided in an a × b manner to obtain a × b sub-images. When the similarity between the first commodity image and the second commodity image is calculated, the a × b sub-images are treated differently according to their relative positions in the first commodity image, so that the sub-images located in the central area of the first commodity image play a greater role in determining that similarity, and correspondingly the role of the sub-images located in the edge area of the first commodity image is weakened.
In the embodiment of the present application, dividing the first commodity image in the a × b manner means dividing the height of the first commodity image into a equal parts and the width of the first commodity image into b equal parts, so as to obtain a × b sub-images. In the embodiments of the present application, a and b may be equal or different, and the embodiments of the present application are not limited in this respect.
In the embodiment of the present application, it is considered that if a × b is equal to 2, for example, a is equal to 1 and b is equal to 2, or a is equal to 2 and b is equal to 1, the relative positions of the two divided sub-images in the first commodity image are equivalent, and there is no distinction between a central region and an edge region; therefore, a × b is greater than 2 in the present application. In addition, if a is equal to 2 and b is equal to 2, the relative positions of the four divided sub-images in the first commodity image are equivalent, and there is likewise no distinction between a central region and an edge region; therefore, in the present application, a and b are not both equal to 2.
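As an illustration of the a × b division described above, the following sketch splits an image array into a rows and b columns; handling the remainder pixels by integer division is an assumption of this sketch, not something specified in the text.

```python
import numpy as np

def divide_into_grid(image, a, b):
    """Split an image into a x b sub-images (a rows, b columns)."""
    assert a * b > 2 and not (a == 2 and b == 2), "a*b must exceed 2, a and b not both 2"
    h, w = image.shape[:2]
    sub_images = []
    for i in range(a):
        for j in range(b):
            top, bottom = i * h // a, (i + 1) * h // a
            left, right = j * w // b, (j + 1) * w // b
            sub_images.append(image[top:bottom, left:right])
    return sub_images
```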
S103: and respectively determining an image area matched with each sub-image in the second commodity image aiming at each sub-image in the at least two sub-images.
S104: and acquiring the similarity of each sub-image and the image area matched with each sub-image in the determined second commodity image.
In this embodiment of the application, after the first commodity image is divided in an a × b manner, for each sub-image in the a × b sub-images, an image region in the second commodity image, which is matched with each sub-image, may be respectively determined, and further, a similarity between each sub-image and the determined image region matched with each sub-image may be obtained.
In a specific implementation of S103, for example, a classical template matching method may be used to match each sub-image against the second commodity image, so as to obtain an image area in the second commodity image that matches each sub-image. For convenience of description, any one of the a × b sub-images is referred to as a first sub-image. In the embodiment of the present application, the image area in the second commodity image that matches the first sub-image refers to an image area with a relatively high image similarity to the first sub-image. In one example, it is considered that in practical applications, for a certain sub-image, for example the first sub-image, there may be more than one image area in the second commodity image with a high matching degree to the first sub-image. In an implementation manner of the embodiment of the present application, the image area in the second commodity image that best matches the first sub-image may be determined as the image area matched with the first sub-image. The template matching method itself is not described in detail here.
Regarding S104, it should be noted that, in an example, as described above, when determining an image area in the second product image that matches the first sub-image, an image area in the second product image that matches the first sub-image to the highest extent may be determined as the image area that matches the first sub-image. In this way, after S103 is executed, the similarity between the first sub-image and the image region matching the first sub-image can be directly obtained.
In yet another example, the similarity between the first sub-image and the image region matching the first sub-image may be calculated by using histogram matching, matrix decomposition, or other algorithms. The algorithms such as histogram matching and matrix decomposition will not be described in detail here.
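One hedged way to realize S103 and S104 together is OpenCV template matching, sketched below; the choice of cv2.TM_CCOEFF_NORMED as the similarity measure and the (x, y, w, h) box format returned here are assumptions made for illustration, not requirements of the method.

```python
import cv2

def best_match(sub_image, second_image):
    """Find the best-matching region of second_image for sub_image.

    Returns the region as (x, y, w, h) together with its similarity score s_i.
    """
    result = cv2.matchTemplate(second_image, sub_image, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    x, y = max_loc
    h, w = sub_image.shape[:2]
    return (x, y, w, h), float(max_val)
```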
S105: determining the first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the determined second commodity image and the weight of each sub-image; wherein: the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
After the similarity between each sub-image and the determined image area matched with each sub-image is obtained in S104, the similarity between the first commodity image and the second commodity image can be determined according to the similarity between each sub-image and the determined image area matched with each sub-image. In this application, when determining the similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image region matched with each sub-image, a corresponding weight may be given to each sub-image, and the first similarity between the first commodity image and the second commodity image may be determined according to the similarity between each sub-image and the determined image region matched with each sub-image and the weight of each sub-image.
In this application, the weight of the sub-images in the a × b sub-images located in the center region of the first commodity image is higher than the weight of the sub-images in the a × b sub-images located in the edge region of the first commodity image. In the present application, the "center region" and the "edge region" may be regarded as two relative concepts. If, of two sub-images, sub-image 1 is closer to the center of the first commodity image than sub-image 2, then sub-image 1 is considered to be located in the center area of the first commodity image relative to sub-image 2, and correspondingly sub-image 2 is considered to be located in the edge area of the first commodity image relative to sub-image 1.
In one example, the first similarity may be calculated using the following formula (1).
γ = ( ∑_{i=1}^{a×b} α_i · s_i ) / ( ∑_{i=1}^{a×b} α_i )    (1)
In equation (1):
γ is the first similarity;
i is the index of a sub-image in the first commodity image, and the value of i ranges from 1 to a × b;
s_i is the similarity between the i-th sub-image in the first commodity image and the image area in the second commodity image that matches the i-th sub-image;
α_i is the weight of the i-th sub-image in the first commodity image.
With respect to α_i, the first commodity image will now be described by taking division into 5 × 5 as an example.
Referring to fig. 2, fig. 2 is a schematic view of a first commodity image according to an embodiment of the present disclosure. As shown in fig. 2, in the present application, the first commodity image is divided into 25 sub-images in a 5 × 5 manner. In this application, the weight of the sub-image located in the center-most region of the first commodity image (i.e., the sub-image numbered 1 in fig. 2) may be set to 0.9; for the 8 sub-images located in the next-to-center region of the first commodity image (i.e., the sub-images numbered 2 to 9 in fig. 2), the weight may be set to 0.7; and for the 16 sub-images located in the edge region of the first commodity image (i.e., the sub-images numbered 10 to 25 in fig. 2), the weight may be set to 0.4.
Since the weight of the sub-images in the a × b sub-images located in the center region of the first commodity image is higher than the weight of those located in the edge region, when the first similarity is calculated, the similarity between a sub-image located in the center region of the first commodity image and its matching image area plays a greater role, and the influence of the similarity between a sub-image located in the edge region and its matching image area on the first similarity is correspondingly weakened, which effectively improves the accuracy of the first similarity.
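The following sketch ties the 5 × 5 weight layout of fig. 2 (0.9 for the centre cell, 0.7 for the ring around it, 0.4 for the outer ring) to formula (1); normalizing by the sum of weights keeps the result in [0, 1] and is an assumption of this sketch rather than a statement of the patent.

```python
import numpy as np

def grid_weights_5x5():
    """Position weights for a 5 x 5 division, following the fig. 2 example."""
    w = np.full((5, 5), 0.4)   # 16 edge sub-images (numbers 10-25 in fig. 2)
    w[1:4, 1:4] = 0.7          # 8 next-to-centre sub-images (numbers 2-9)
    w[2, 2] = 0.9              # centre-most sub-image (number 1)
    return w.ravel()

def first_similarity(similarities, weights):
    """Formula (1): weighted combination of per-sub-image similarities."""
    s = np.asarray(similarities, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * s).sum() / w.sum())
```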
In an example of the embodiment of the application, after the first similarity is obtained through calculation, whether the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities may further be determined according to the first similarity. In one implementation, if the first similarity is higher than a first threshold, the commodity included in the first commodity image and the commodity included in the second commodity image may be considered to be the same or similar commodities. The first threshold mentioned here may be set according to practical situations, and the embodiment of the present application is not particularly limited.
In addition, the weight of the sub-image in the a x b sub-image located in the center region of the first commodity image is higher than the weight of the sub-image in the a x b sub-image located in the edge region of the first commodity image. Thus:
in one example, if the similarity between the sub-image located in the center area of the first commodity image and the image area matching with the sub-image in the second commodity image is higher, but the similarity between the sub-image located in the edge area of the first commodity image and the image area matching with the sub-image in the second commodity image is lower. Since the weight of the sub-image located in the edge area of the first product image is low, the finally obtained first similarity is also high, for example, higher than the first threshold. For the commodity image, the similarity between the sub-image located in the edge area of the first commodity image and the image area matched with the sub-image in the second commodity image is low, which may be caused by the difference of the background images used for the first commodity image and the second commodity image. Therefore, according to the scheme, the commodity included in the first commodity image and the commodity included in the second commodity image can be accurately determined to be the same or similar commodities under the condition that the first commodity image and the second commodity image use different backgrounds.
In another example, if the similarity between the sub-image located in the center area of the first commodity image and the matching image area in the second commodity image is low, but the similarity between the sub-image located in the edge area of the first commodity image and the matching image area in the second commodity image is high, then, since the weight of the sub-image located in the center area of the first commodity image is higher and the weight of the sub-image located in the edge area is lower, the finally obtained first similarity is still low, for example lower than the first threshold. For commodity images, a high similarity between the sub-image located in the edge area of the first commodity image and the matching image area in the second commodity image may simply be caused by the first commodity image and the second commodity image using the same background image. Therefore, with the present solution, it can be accurately determined that the commodity included in the first commodity image and the commodity included in the second commodity image are not the same or similar commodities even when the same background is used for the first commodity image and the second commodity image.
In some embodiments, as previously described, the edges of the merchandise image are mostly background images or blank areas. Therefore, when determining the image area in the second product image that matches the first sub-image, the determined image area may not be very accurate. For example, the image area in the upper right corner of the second product image matches the sub-image in the upper left corner of the first product image, and both are blank areas, and the similarity between both is high. For this case, the high similarity affects the accuracy of the first similarity.
In order to avoid the above problem and further improve the accuracy of the calculated similarity between the first commodity image and the second commodity image, in an implementation manner of the embodiment of the application, a degree of overlap (IOU, intersection over union) between each sub-image and the image region in the second commodity image that matches that sub-image may be further calculated, where the degree of overlap between the first sub-image and the image region in the second commodity image that matches the first sub-image may be calculated by using the following formula (2).
IOU = Area(A_1 ∩ B_1) / Area(A_1 ∪ B_1)    (2)
In equation (2):
A_1 is the first sub-image in the first commodity image;
B_1 is the image area in the second commodity image that matches the first sub-image;
Area(A_1 ∩ B_1) is the area of the intersection of the first sub-image and the image area in the second commodity image that matches the first sub-image;
Area(A_1 ∪ B_1) is the area of the union of the first sub-image and the image area in the second commodity image that matches the first sub-image.
As can be seen from formula (2), the closer the position of the matching image area in the second commodity image is to the position of the first sub-image in the first commodity image, the larger the IOU is; conversely, the farther apart they are, the smaller the IOU is. The value of the IOU is greater than or equal to 0 and less than or equal to 1. For example, if the first sub-image is the sub-image at the top left corner of the first commodity image and the image area in the second commodity image that matches the first sub-image is located at the top right corner of the second commodity image, the value of the IOU is 0, because the numerator of formula (2) is 0 in this case. For another example, if the first sub-image is the sub-image at the top left corner of the first commodity image and the matching image area is located at the top left corner of the second commodity image, the value of the IOU is 1, because the numerator and denominator of formula (2) are equal in this case.
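A small sketch of the overlap degree in formula (2) follows, with both regions expressed as (x, y, w, h) boxes in the shared image coordinate system; the box format is an assumption carried over from the earlier matching sketch.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    inter_w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```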
After the degree of overlap between each sub-image and the determined image region matching that sub-image has been calculated, the first similarity may be calculated in combination with these degrees of overlap.
In one example, the first similarity may be calculated using the following formula (3).
γ = ( ∑_{i=1}^{a×b} α_i · IOU_i · s_i ) / ( ∑_{i=1}^{a×b} α_i )    (3)
In equation (3):
γ is the first similarity;
i is the index of a sub-image in the first commodity image, and the value of i ranges from 1 to a × b;
s_i is the similarity between the i-th sub-image in the first commodity image and the image area in the second commodity image that matches the i-th sub-image;
α_i is the weight of the i-th sub-image in the first commodity image;
IOU_i is the degree of overlap between the i-th sub-image in the first commodity image and the image area in the second commodity image that matches the i-th sub-image.
It will be appreciated that for the i-th sub-image, the larger IOU_i is, the greater the influence of s_i on the first similarity; conversely, the smaller IOU_i is, the smaller the influence of s_i on the first similarity. As described above, a small IOU_i indicates that the determined image area in the second commodity image matching the i-th sub-image may not be accurate; in this case, formula (3) weakens the influence of s_i on the first similarity, thereby improving the accuracy of the first similarity.
For example, suppose the image area at the top right corner of the second commodity image is matched with the first sub-image at the top left corner of the first commodity image, and the similarity between the two is 0.9. However, it is apparent that matching the sub-image in the upper left corner of the first commodity image with an image area in the upper right corner of the second commodity image is incorrect. Accordingly, if the value 0.9 participated in the calculation of the first similarity, a certain calculation error would be introduced. By using formula (3), since the IOU between the sub-image in the upper left corner of the first commodity image and the image area in the upper right corner of the second commodity image is 0, the aforementioned 0.9 does not actually participate in the calculation of the first similarity, thereby reducing the error and improving the accuracy of the first similarity.
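For illustration, formula (3) can be sketched as below; as with the earlier sketch, normalizing by the sum of the position weights is an assumption, the point being that a term with IOU_i = 0 contributes nothing to the numerator.

```python
import numpy as np

def first_similarity_with_iou(similarities, ious, weights):
    """Formula (3): similarities weighted by both position weight and overlap."""
    s = np.asarray(similarities, dtype=float)
    u = np.asarray(ious, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * u * s).sum() / w.sum())
```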
In an example of the embodiment of the application, in order to further improve the accuracy of the determined similarity between the first commodity image and the second commodity image, the first commodity image may be divided in different manners, the similarity between the first commodity image and the second commodity image is calculated according to different dividing manners, a plurality of similarities are obtained, and the similarity between the first commodity image and the second commodity image is finally determined according to the plurality of similarities.
In an example, after the first similarity between the first commodity image and the second commodity image is obtained, the method for processing a commodity image provided in this embodiment of the present application may further include S201-S205 shown in fig. 3, where fig. 3 is a flowchart illustrating a further method for processing a commodity image provided in this embodiment of the present application.
S201: dividing the first commodity image into c x d sub-images according to the c x d mode.
In this embodiment of the application, if the first commodity image is divided into a × b sub-images according to an average division manner when S102 is executed, c × d is a natural number greater than 1, and c and d are both natural numbers; a is not equal to c, and/or b is not equal to d.
In the embodiment of the present application, the manner of dividing the first commodity image in S201 is different from the manner of dividing the first commodity image in S102. Therefore, a, b, c and d need to satisfy the condition that a is not equal to c, and/or b is not equal to d.
In the embodiments of the present application, c and d may be the same or different, and the embodiments of the present application are not particularly limited.
S202: and respectively determining an image area matched with each sub-image in the second commodity image for each sub-image in the c x d sub-images.
S203: and acquiring the similarity of each sub-image in the c x d sub-images and the image area matched with each sub-image in the c x d sub-images in the determined second commodity image.
S204: and determining a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image area matched with each sub-image in the c x d sub-images in the second commodity image and the weight of each sub-image in the c x d sub-images.
Regarding the weight of each sub-image in the c × d sub-images, it should be noted that, when c × d is equal to 2, the relative positions of the two divided sub-images in the first commodity image are equivalent, and there is no distinction between a central region and an edge region. Likewise, when c is equal to 2 and d is equal to 2, the relative positions of the four divided sub-images in the first commodity image are equivalent, and there is no distinction between a central region and an edge region. Therefore, if c and d are both 2, or c × d is equal to 2, the weight of each sub-image in the c × d sub-images is the same. Otherwise, the weight of the sub-images in the c × d sub-images located in the center region of the first commodity image is higher than the weight of the sub-images in the c × d sub-images located in the edge region of the first commodity image.
Regarding S202-S204, the implementation principle is similar to the implementation of S103-S105 above, and the detailed implementation can refer to the description part of S103-S105 above, and is not detailed here.
S205: and determining the third similarity of the first commodity image and the second commodity image according to the first similarity and the second similarity.
After the first similarity and the second similarity are obtained through calculation, the third similarity of the first commodity image and the second commodity image can be determined according to the first similarity and the second similarity. In one example, the sum of the first similarity and the second similarity may be directly determined as the third similarity; in another example, an average value of the first similarity and the second similarity may be determined as the third similarity, and the embodiment of the present application is not particularly limited.
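A minimal sketch of S205 follows; the text leaves open whether the third similarity is the sum or the average of the first and second similarities, and the average is used here as one of the two stated options.

```python
def third_similarity(first_sim, second_sim):
    """Combine the similarities from two division granularities (average variant)."""
    return (first_sim + second_sim) / 2.0
```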
Accordingly, in one example, after the third similarity is calculated, whether the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities may be determined according to the third similarity. In one implementation, if the third similarity is higher than a first threshold, the commodity included in the first commodity image and the commodity included in the second commodity image may be considered to be the same or similar commodities. The first threshold mentioned here may be set according to practical situations, and the embodiment of the present application is not particularly limited.
Exemplary device
Based on the method for processing the commodity image provided by the above embodiment, the embodiment of the present application further provides a device, which is described below with reference to the accompanying drawings.
Referring to fig. 4, the diagram is a schematic structural diagram of a device for processing a commodity image according to an embodiment of the present application. The apparatus 400 may specifically include, for example: a first acquisition unit 401, a first division unit 402, a first determination unit 403, a second acquisition unit 404, and a second determination unit 405.
A first obtaining unit 401, configured to obtain a first product image and a second product image, where the first product image and the second product image have the same image size;
a first dividing unit 402, configured to divide the first commodity image into at least two sub-images;
a first determining unit 403, configured to determine, for each sub-image of the at least two sub-images, an image area in the second commodity image that matches each sub-image;
a second obtaining unit 404, configured to obtain a similarity between each sub-image and the determined image area in the second commodity image that matches with each sub-image;
a second determining unit 405, configured to determine a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and the image region matched with each sub-image in the second commodity image, and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
In one implementation manner, the first dividing unit 402 is configured to:
dividing the first commodity image into a × b sub-images in an a × b manner, wherein a and b are both natural numbers, a × b is a natural number greater than 2, and a and b are not both equal to 2.
In an implementation manner, the at least two sub-images include a first sub-image, and the first determining unit 403 is configured to:
determining the image area in the second commodity image that has the highest degree of matching with the first sub-image as the image area matched with the first sub-image.
In one implementation, the apparatus further comprises:
a first calculating unit, configured to calculate an overlapping degree of each sub-image and the determined image region matching the each sub-image, respectively;
the second determining unit 405 is configured to:
and determining the first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area in the second commodity image that matches each sub-image, the degree of overlap between each sub-image and the determined image area that matches each sub-image, and the weight of each sub-image.
In one implementation, the apparatus further comprises:
the second dividing unit is used for dividing the first commodity image into c x d sub-images according to a c x d mode, wherein c x d is a natural number larger than 1, and both c and d are natural numbers; a is not equal to c, and/or, b is not equal to d;
a third determining unit, configured to determine, for each sub-image in the c × d sub-images, an image area in the second product image that matches each sub-image;
a third obtaining unit, configured to obtain a similarity between each sub-image in the c × d sub-images and the determined image region in the second product image that matches with each sub-image in the c × d sub-images;
a fourth determining unit, configured to determine a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image region matched with each sub-image in the c x d sub-images in the second commodity image, and the weight of each sub-image in the c x d sub-images;
a fifth determining unit, configured to determine a third similarity between the first commodity image and the second commodity image according to the first similarity and the second similarity.
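How the first similarity and the second similarity are fused into the third similarity is left open here; a simple weighted mixture is one possible, purely illustrative, reading (alpha is an assumed coefficient, not a value given by the embodiment):

    def third_similarity(first_sim: float, second_sim: float,
                         alpha: float = 0.5) -> float:
        # first_sim comes from the a x b division, second_sim from the
        # c x d division; alpha balances the two granularities.
        return alpha * first_sim + (1.0 - alpha) * second_sim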
In one implementation, if c and d are both 2, or c x d is equal to 2, the weight of each sub-image in the c x d sub-images is the same; if c x d is greater than 2, and c and d are not both equal to 2, the weight of a sub-image located in the center region of the first commodity image among the c x d sub-images is higher than the weight of a sub-image located in the edge region of the first commodity image among the c x d sub-images.
In one implementation, the apparatus further comprises:
a sixth determining unit, configured to determine that a commodity included in the first commodity image and a commodity included in the second commodity image are the same or similar commodities when the first similarity between the first commodity image and the second commodity image is higher than a first threshold.
Since the apparatus 400 is an apparatus corresponding to the method provided in the above method embodiment, and the specific implementation of each unit of the apparatus 400 is the same as that of the above method embodiment, for the specific implementation of each unit of the apparatus 400, reference may be made to the description part of the above method embodiment, and details are not repeated here.
The method provided by the embodiment of the present application may be executed by a client or a server, and the client and the server that execute the method are described below separately.
Fig. 5 shows a block diagram of a client 500. For example, client 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.
Referring to fig. 5, client 500 may include one or more of the following components: processing component 502, memory 504, power component 506, multimedia component 508, audio component 510, input/output (I/O) interface 512, sensor component 514, and communication component 516.
Processing component 502 generally controls the overall operation of the client 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 may include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
Memory 504 is configured to store various types of data to support operations at client 500. Examples of such data include instructions for any application or method operating on client 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 506 provides power to the various components of client 500. Power components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for client 500.
The multimedia component 508 includes a screen that provides an output interface between the client 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. When the client 500 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, audio component 510 includes a Microphone (MIC) configured to receive external audio signals when client 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor component 514 includes one or more sensors for providing various aspects of status assessment for client 500. For example, sensor component 514 may detect an open/closed state of client 500, a relative positioning of components such as the display and keypad of client 500, a change in position of client 500 or of a component of client 500, the presence or absence of user contact with client 500, an orientation or acceleration/deceleration of client 500, and a change in temperature of client 500. The sensor component 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
Communications component 516 is configured to facilitate communications between client 500 and other devices in a wired or wireless manner. Client 500 may access a wireless network based on a communication standard, such as WiFi, 2G or 5G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the client 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the following method:
acquiring a first commodity image and a second commodity image, wherein the sizes of the first commodity image and the second commodity image are the same;
dividing the first commodity image into at least two sub-images;
respectively determining, for each sub-image of the at least two sub-images, an image area in the second commodity image that matches the sub-image;
acquiring the similarity of each sub-image and the image area matched with each sub-image in the second commodity image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
dividing the first commodity image into a x b sub-images in an a x b manner, wherein a and b are both natural numbers, a x b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the determining the image area in the second commodity image that matches the first sub-image includes:
and determining the image area in the second commodity image that best matches the first sub-image as the image area matched with the first sub-image.
In one implementation, the method further comprises:
respectively calculating the overlapping degree of each sub-image and the determined image area matched with each sub-image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image, wherein the determining comprises the following steps:
and determining a first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area in the second commodity image that matches the sub-image, the overlapping degree between each sub-image and that matched image area, and the weight of each sub-image.
In one implementation, the method further comprises:
dividing the first commodity image into c x d sub-images according to a c x d mode, wherein c x d is a natural number larger than 1, and both c and d are natural numbers; a is not equal to c, and/or, b is not equal to d;
for each sub-image in the c x d sub-images, respectively determining an image area in the second commodity image, which is matched with each sub-image;
acquiring the similarity of each sub-image in the c x d sub-images and the determined image area matched with each sub-image in the c x d sub-images in the second commodity image;
determining a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image area matched with each sub-image in the c x d sub-images in the second commodity image and the weight of each sub-image in the c x d sub-images;
and determining the third similarity of the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if c and d are both 2, or c x d is equal to 2, the weight of each sub-image in the c x d sub-images is the same; if c x d is greater than 2, and c and d are not both equal to 2, the weight of a sub-image located in the center region of the first commodity image among the c x d sub-images is higher than the weight of a sub-image located in the edge region of the first commodity image among the c x d sub-images.
In one implementation, the method further comprises:
when the first similarity of the first commodity image and the second commodity image is higher than a first threshold value, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
Fig. 6 is a schematic structural diagram of a server in an embodiment of the present application. The server 600 may vary significantly due to configuration or performance, and may include one or more Central Processing Units (CPUs) 622 (e.g., one or more processors) and memory 632, one or more storage media 630 (e.g., one or more mass storage devices) storing applications 642 or data 644. Memory 632 and storage medium 630 may be, among other things, transient or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 622 may be configured to communicate with the storage medium 630 and execute a series of instruction operations in the storage medium 630 on the server 600.
Still further, in one example, the central processor 622 may perform the following method:
acquiring a first commodity image and a second commodity image, wherein the sizes of the first commodity image and the second commodity image are the same;
dividing the first commodity image into at least two sub-images;
respectively determining, for each sub-image of the at least two sub-images, an image area in the second commodity image that matches the sub-image;
acquiring the similarity of each sub-image and the image area matched with each sub-image in the second commodity image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
In one implementation, the dividing the first merchandise image into at least two sub-images includes:
dividing the first commodity image into a x b sub-images in an a x b manner, wherein a and b are both natural numbers, a x b is a natural number greater than 2, and a and b are not both equal to 2.
In one implementation, the determining the image area in the second commodity image that matches the first sub-image includes:
and determining the image area in the second commodity image that best matches the first sub-image as the image area matched with the first sub-image.
In one implementation, the method further comprises:
respectively calculating the overlapping degree of each sub-image and the determined image area matched with each sub-image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image, wherein the determining comprises the following steps:
and determining a first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area in the second commodity image that matches the sub-image, the overlapping degree between each sub-image and that matched image area, and the weight of each sub-image.
In one implementation, the method further comprises:
dividing the first commodity image into c x d sub-images according to a c x d mode, wherein c x d is a natural number larger than 1, and both c and d are natural numbers; a is not equal to c, and/or, b is not equal to d;
for each sub-image in the c x d sub-images, respectively determining an image area in the second commodity image, which is matched with each sub-image;
acquiring the similarity of each sub-image in the c x d sub-images and the determined image area matched with each sub-image in the c x d sub-images in the second commodity image;
determining a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image area matched with each sub-image in the c x d sub-images in the second commodity image and the weight of each sub-image in the c x d sub-images;
and determining the third similarity of the first commodity image and the second commodity image according to the first similarity and the second similarity.
In one implementation, if c and d are both 2, or c x d is equal to 2, the weight of each sub-image in the c x d sub-images is the same; if c x d is greater than 2, and c and d are not both equal to 2, the weight of a sub-image located in the center region of the first commodity image among the c x d sub-images is higher than the weight of a sub-image located in the edge region of the first commodity image among the c x d sub-images.
In one implementation, the method further comprises:
when the first similarity of the first commodity image and the second commodity image is higher than a first threshold value, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
The server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input-output interfaces 656, one or more keyboards 656, and/or one or more operating systems 641, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
Embodiments of the present application also provide a computer-readable medium having stored thereon instructions, which, when executed by one or more processors, cause an apparatus to perform the method for processing an image of an article provided by the above method embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for processing an image of a commodity, the method comprising:
acquiring a first commodity image and a second commodity image, wherein the sizes of the first commodity image and the second commodity image are the same;
dividing the first commodity image into at least two sub-images;
respectively determining, for each sub-image of the at least two sub-images, an image area in the second commodity image that matches the sub-image;
acquiring the similarity of each sub-image and the image area matched with each sub-image in the second commodity image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
2. The method of claim 1, wherein said dividing the first merchandise image into at least two sub-images comprises:
dividing the first commodity image into a x b sub-images in an a x b manner, wherein a and b are both natural numbers, a x b is a natural number greater than 2, and a and b are not both equal to 2.
3. The method of claim 1, wherein the at least two sub-images comprise a first sub-image, and wherein determining the image area of the second merchandise image that matches the first sub-image comprises:
and determining the image area in the second commodity image that best matches the first sub-image as the image area matched with the first sub-image.
4. The method of claim 1, further comprising:
respectively calculating the overlapping degree of each sub-image and the determined image area matched with each sub-image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image, wherein the determining comprises the following steps:
and determining a first similarity of the first commodity image and the second commodity image according to the similarity between each sub-image and the determined image area in the second commodity image that matches the sub-image, the overlapping degree between each sub-image and that matched image area, and the weight of each sub-image.
5. The method of claim 1, further comprising:
dividing the first commodity image into c x d sub-images according to a c x d mode, wherein c x d is a natural number larger than 1, and both c and d are natural numbers; a is not equal to c, and/or, b is not equal to d;
for each sub-image in the c x d sub-images, respectively determining an image area in the second commodity image, which is matched with each sub-image;
acquiring the similarity of each sub-image in the c x d sub-images and the determined image area matched with each sub-image in the c x d sub-images in the second commodity image;
determining a second similarity of the first commodity image and the second commodity image according to the similarity of each sub-image in the c x d sub-images and the image area matched with each sub-image in the c x d sub-images in the second commodity image and the weight of each sub-image in the c x d sub-images;
and determining the third similarity of the first commodity image and the second commodity image according to the first similarity and the second similarity.
6. The method of claim 5, wherein if c and d are both 2, or c x d is equal to 2, the weight of each of the c x d sub-images is the same; if c x d is greater than 2, and c and d are not both equal to 2, the weight of a sub-image located in the center region of the first commodity image among the c x d sub-images is higher than the weight of a sub-image located in the edge region of the first commodity image among the c x d sub-images.
7. The method of claim 1, further comprising:
when the first similarity of the first commodity image and the second commodity image is higher than a first threshold value, determining that the commodity included in the first commodity image and the commodity included in the second commodity image are the same or similar commodities.
8. An apparatus for processing an image of a commodity, the apparatus comprising:
a first acquisition unit, used for acquiring a first commodity image and a second commodity image, wherein the image sizes of the first commodity image and the second commodity image are the same;
the first dividing unit is used for dividing the first commodity image into at least two sub-images;
the first determining unit is used for respectively determining, for each sub-image of the at least two sub-images, an image area in the second commodity image that matches the sub-image;
the second acquisition unit is used for acquiring the similarity between each sub-image and the image area matched with each sub-image in the second commodity image;
a second determining unit, configured to determine a first similarity between the first commodity image and the second commodity image according to the similarity between each sub-image and the image region matching each sub-image in the second commodity image, and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
9. A device for processing an image of a commodity, comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory, and wherein execution of the one or more programs by one or more processors includes instructions for:
acquiring a first commodity image and a second commodity image, wherein the sizes of the first commodity image and the second commodity image are the same;
dividing the first commodity image into at least two sub-images;
respectively determining, for each sub-image of the at least two sub-images, an image area in the second commodity image that matches the sub-image;
acquiring the similarity of each sub-image and the image area matched with each sub-image in the second commodity image;
determining a first similarity of the first commodity image and the second commodity image according to the similarity of each sub-image and the image area matched with each sub-image in the second commodity image and the weight of each sub-image; wherein:
the weight of the sub-image located in the center region of the first commodity image is higher than the weight of the sub-image located in the edge region of the first commodity image.
10. A computer-readable medium having stored thereon instructions, which when executed by one or more processors, cause an apparatus to perform the method of any one of claims 1 to 7.
CN202011059920.2A 2020-09-30 2020-09-30 Commodity image processing method and device Active CN112200783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011059920.2A CN112200783B (en) 2020-09-30 2020-09-30 Commodity image processing method and device

Publications (2)

Publication Number Publication Date
CN112200783A true CN112200783A (en) 2021-01-08
CN112200783B CN112200783B (en) 2024-06-28

Family

ID=74007244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011059920.2A Active CN112200783B (en) 2020-09-30 2020-09-30 Commodity image processing method and device

Country Status (1)

Country Link
CN (1) CN112200783B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009251850A (en) * 2008-04-04 2009-10-29 Albert:Kk Commodity recommendation system using similar image search
CN108615042A (en) * 2016-12-09 2018-10-02 炬芯(珠海)科技有限公司 The method and apparatus and player of video format identification
US20190236789A1 (en) * 2017-04-11 2019-08-01 Rakuten, Inc. Image processing device, image processing method, and program
CN110135517A (en) * 2019-05-24 2019-08-16 北京百度网讯科技有限公司 For obtaining the method and device of vehicle similarity

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
朱远清; 李才伟: "一种基于前景与背景划分的区域图像检索方法及实现" [A region-based image retrieval method and its implementation based on foreground and background division], 中国图象图形学报, no. 02, 28 February 2007 (2007-02-28) *
李苏梅; 韩国强: "感兴趣区域的确定及相似度计算方法" [Determination of regions of interest and a similarity calculation method], 湖南工业大学学报, no. 04, 15 July 2008 (2008-07-15) *
王华秋; 聂珍: "快速搜索密度峰值聚类在图像检索中的应用" [Application of clustering by fast search of density peaks in image retrieval], 计算机工程与设计, no. 11, 16 November 2016 (2016-11-16) *

Also Published As

Publication number Publication date
CN112200783B (en) 2024-06-28

Similar Documents

Publication Publication Date Title
CN106651955B (en) Method and device for positioning target object in picture
US10127471B2 (en) Method, device, and computer-readable storage medium for area extraction
RU2577188C1 (en) Method, apparatus and device for image segmentation
US10007841B2 (en) Human face recognition method, apparatus and terminal
EP3188094A1 (en) Method and device for classification model training
US10643054B2 (en) Method and device for identity verification
CN103996186B (en) Image cropping method and device
CN107480665B (en) Character detection method and device and computer readable storage medium
CN107944367B (en) Face key point detection method and device
CN107464253B (en) Eyebrow positioning method and device
CN107977934B (en) Image processing method and device
EP3057304A1 (en) Method and apparatus for generating image filter
EP3958110B1 (en) Speech control method and apparatus, terminal device, and storage medium
CN106485567B (en) Article recommendation method and device
CN107330868A (en) image processing method and device
WO2020048392A1 (en) Application virus detection method, apparatus, computer device, and storage medium
CN106648063B (en) Gesture recognition method and device
US10297015B2 (en) Method, device and computer-readable medium for identifying feature of image
US9633444B2 (en) Method and device for image segmentation
US20150339016A1 (en) Tab creation method, device, and terminal
US10083346B2 (en) Method and apparatus for providing contact card
JP2021531589A (en) Motion recognition method, device and electronic device for target
CN107742120A (en) The recognition methods of bank card number and device
US20220222831A1 (en) Method for processing images and electronic device therefor
US9665925B2 (en) Method and terminal device for retargeting images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant