CN109816045A - Commodity recognition method and device - Google Patents

Commodity recognition method and device

Info

Publication number
CN109816045A
CN109816045A
Authority
CN
China
Prior art keywords
image
commodity
identified
recognition result
rectangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910110364.8A
Other languages
Chinese (zh)
Inventor
岳振
翟建光
李佳
李新
李昊旻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Intelligent Business Systems Co., Ltd.
Original Assignee
Qingdao Hisense Intelligent Business Systems Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Intelligent Business Systems Co., Ltd.
Priority to CN201910110364.8A
Publication of CN109816045A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a commodity recognition method and device, relating to the field of commodity identification. Embodiments of the present invention can accurately identify a commodity while remaining fast, secure, and low-cost. The method comprises: obtaining a first image generated by photographing a commodity to be identified from a first shooting angle and a second image generated by photographing the commodity to be identified from a second shooting angle; wherein, relative to the position of the commodity to be identified, the first shooting angle and the second shooting angle are asymmetric in the horizontal direction; and identifying the type of the commodity to be identified from the first image and the second image using a preset neural network model. The present invention is applied to commodity identification.

Description

Commodity recognition method and device
Technical field
The present invention relates to the field of commodity identification, and in particular to a commodity recognition method and device.
Background technique
In recent years, with the rapid development of retail formats such as unmanned stores, how to identify commodity types automatically, quickly, and conveniently has become a problem to be solved.
At present, existing commodity identification approaches fall mainly into two categories. 1. Barcode recognition. This approach identifies the commodity type by scanning the barcode on the commodity packaging. When the barcode is used to identify the commodity type, the user must first find the barcode and align it with the scanning device before the scan can be completed, which is cumbersome to operate. 2. RFID (Radio Frequency Identification) recognition. In this approach, an RFID tag is attached to the commodity, and when the user purchases the commodity, it is identified by receiving the radio signal emitted by the RFID tag. This approach has high operating costs and is prone to counterfeiting.
In view of the above existing commodity recognition methods, the present invention proposes a new commodity recognition method that can complete the task of commodity identification while remaining fast, secure, and low-cost.
Summary of the invention
Embodiments of the present invention provide a commodity recognition method and device that can accurately identify commodities while remaining fast, secure, and low-cost.
To achieve the above objectives, the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides a commodity recognition method, comprising: obtaining a first image generated by photographing a commodity to be identified from a first shooting angle and a second image generated by photographing the commodity to be identified from a second shooting angle; wherein, relative to the position of the commodity to be identified, the first shooting angle and the second shooting angle are asymmetric in the horizontal direction; and identifying the type of the commodity to be identified from the first image and the second image using a preset neural network model.
In a second aspect, an embodiment of the present invention provides a commodity identification device, comprising: an acquiring unit for obtaining a first image generated by photographing a commodity to be identified from a first shooting angle and a second image generated by photographing the commodity to be identified from a second shooting angle, wherein, relative to the position of the commodity to be identified, the first shooting angle and the second shooting angle are asymmetric in the horizontal direction; and a recognition unit for identifying the type of the commodity to be identified from the first image and the second image using a preset neural network model.
In a third aspect, an embodiment of the present invention provides a commodity identification device, comprising: a processor, a memory, a bus, and a communication interface. The memory is used to store computer-executable instructions, and the processor is connected to the memory through the bus. When the commodity identification device runs, the processor executes the computer-executable instructions stored in the memory, so that the commodity identification device performs the commodity recognition method provided by the above first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium comprising instructions that, when run on a commodity identification device, cause the commodity identification device to perform the commodity recognition method provided by the above first aspect.
In embodiments of the present invention, it is observed that existing methods that use object images for object recognition usually rely on an image taken from a single shooting angle. Because its recognition accuracy is low, this kind of recognition cannot meet the accuracy requirements of commodity identification. To address this problem, embodiments of the present invention acquire images of the object to be identified from two different shooting angles and identify the type of the commodity to be identified from both images, thereby improving recognition accuracy. In addition, when setting the shooting angles, in order to capture more appearance features of the object to be identified, the first shooting angle and the second shooting angle are set in embodiments of the present invention to two shooting angles that are asymmetric in the horizontal direction. If the face of the commodity captured from the first shooting angle is the front, the second shooting angle can then capture a side of the commodity, so that as many appearance features of the object to be identified as possible are acquired, further improving recognition accuracy.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a first schematic diagram of photographing a commodity according to an embodiment of the present invention;
Fig. 2 is a first flow diagram of a commodity recognition method according to an embodiment of the present invention;
Fig. 3 is a second schematic diagram of photographing a commodity according to an embodiment of the present invention;
Fig. 4 is a third schematic diagram of photographing a commodity according to an embodiment of the present invention;
Fig. 5 is a flow diagram of image preprocessing according to an embodiment of the present invention;
Fig. 6 is a first schematic diagram of an image of a commodity to be identified generated during preprocessing according to an embodiment of the present invention;
Fig. 7 is a second schematic diagram of an image of a commodity to be identified generated during preprocessing according to an embodiment of the present invention;
Fig. 8 is a third schematic diagram of an image of a commodity to be identified generated during preprocessing according to an embodiment of the present invention;
Fig. 9 is a fourth schematic diagram of an image of a commodity to be identified generated during preprocessing according to an embodiment of the present invention;
Fig. 10 is a fifth schematic diagram of an image of a commodity to be identified generated during preprocessing according to an embodiment of the present invention;
Fig. 11 is a sixth schematic diagram of an image of a commodity to be identified generated during preprocessing according to an embodiment of the present invention;
Fig. 12 is a seventh schematic diagram of an image of a commodity to be identified generated during preprocessing according to an embodiment of the present invention;
Fig. 13 is a fourth schematic diagram of photographing a commodity according to an embodiment of the present invention;
Fig. 14 is a fifth schematic diagram of photographing a commodity according to an embodiment of the present invention;
Fig. 15 is a sixth schematic diagram of photographing a commodity according to an embodiment of the present invention;
Fig. 16 is a second flow diagram of a commodity recognition method according to an embodiment of the present invention;
Fig. 17 is a first structural schematic diagram of a commodity identification device according to an embodiment of the present invention;
Fig. 18 is a second structural schematic diagram of a commodity identification device according to an embodiment of the present invention;
Fig. 19 is a third structural schematic diagram of a commodity identification device according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As used in the present invention, the terms "unit" and "module" are intended to refer to a computer-related entity, which may be hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a unit may be, but is not limited to: a process running on a processor, a processor, an executable file, a thread of execution, a program, and/or a computer.
First, the inventive principle of the present invention is introduced. In embodiments of the present invention, it is observed that existing methods that use object images for object recognition usually perform recognition from an image taken from a single shooting angle. Because a single object image contains relatively few appearance features of the object, the recognition accuracy is low and cannot meet the accuracy requirements of commodity identification. Therefore, embodiments of the present invention acquire images of the object to be identified from two shooting angles and identify the object using the images taken from both angles.
Based on the above design idea, the present invention further considers that the information on corresponding faces of the packaging of various commodities may be quite similar; for example, the top and bottom surfaces of a commodity are usually used to print information such as the production date and shelf life. Therefore, when images of the object to be identified are acquired from two shooting angles in the manner shown in Fig. 1, where the first shooting angle and the second shooting angle are symmetric in the horizontal direction relative to the object to be identified, the two images may capture the top and the bottom of the object at the same time. In that case, not enough features of the object to be identified are obtained, and the advantage of dual shooting angles cannot be exploited.
Therefore, in embodiments of the present invention, the first image and the second image are obtained by photographing the object to be identified from a first shooting angle and a second shooting angle that are asymmetric in the horizontal direction relative to the position of the commodity to be identified, and the commodity to be identified is then identified using the first image and the second image, so that the type of the commodity can be recognized accurately.
Embodiment one:
Based on the above inventive principle, an embodiment of the present invention provides a commodity recognition method. As shown in Fig. 2, the method includes:
S101. Obtain Q images of the commodity to be identified.
Wherein, the Q images of the commodity to be identified specifically include: images of the commodity to be identified taken, on a sphere centered on the position of the commodity to be identified, at every preset angle in the horizontal or vertical direction relative to the center of the sphere.
Specifically, when collecting sample images for training the neural network model, a spherical acquisition scheme is used in embodiments of the present invention. As shown in Figs. 3 and 4, the commodity (including the commodity to be identified) is placed at the center of sphere a, and the camera is placed on the surface of sphere a. One photo of the commodity is taken each time the camera moves by the preset angle in the horizontal or vertical direction relative to the center of the sphere. If, for example, the preset angle is 10°, (360/10) * (360/10) = 1296 pictures of the commodity are obtained, i.e., Q = 1296. In addition, in order to guarantee the training effect, further pictures may be added beyond these 1296 pictures, in which case Q > 1296.
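As a minimal illustration of this spherical acquisition scheme, the Python sketch below enumerates the camera poses implied by a 10° step; the parameterization by one horizontal and one vertical angle and the radius value are assumptions made only for the example, since the text fixes nothing beyond the angular step and the requirement that the camera stays on the sphere.

```python
import math

def spherical_poses(step_deg=10, radius_cm=50.0):
    """Enumerate camera positions on a sphere centered on the commodity.

    One pose is generated every `step_deg` degrees in both the horizontal and
    the vertical direction, giving (360 / step)^2 poses (1296 for a 10 degree
    step). The radius is an assumed example value; the text only requires it
    to match the camera-to-commodity distance used later at recognition time.
    """
    poses = []
    for h in range(0, 360, step_deg):        # horizontal angle
        for v in range(0, 360, step_deg):    # vertical angle
            x = radius_cm * math.cos(math.radians(v)) * math.cos(math.radians(h))
            y = radius_cm * math.cos(math.radians(v)) * math.sin(math.radians(h))
            z = radius_cm * math.sin(math.radians(v))
            poses.append((h, v, (x, y, z)))
    return poses

print(len(spherical_poses()))  # 1296 sample images per commodity with a 10 degree step
```

Any extra pictures added to guarantee the training effect simply raise Q above the 1296 poses produced here.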
Specifically, in one implementation, in order to guarantee the training effect, after the Q images of the commodity to be identified are obtained and before they are input into the preset neural network for training, the embodiment of the present invention further includes:
S102. Preprocess the Q images of the commodity to be identified.
Wherein, preprocessing the Q images of the commodity to be identified, as shown in Fig. 5, specifically includes performing the following processing on each of the Q images of the commodity to be identified:
S1021. Convert the image to grayscale to obtain a grayscale image of the commodity to be identified.
Illustratively, Fig. 6 shows the grayscale image obtained after converting an image of a certain commodity to grayscale.
S1022. Perform edge sharpening on the grayscale image generated in step S1021 to obtain a sharpened image.
For example, edge sharpening of the grayscale image shown in Fig. 6 produces the sharpened image shown in Fig. 7, in which the edge information of the commodity is more prominent than in Fig. 6.
S1023. Perform edge extraction on the sharpened image generated in step S1022 to generate an edge image.
For example, a Canny operation is applied to the image shown in Fig. 7 to extract edges, producing the edge image shown in Fig. 8.
S1024. Binarize the edge image generated in step S1023 to generate a binary image.
For example, binarizing the edge image shown in Fig. 8 produces the binary image shown in Fig. 9.
S1025. Perform a closing operation on the binary image generated in step S1024 to generate a closed image.
For example, applying a closing operation to the binary image shown in Fig. 9 produces the closed image shown in Fig. 10.
S1026. Delete the background in the closed image generated in step S1025 and output the image with the background removed, completing the preprocessing of the image.
Specifically, deleting the background in the closed image may specifically include:
S1026a. According to a preset method, select the contours contained in the closed image with boxes, generating M contour regions.
S1026b. Determine the minimum enclosing rectangle of each of the M contour regions, generating M minimum enclosing rectangles.
For example, as shown in Fig. 11, the white rectangles in the figure are the minimum enclosing rectangles of the corresponding contour regions.
S1026c. Calculate the center position and area of each of the M minimum enclosing rectangles.
S1026d. Determine the redundant rectangles among the M minimum enclosing rectangles. Wherein, the redundant rectangles include at least: minimum enclosing rectangles whose center position is not in a preset region, and minimum enclosing rectangles whose area is smaller than a preset area.
Wherein, the preset region may be determined according to the placement position of the commodity. Specifically, when the picture is taken, the commodity may be placed at the center of the picture, so a central region of preset size in the image may be used as the above preset region.
Specifically, still taking Fig. 11 as an example, it can be seen that the three minimum enclosing rectangles near the upper edge of Fig. 11 deviate from the center of the picture, while the three minimum enclosing rectangles outlined by the dashed circles are relatively small in area. These six minimum enclosing rectangles are therefore treated as redundant rectangles.
S1026e. Generate a target rectangular frame. Wherein, the target rectangular frame can enclose the minimum enclosing rectangles among the M minimum enclosing rectangles other than the redundant rectangles.
Continuing the above example, after the redundant rectangles in Fig. 11 are determined, the target rectangular frame in the image can be generated, as shown in Fig. 12.
S1026f. Delete the image portion outside the target rectangular frame in the closed image, completing the operation of deleting the background from the closed image.
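The text fixes the sequence of operations S1021-S1026f but not the concrete operators or thresholds. The Python/OpenCV sketch below is therefore only one plausible realization of the pipeline under assumed parameter values (the sharpening kernel, Canny thresholds, closing kernel size, preset central region, and preset minimum area are all assumptions), and it uses upright bounding rectangles as a stand-in for the minimum enclosing rectangles.

```python
import cv2
import numpy as np

def preprocess(image_bgr, min_area=500, center_margin=0.25):
    """One possible realization of steps S1021-S1026f (all thresholds assumed)."""
    # S1021: grayscale
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # S1022: edge sharpening with an unsharp-mask style kernel
    sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharp = cv2.filter2D(gray, -1, sharpen_kernel)
    # S1023: edge extraction with the Canny operator
    edges = cv2.Canny(sharp, 50, 150)
    # S1024: binarization
    _, binary = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)
    # S1025: morphological closing
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # S1026a-b: contours and their enclosing rectangles
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = [cv2.boundingRect(c) for c in contours]   # (x, y, w, h)

    # S1026c-d: drop redundant rectangles whose center lies outside the
    # central region of the picture or whose area is below the preset area.
    h_img, w_img = closed.shape
    cx_lo, cx_hi = w_img * center_margin, w_img * (1 - center_margin)
    cy_lo, cy_hi = h_img * center_margin, h_img * (1 - center_margin)
    kept = []
    for (x, y, w, h) in rects:
        cx, cy = x + w / 2, y + h / 2
        if w * h < min_area:
            continue
        if not (cx_lo <= cx <= cx_hi and cy_lo <= cy <= cy_hi):
            continue
        kept.append((x, y, w, h))
    if not kept:
        return closed

    # S1026e: target rectangular frame enclosing all remaining rectangles
    x0 = min(x for x, _, _, _ in kept)
    y0 = min(y for _, y, _, _ in kept)
    x1 = max(x + w for x, _, w, _ in kept)
    y1 = max(y + h for _, y, _, h in kept)

    # S1026f: delete everything outside the target rectangular frame
    result = np.zeros_like(closed)
    result[y0:y1, x0:x1] = closed[y0:y1, x0:x1]
    return result
```

In practice the margin and area thresholds would be tuned to the counter layout described later, so that only the rectangles covering the commodity itself survive.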
After the preprocessing operation of step S102 has been performed on the Q images of the commodity to be identified, the method further includes:
S103. Train the preset neural network model using the above Q images of the commodity to be identified.
Specifically, in practical applications there is usually more than one kind of commodity to be recognized. Therefore, when training the preset neural network, the training need not be limited to the Q images of one commodity to be identified; images of other kinds of commodities may also be added to the training samples. Of course, when images of other kinds of commodities are used to train the preset neural network model, the contents of the above steps S101 and S102 may also be used to process the images obtained by the photographic device.
For example, suppose there are 1000 kinds of commodities in total. Then, according to the acquisition scheme shown in Figs. 3 and 4, 1296 images are collected for each kind of commodity, 1,296,000 images in total. The above 1,296,000 images are preprocessed using step S102, and the preset neural network model is then trained using the preprocessed images. In addition to the 1,296,000 preprocessed images, further pictures may also be added when training the preset neural network model.
Specifically, the neural network model used in the embodiment of the present invention may be the Inception_V4 neural network model developed by Google. During training, the number of training iterations may be set to 1,000,000 to 6,000,000, with 100 pictures randomly selected from the sample images (for example, the 1,296,000 images in the above example) for each iteration, and the training is performed from scratch. Specifically, the trained model may be deployed on the server side, with a program written to receive pictures transmitted from the terminal at any time; after a picture is received, the trained neural network model is used to make a judgment, and the judgment result is returned to the terminal.
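The text names Inception_V4, a batch of 100 randomly drawn sample pictures per iteration, and training from scratch, but it does not fix a framework or hyperparameters. The sketch below shows one way such a setup could look in PyTorch using the `timm` implementation of Inception-v4; the optimizer, learning rate, input size handling, and folder-per-class dataset layout are assumptions, not part of the patent.

```python
import torch
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def train(sample_dir, num_classes=1000, iterations=1_000_000, batch_size=100):
    # Inception-v4 backbone, trained from scratch (pretrained=False).
    model = timm.create_model("inception_v4", pretrained=False, num_classes=num_classes)
    model.train().cuda()

    tfm = transforms.Compose([
        transforms.Resize((299, 299)),   # typical Inception-v4 input size
        transforms.ToTensor(),
    ])
    # Assumed layout: one folder per commodity type holding its preprocessed pictures.
    dataset = datasets.ImageFolder(sample_dir, transform=tfm)
    # shuffle=True approximates "randomly select 100 pictures per iteration".
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=8)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()

    step = 0
    while step < iterations:
        for images, labels in loader:
            images, labels = images.cuda(), labels.cuda()
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            step += 1
            if step >= iterations:
                break
    torch.save(model.state_dict(), "commodity_inception_v4.pt")
```

The saved weights would then be loaded by the server-side program that receives pictures from the terminal and returns the judgment result, as described above.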
After the training of the preset neural network model has been completed, recognition of the commodity to be identified can begin. Specifically, the method provided by the present invention further includes:
S104. Obtain a first image generated by photographing the commodity to be identified from a first shooting angle and a second image generated by photographing the commodity to be identified from a second shooting angle.
Wherein, relative to the position of the commodity to be identified, the first shooting angle and the second shooting angle are asymmetric in the horizontal direction.
Illustratively, Figs. 13-15 show a commodity identification apparatus provided by an embodiment of the present invention (Fig. 13 is a three-dimensional view of the apparatus, Fig. 14 is a top view, and Fig. 15 is a front view). The commodity identification apparatus includes a counter 01, a camera 02, a camera 03, and a commodity identification device 04. The counter may be set on the great-circle cross-section passing through the center of sphere b and is used to place the commodity to be identified. Camera 02 and camera 03 are arranged on the surface of sphere b above the counter, and camera 02 and camera 03 are asymmetric in the horizontal direction relative to counter 01. Illustratively, in this commodity identification apparatus, as shown in Fig. 14, the horizontal angle between camera 02 and camera 03 relative to the center of counter 01 is 120°, and camera 02 and camera 03 are used to capture the first image and the second image, respectively. In addition, in the illustrative apparatus of Figs. 13-15, counter 01 is a square with a side length of 36 cm; in the horizontal direction, camera 02 and camera 03 are each about 14 cm from the lateral symmetry axis of counter 01 and about 25 cm from the longitudinal symmetry axis of counter 01, and camera 02 and camera 03 are each about 42 cm above counter 01 in the vertical direction.
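A quick check of these example dimensions is consistent with the stated 120° horizontal angle. The sketch below is only an illustration under an assumed reading of the layout: the longitudinal symmetry axis of the counter is taken as the y-axis (so the 25 cm offsets become x = ±25) and the lateral symmetry axis as the x-axis (so the 14 cm offset becomes y = 14 for both cameras), with the two cameras mirrored across the longitudinal axis.

```python
import numpy as np

# Horizontal-plane positions of the two cameras relative to the counter center,
# in cm (an assumed reading of the distances given in the text).
cam02 = np.array([ 25.0, 14.0])
cam03 = np.array([-25.0, 14.0])

cos_angle = cam02 @ cam03 / (np.linalg.norm(cam02) * np.linalg.norm(cam03))
print(np.degrees(np.arccos(cos_angle)))  # about 121.5 degrees, close to the stated 120
```

The computed angle also falls inside the 60°-120° range recommended in the implementation described next.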
In one implementation, in order to capture more appearance features of the commodity to be identified, in the embodiment of the present invention the angle in the horizontal direction between the first shooting angle and the second shooting angle, relative to the position of the commodity to be identified, is between 60° and 120°. This guarantees that at least one of the first image and the second image captures the front of the commodity to be identified and at least one image captures a side of the commodity to be identified, thereby avoiding the situation where the two images capture the top and the bottom of the commodity respectively.
In one implementation, in order to improve recognition accuracy, in the embodiment of the present invention the distance between the camera and the commodity to be identified is the same when the camera photographs the commodity from the first shooting angle and the second shooting angle to generate the first image and the second image as when the camera photographs the above Q images of the commodity to be identified. For example, the radius r1 of sphere a in Figs. 3 and 4 and the radius r2 of sphere b in Figs. 13-15 may be set to be equal.
In one implementation, in order to make the features of the commodity to be identified more prominent in the first image and the second image, in the embodiment of the present invention the above step S104 may specifically include:
S1041. Obtain a first original image taken by the photographic device of the commodity to be identified from the first shooting angle, and a second original image taken by the photographic device of the commodity to be identified from the second shooting angle.
Specifically, as shown in Figs. 13-15, camera 02 photographs the commodity to be identified from the first shooting angle to obtain the first original image, and camera 03 photographs the commodity to be identified from the second shooting angle to obtain the second original image.
S1042. Preprocess the first original image and the second original image respectively to generate the first image and the second image.
Wherein, preprocessing the first original image or the second original image specifically includes: converting the image to grayscale to generate a grayscale image; performing edge sharpening on the grayscale image to generate a sharpened image; performing edge extraction on the sharpened image to generate an edge image; binarizing the edge image to generate a binary image; performing a closing operation on the binary image to generate a closed image; and deleting the background in the closed image and outputting the image with the background removed.
Wherein, deleting the background in the closed image specifically includes:
(1) according to a preset method, selecting the contours contained in the closed image with boxes, generating M contour regions;
(2) determining the minimum enclosing rectangle of each of the M contour regions, generating M minimum enclosing rectangles;
(3) calculating the center position and area of each of the M minimum enclosing rectangles;
(4) determining the redundant rectangles among the M minimum enclosing rectangles; the redundant rectangles including at least: minimum enclosing rectangles whose center position is not in a preset region, and minimum enclosing rectangles whose area is smaller than a preset area;
(5) generating a target rectangular frame that can enclose the minimum enclosing rectangles among the M minimum enclosing rectangles other than the redundant rectangles;
(6) deleting the image portion outside the target rectangular frame in the closed image.
For the specific implementation of preprocessing the first original image and the second original image in the above step S1042, reference may be made to the content of the above step S102, and details are not repeated here.
S105. Identify the type of the commodity to be identified from the first image and the second image using the preset neural network model.
Specifically, step S105 specifically includes:
S1051. Using the preset neural network model, generate a first recognition result from the first image and a second recognition result from the second image.
Wherein, the first recognition result includes the N most likely commodity types corresponding to the first image and the corresponding probability parameters; the second recognition result includes the N most likely commodity types corresponding to the second image and the corresponding probability parameters.
For example, Table 1 below shows the recognition result generated by the preset neural network model from an image of a commodity to be identified:
Name Score
0101 0.995924
0104 0.000645
0107 0.000344
0102 0.000303
0105 0.000160
Table 1
Table 1 contains the numbers ("Name") of the five most likely commodities and the probabilities ("Score") of those five commodities.
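The Name/Score listing in Table 1 is simply the top-N entries of the model's class probabilities. A minimal sketch of producing such a recognition result (the label strings, N = 5, and the assumption that the probabilities come from the network's softmax output are taken only for illustration):

```python
import numpy as np

def top_n_result(probs, labels, n=5):
    """Return the N most likely commodity types and their probability parameters."""
    order = np.argsort(probs)[::-1][:n]
    return {labels[i]: float(probs[i]) for i in order}

# Example mirroring Table 1: class labels are the commodity numbers.
labels = ["0101", "0102", "0103", "0104", "0105", "0106", "0107"]
probs = np.array([0.995924, 0.000303, 0.0001, 0.000645, 0.000160, 0.00009, 0.000344])
print(top_n_result(probs, labels))
```

One such mapping is produced for each of the first image and the second image; the two mappings are then compared as described in steps S1052 and S1053 below.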
S1052. If, in the first recognition result and the second recognition result, the probability parameter of one and only one commodity type is greater than a first probability threshold, determine that the commodity to be identified is the commodity type whose probability parameter is greater than the first probability threshold.
For example, the most likely commodity type in the first recognition result is commodity A with a probability of 60%, and the most likely commodity type in the second recognition result is commodity B with a probability of 95%. It is then determined that the commodity to be identified is commodity B.
S1053. If, in the first recognition result and the second recognition result, the probability parameters of two or more commodity types are greater than the first probability threshold, merge the probability parameters of identical commodity types in the first recognition result and the second recognition result to generate a third recognition result, and determine that the commodity to be identified is the commodity type with the largest probability parameter in the third recognition result.
For example, the most likely commodity type in the first recognition result is commodity A with a probability of 93%, and the most likely commodity type in the second recognition result is commodity B with a probability of 95%, so each of the two recognition results contains a commodity whose probability is greater than 90%. The probabilities of identical commodity types in the first recognition result and the second recognition result are then merged to generate a third recognition result. For example, if the probability of commodity A is 93% in the first recognition result and 4% in the second recognition result, and the probability of commodity B is 1% in the first recognition result and 95% in the second recognition result, then merging the two recognition results gives a summed probability of 97% for commodity A and 96% for commodity B. It is therefore determined that the commodity to be identified is commodity A.
Illustratively, the detailed flow of the above step S105 in the embodiment of the present invention is explained below with reference to Fig. 16:
S105a. Using the preset neural network model, generate a first recognition result (referred to as "result 1" in Fig. 16) from the first image and a second recognition result (referred to as "result 2" in Fig. 16) from the second image.
S105b. Judge whether the probability of the most likely commodity in result 1 and the probability of the most likely commodity in result 2 exceed 90%. According to the judgment, select one of S105c, S105d, and S105e to execute.
S105c. If the probability of the most likely commodity exceeds 90% in only one of the two results, output the most likely commodity in that result as the recognition output.
S105d. If the probabilities of the most likely commodities in both results exceed 90%, execute S105f.
S105e. If the probability of the most likely commodity does not exceed 90% in either result, execute S105i.
S105f. Judge whether the most likely commodities in the two results are the same commodity. If so, output that most likely commodity; if not, execute S105g.
S105g. Merge the two results by adding the probabilities of identical commodities in the two results, then re-rank the commodities by probability to generate result 3. Then execute S105h.
S105h. Judge whether the probabilities of the two most likely commodities in result 3 are identical. If not, output the most likely commodity in result 3 as the recognition output; if so, the recognition fails and the user is prompted to change the placement of the commodity.
S105i. Merge the two results by adding the probabilities of identical commodities in the two results, then re-rank the commodities by probability to generate result 3. Then execute S105j.
S105j. Judge whether the probabilities of the two most likely commodities in result 3 are identical. If not, execute S105k; if so, the recognition fails and the user is prompted to change the placement of the commodity.
S105k. Judge whether the probability of the most likely commodity in result 3 exceeds 90%. If so, output the most likely commodity in result 3 as the recognition output; if not, the recognition fails and the user is prompted to change the placement of the commodity.
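The branching of S105a-S105k can be condensed into a few lines of code. The sketch below follows the described flow literally (90% threshold, merging by adding the probabilities of identical commodities, failure when the two top merged probabilities are tied); the Name-to-Score dictionary format of the recognition results is the same assumed format as in the earlier sketch.

```python
def merge(result1, result2):
    """Add the probabilities of identical commodities, then re-rank (result 3)."""
    merged = dict(result1)
    for name, score in result2.items():
        merged[name] = merged.get(name, 0.0) + score
    return dict(sorted(merged.items(), key=lambda kv: kv[1], reverse=True))

def decide(result1, result2, threshold=0.90):
    """Decision flow of S105b-S105k. Returns a commodity number, or None when
    recognition fails and the user should change the commodity placement."""
    top1, p1 = max(result1.items(), key=lambda kv: kv[1])
    top2, p2 = max(result2.items(), key=lambda kv: kv[1])

    if (p1 > threshold) != (p2 > threshold):      # S105c: exactly one result is confident
        return top1 if p1 > threshold else top2

    if p1 > threshold and p2 > threshold:         # S105d/S105f
        if top1 == top2:
            return top1
        result3 = merge(result1, result2)         # S105g/S105h
        scores = sorted(result3.values(), reverse=True)
        if len(scores) > 1 and scores[0] == scores[1]:
            return None                           # recognition fails
        return next(iter(result3))

    result3 = merge(result1, result2)             # S105e/S105i-S105k
    scores = sorted(result3.values(), reverse=True)
    if len(scores) > 1 and scores[0] == scores[1]:
        return None
    top3, p3 = next(iter(result3.items()))
    return top3 if p3 > threshold else None

# Example matching the text: merged A = 0.97 beats merged B = 0.96, so A is output.
print(decide({"A": 0.93, "B": 0.01}, {"B": 0.95, "A": 0.04}))
```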
In the embodiment of the present invention, it is observed that existing methods that use object images for object recognition usually rely on an image taken from a single shooting angle. Because its recognition accuracy is low, this kind of recognition cannot meet the accuracy requirements of commodity identification. To address this problem, the embodiment of the present invention acquires images of the object to be identified from two different shooting angles and identifies the type of the commodity to be identified from both images, thereby improving recognition accuracy. In addition, when setting the shooting angles, in order to capture more appearance features of the object to be identified, the first shooting angle and the second shooting angle are set in the embodiment of the present invention to two shooting angles that are asymmetric in the horizontal direction. If the face of the commodity captured from the first shooting angle is the front, the second shooting angle can then capture a side of the commodity, so that as many appearance features of the object to be identified as possible are acquired, further improving recognition accuracy.
Embodiment two:
An embodiment of the present invention provides a commodity identification device for performing the above commodity recognition method. Fig. 17 shows a possible structural schematic diagram of the commodity identification device provided by the embodiment of the present invention. Specifically, the commodity identification device 20 includes an acquiring unit 201 and a recognition unit 202. Wherein:
The acquiring unit 201 is used to obtain a first image generated by photographing the commodity to be identified from a first shooting angle and a second image generated by photographing the commodity to be identified from a second shooting angle; wherein, relative to the position of the commodity to be identified, the first shooting angle and the second shooting angle are asymmetric in the horizontal direction.
The recognition unit 202 is used to identify the type of the commodity to be identified from the first image and the second image using the preset neural network model.
Optionally, the acquiring unit 201 specifically includes an obtaining subunit 2011 and a preprocessing subunit 2012;
The obtaining subunit 2011 is used to obtain a first original image taken by the photographic device of the commodity to be identified from the first shooting angle and a second original image taken by the photographic device of the commodity to be identified from the second shooting angle;
The preprocessing subunit 2012 is used to preprocess the first original image and the second original image respectively to generate the first image and the second image; wherein, the preprocessing specifically includes: converting the image to grayscale to generate a grayscale image; performing edge sharpening on the grayscale image to generate a sharpened image; performing edge extraction on the sharpened image to generate an edge image; binarizing the edge image to generate a binary image; performing a closing operation on the binary image to generate a closed image; and deleting the background in the closed image and outputting the image with the background removed.
Optionally, deleting the background in the closed image specifically includes: according to a preset method, selecting the contours contained in the closed image with boxes, generating M contour regions; determining the minimum enclosing rectangle of each of the M contour regions, generating M minimum enclosing rectangles; calculating the center position and area of each of the M minimum enclosing rectangles; determining the redundant rectangles among the M minimum enclosing rectangles, the redundant rectangles including at least: minimum enclosing rectangles whose center position is not in a preset region, and minimum enclosing rectangles whose area is smaller than a preset area; generating a target rectangular frame that can enclose the minimum enclosing rectangles among the M minimum enclosing rectangles other than the redundant rectangles; and deleting the image portion outside the target rectangular frame in the closed image.
Optionally, the recognition unit 202 specifically includes a preliminary recognition subunit 2021 and a judgment subunit 2022;
The preliminary recognition subunit 2021 is used to generate, using the preset neural network model, a first recognition result from the first image and a second recognition result from the second image; the first recognition result includes the N most likely commodity types corresponding to the first image and the corresponding probability parameters; the second recognition result includes the N most likely commodity types corresponding to the second image and the corresponding probability parameters;
The judgment subunit 2022 is used to determine, if the probability parameter of one and only one commodity type in the first recognition result and the second recognition result is greater than a first probability threshold, that the commodity to be identified is the commodity type whose probability parameter is greater than the first probability threshold;
The judgment subunit 2022 is further used to, if the probability parameters of two or more commodity types in the first recognition result and the second recognition result are greater than the first probability threshold, merge the probability parameters of identical commodity types in the first recognition result and the second recognition result to generate a third recognition result, and determine that the commodity to be identified is the commodity type with the largest probability parameter in the third recognition result.
The commodity identification device 20 further includes a training unit 203;
The training unit 203 is used to obtain Q images of the commodity to be identified, the Q images of the commodity to be identified including: images of the commodity to be identified taken, on a sphere centered on the position of the commodity to be identified, at every preset angle in the horizontal or vertical direction relative to the center of the sphere; and to train the preset neural network model using the Q images of the commodity to be identified.
For the functions of the modules in the commodity identification device provided in the embodiment of the present invention and the effects produced, reference may be made to the corresponding descriptions in the commodity recognition method of the above embodiment one, and details are not repeated here.
In the case of an integrated unit, Fig. 18 shows a possible structural schematic diagram of the commodity identification device involved in the above embodiment. The commodity identification device 30 includes a processing module 301, a communication module 302, and a storage module 303. The processing module 301 is used to control and manage the actions of the commodity identification device 30; for example, the processing module 301 is used to support the commodity identification device 30 in executing processes S101-S105 in Fig. 2. The communication module 302 is used to support communication between the commodity identification device and other entities. The storage module 303 is used to store the program code and data of the commodity identification device.
Wherein, the processing module 301 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 302 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 303 may be a memory.
When the processing module 301 is the processor shown in Fig. 19, the communication module 302 is the transceiver of Fig. 19, and the storage module 303 is the memory of Fig. 19, the commodity identification device involved in the embodiment of the present application may be the following commodity identification device 40.
As shown in Fig. 19, the commodity identification device 40 includes a processor 401, a transceiver 402, a memory 403, and a bus 404.
Wherein, the processor 401, the transceiver 402, and the memory 403 are connected to each other through the bus 404; the bus 404 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
The processor 401 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the program of the solution of the present application.
The memory 403 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, or may be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the bus, or the memory may be integrated with the processor.
Wherein, the memory 403 is used to store the application program code for executing the solution of the present application, and its execution is controlled by the processor 401. The transceiver 402 is used to receive content input by an external device, and the processor 401 is used to execute the application program code stored in the memory 403, thereby implementing the commodity recognition method described in the embodiments of the present application.
It should be understood that the sequence numbers of the above processes do not imply an order of execution in the various embodiments of the present application; the execution order of the processes should be determined by their functions and internal logic and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by a software program, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium that the computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the art can easily think of changes or substitutions within the technical scope disclosed in the present application, which should all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A commodity recognition method, characterized by comprising:
obtaining a first image generated by photographing a commodity to be identified from a first shooting angle and a second image generated by photographing the commodity to be identified from a second shooting angle; wherein, relative to the position of the commodity to be identified, the first shooting angle and the second shooting angle are asymmetric in the horizontal direction;
identifying the type of the commodity to be identified from the first image and the second image using a preset neural network model.
2. The commodity recognition method according to claim 1, characterized in that obtaining the first image generated by photographing the commodity to be identified from the first shooting angle and the second image generated by photographing the commodity to be identified from the second shooting angle specifically comprises:
obtaining a first original image taken by a photographic device of the commodity to be identified from the first shooting angle, and a second original image taken by the photographic device of the commodity to be identified from the second shooting angle;
preprocessing the first original image and the second original image respectively to generate the first image and the second image;
the preprocessing specifically comprising: converting the image to grayscale to generate a grayscale image; performing edge sharpening on the grayscale image to generate a sharpened image; performing edge extraction on the sharpened image to generate an edge image; binarizing the edge image to generate a binary image; performing a closing operation on the binary image to generate a closed image; and deleting the background in the closed image and outputting the image with the background removed.
3. The commodity recognition method according to claim 2, characterized in that deleting the background in the closed image specifically comprises:
according to a preset method, selecting the contours contained in the closed image with boxes, generating M contour regions;
determining the minimum enclosing rectangle of each of the M contour regions, generating M minimum enclosing rectangles;
calculating the center position and area of each of the M minimum enclosing rectangles;
determining redundant rectangles among the M minimum enclosing rectangles; the redundant rectangles including at least: minimum enclosing rectangles whose center position is not in a preset region, and minimum enclosing rectangles whose area is smaller than a preset area;
generating a target rectangular frame; the target rectangular frame being able to enclose the minimum enclosing rectangles among the M minimum enclosing rectangles other than the redundant rectangles;
deleting the image portion outside the target rectangular frame in the closed image.
4. The commodity recognition method according to claim 1, characterized in that identifying the type of the commodity to be identified from the first image and the second image using the preset neural network model specifically comprises:
generating, using the preset neural network model, a first recognition result from the first image and a second recognition result from the second image; the first recognition result including the N most likely commodity types corresponding to the first image and the corresponding probability parameters; the second recognition result including the N most likely commodity types corresponding to the second image and the corresponding probability parameters;
if the probability parameter of one and only one commodity type in the first recognition result and the second recognition result is greater than a first probability threshold, determining that the commodity to be identified is the commodity type whose probability parameter is greater than the first probability threshold;
the method further comprising:
if the probability parameters of two or more commodity types in the first recognition result and the second recognition result are greater than the first probability threshold, merging the probability parameters of identical commodity types in the first recognition result and the second recognition result to generate a third recognition result; and determining that the commodity to be identified is the commodity type with the largest probability parameter in the third recognition result.
5. The commodity recognition method according to any one of claims 1-4, characterized in that, before identifying the type of the commodity to be identified from the first image and the second image using the preset neural network model, the method further comprises:
obtaining Q images of the commodity to be identified; the Q images of the commodity to be identified comprising: images of the commodity to be identified taken, on a sphere centered on the position of the commodity to be identified, at every preset angle in the horizontal or vertical direction relative to the center of the sphere;
training the preset neural network model using the Q images of the commodity to be identified.
6. A commodity identification device, characterized by comprising:
an acquiring unit for obtaining a first image generated by photographing a commodity to be identified from a first shooting angle and a second image generated by photographing the commodity to be identified from a second shooting angle; wherein, relative to the position of the commodity to be identified, the first shooting angle and the second shooting angle are asymmetric in the horizontal direction;
a recognition unit for identifying the type of the commodity to be identified from the first image and the second image using a preset neural network model.
7. The commodity identification device according to claim 6, characterized in that the acquiring unit specifically comprises an obtaining subunit and a preprocessing subunit;
the obtaining subunit being used to obtain a first original image taken by a photographic device of the commodity to be identified from the first shooting angle and a second original image taken by the photographic device of the commodity to be identified from the second shooting angle;
the preprocessing subunit being used to preprocess the first original image and the second original image respectively to generate the first image and the second image; wherein the preprocessing specifically comprises: converting the image to grayscale to generate a grayscale image; performing edge sharpening on the grayscale image to generate a sharpened image; performing edge extraction on the sharpened image to generate an edge image; binarizing the edge image to generate a binary image; performing a closing operation on the binary image to generate a closed image; and deleting the background in the closed image and outputting the image with the background removed.
8. The commodity identification device according to claim 7, characterized in that deleting the background image in the closed-operation image specifically comprises:
selecting, according to a preset method, the contours contained in the closed-operation image to generate M contour regions;
determining the minimum enclosing rectangle of each of the M contour regions to generate M minimum enclosing rectangles;
calculating the center and area of each of the M minimum enclosing rectangles;
determining the redundant rectangles among the M minimum enclosing rectangles; the redundant rectangles at least include: minimum enclosing rectangles whose centers are not within a preset region and minimum enclosing rectangles whose areas are smaller than a preset area;
generating a target rectangular frame capable of enclosing the minimum enclosing rectangles other than the redundant rectangles among the M minimum enclosing rectangles;
deleting the image portion outside the target rectangular frame in the closed-operation image.
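The sketch below illustrates this background-deletion procedure with OpenCV contour tools. The preset region (here the central 80% of the frame), the minimum area, the axis-aligned bounding rectangle used in place of a general minimum enclosing rectangle, and the function name remove_background are assumptions for illustration, not values taken from the patent.

    import cv2
    import numpy as np

    def remove_background(closed: np.ndarray,
                          preset_region=(0.1, 0.1, 0.9, 0.9),
                          min_area: float = 500.0) -> np.ndarray:
        h, w = closed.shape[:2]
        x0, y0 = preset_region[0] * w, preset_region[1] * h
        x1, y1 = preset_region[2] * w, preset_region[3] * h
        # M contour regions selected from the closed-operation image.
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        kept = []
        for contour in contours:
            x, y, bw, bh = cv2.boundingRect(contour)  # enclosing rectangle
            cx, cy = x + bw / 2.0, y + bh / 2.0       # center
            area = float(bw * bh)                     # area
            # Redundant rectangles: center outside the preset region,
            # or area smaller than the preset area.
            if not (x0 <= cx <= x1 and y0 <= cy <= y1) or area < min_area:
                continue
            kept.append((x, y, x + bw, y + bh))
        if not kept:
            return closed
        # Target rectangular frame enclosing all non-redundant rectangles.
        tx0 = min(r[0] for r in kept); ty0 = min(r[1] for r in kept)
        tx1 = max(r[2] for r in kept); ty1 = max(r[3] for r in kept)
        result = np.zeros_like(closed)
        result[ty0:ty1, tx0:tx1] = closed[ty0:ty1, tx0:tx1]
        return result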
9. The commodity identification device according to claim 6, characterized in that the recognition unit specifically comprises a preliminary recognition subunit and a judgment subunit;
the preliminary recognition subunit is configured to use the default neural network model to generate a first recognition result from the first image and a second recognition result from the second image; the first recognition result including the N most likely commodity types for the first image and their corresponding probability parameters, and the second recognition result including the N most likely commodity types for the second image and their corresponding probability parameters;
the judgment subunit is configured to, if one and only one commodity type in the first recognition result and the second recognition result has a probability parameter greater than a first probability threshold, determine that the commodity to be identified is the commodity type whose probability parameter is greater than the first probability threshold;
the judgment subunit is further configured to, if two or more commodity types in the first recognition result and the second recognition result have probability parameters greater than the first probability threshold, merge the probability parameters of identical commodity types in the first recognition result and the second recognition result to generate a third recognition result, and determine that the commodity to be identified is the commodity type with the largest probability parameter in the third recognition result.
10. The commodity identification device according to any one of claims 6-9, characterized in that the commodity identification device further comprises a training unit;
the training unit is configured to obtain Q images of the commodity to be identified, the Q images of the commodity to be identified comprising images of the commodity to be identified captured on a sphere centered on the position of the commodity to be identified, relative to the center of the sphere, at intervals of a preset angle in the horizontal direction or in the vertical direction; and to train the default neural network model using the Q images of the commodity to be identified.
CN201910110364.8A 2019-02-11 2019-02-11 A kind of commodity recognition method and device Pending CN109816045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910110364.8A CN109816045A (en) 2019-02-11 2019-02-11 A kind of commodity recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910110364.8A CN109816045A (en) 2019-02-11 2019-02-11 A kind of commodity recognition method and device

Publications (1)

Publication Number Publication Date
CN109816045A true CN109816045A (en) 2019-05-28

Family

ID=66606362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910110364.8A Pending CN109816045A (en) 2019-02-11 2019-02-11 A kind of commodity recognition method and device

Country Status (1)

Country Link
CN (1) CN109816045A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005276059A (en) * 2004-03-26 2005-10-06 Victor Co Of Japan Ltd Commodity information providing system
CN101398894A (en) * 2008-06-17 2009-04-01 浙江师范大学 Automobile license plate automatic recognition method and implementing device thereof
CN101945257A (en) * 2010-08-27 2011-01-12 南京大学 Synthesis method for extracting chassis image of vehicle based on monitoring video content
CN102332093A (en) * 2011-09-19 2012-01-25 汉王科技股份有限公司 Identity authentication method and device adopting palmprint and human face fusion recognition
CN104866826A (en) * 2015-05-17 2015-08-26 华南理工大学 Static gesture language identification method based on KNN algorithm and pixel ratio gradient features
CN106875203A (en) * 2015-12-14 2017-06-20 阿里巴巴集团控股有限公司 A kind of method and device of the style information for determining commodity picture
CN106203274A (en) * 2016-06-29 2016-12-07 长沙慧联智能科技有限公司 Pedestrian's real-time detecting system and method in a kind of video monitoring
CN106203397A (en) * 2016-07-26 2016-12-07 江苏鸿信***集成有限公司 Differentiate and localization method based on the form of tabular analysis technology in image
CN106570496A (en) * 2016-11-22 2017-04-19 上海智臻智能网络科技股份有限公司 Emotion recognition method and device and intelligent interaction method and device
CN109299715A (en) * 2017-07-24 2019-02-01 图灵通诺(北京)科技有限公司 The settlement method and device of image recognition technology based on convolutional neural networks
CN109147254A (en) * 2018-07-18 2019-01-04 武汉大学 A kind of video outdoor fire disaster smog real-time detection method based on convolutional neural networks

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069862A (en) * 2019-06-10 2020-12-11 华为技术有限公司 Target detection method and device
CN110443118A (en) * 2019-06-24 2019-11-12 上海了物网络科技有限公司 Commodity recognition method, system and medium based on artificial feature
CN110443118B (en) * 2019-06-24 2021-09-03 上海了物网络科技有限公司 Commodity identification method, system and medium based on artificial features
CN111079575A (en) * 2019-11-29 2020-04-28 拉货宝网络科技有限责任公司 Material identification method and system based on packaging image characteristics
CN111797896A (en) * 2020-06-01 2020-10-20 锐捷网络股份有限公司 Commodity identification method and device based on intelligent baking
CN112967787A (en) * 2021-01-28 2021-06-15 壹健康健康产业(深圳)有限公司 Medicine information input method, device, medium and terminal equipment
CN114549406A (en) * 2022-01-10 2022-05-27 华院计算技术(上海)股份有限公司 Hot rolling line management method, device and system, computing equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109816045A (en) A kind of commodity recognition method and device
CN104268498B (en) A kind of recognition methods of Quick Response Code and terminal
CN108520229A (en) Image detecting method, device, electronic equipment and computer-readable medium
US10885660B2 (en) Object detection method, device, system and storage medium
CN105894464B (en) A kind of medium filtering image processing method and device
US9239948B2 (en) Feature descriptor for robust facial expression recognition
US11521303B2 (en) Method and device for inpainting image
CN112633159B (en) Human-object interaction relation identification method, model training method and corresponding device
CN109693387A (en) 3D modeling method based on point cloud data
CN111008561B (en) Method, terminal and computer storage medium for determining quantity of livestock
CN112101124B (en) Sitting posture detection method and device
CN108986115A (en) Medical image cutting method, device and intelligent terminal
CN111626163A (en) Human face living body detection method and device and computer equipment
CN110765891A (en) Engineering drawing identification method, electronic equipment and related product
CN110796016A (en) Engineering drawing identification method, electronic equipment and related product
JP7121132B2 (en) Image processing method, apparatus and electronic equipment
CN111382638B (en) Image detection method, device, equipment and storage medium
CN109582549A (en) A kind of recognition methods of device type and device
CN111967529B (en) Identification method, device, equipment and system
CN110633630B (en) Behavior identification method and device and terminal equipment
CN108647640A (en) The method and electronic equipment of recognition of face
CN108334869A (en) Selection, face identification method and the device and electronic equipment of face component
KR102196749B1 (en) Method and system for image registering using weighted feature point
CN110097061A (en) A kind of image display method and apparatus
CN111753722B (en) Fingerprint identification method and device based on feature point type

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190528