CN103390282A - Image tagging method and device - Google Patents

Image tagging method and device

Info

Publication number
CN103390282A
CN103390282A
Authority
CN
China
Prior art keywords
marked
image
key point
point
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103255760A
Other languages
Chinese (zh)
Other versions
CN103390282B (en)
Inventor
韩钧宇
都大龙
陶吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201310325576.0A priority Critical patent/CN103390282B/en
Publication of CN103390282A publication Critical patent/CN103390282A/en
Application granted granted Critical
Publication of CN103390282B publication Critical patent/CN103390282B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an image tagging method and device. The method comprises the following steps: displaying an image to be labeled; obtaining a labeled key point set in the image to be labeled; predicting, according to the labeled key point set in the image to be labeled and a preset key point template set, the labeling position of the next key point to be labeled in the image; moving the marker point to the predicted labeling position; and receiving a user adjustment of the marker point position and adding the key point labeled at the actual position to the labeled key point set. The method and device greatly improve labeling efficiency, effectively guarantee labeling accuracy, and yield high-quality collected data.

Description

Image labeling method and device
Technical field
The present application relates to an image labeling method and device, and in particular to a technique for guiding a user through key point labeling by predicting the positions of key points in an image.
Background technology
Key point labeling is the main way of annotating computer vision data and is chiefly used in technical fields such as computer-based detection and recognition. For example, labeling the key points of a human face provides the data foundation for establishing the geometric relationships among those key points; by learning these relationships, a computer can determine whether an image contains a face, thereby realizing face detection. Because different faces yield different key point geometries, the same data can also be used to distinguish between faces, realizing face recognition. In addition, key point labeling can be applied to buildings and other objects with a particular geometric structure.
Key point labeling is therefore significant for technical fields such as detection and recognition. In the prior art, key points are labeled by describing each key point to be labeled and guiding the user to place it with a mouse or another interactive device. This approach is suitable only when the meaning of the key points in the image is easy for the user to understand and the number of key points to label is small; when many key points must be labeled, labeling efficiency drops markedly and labeling quality is hard to guarantee.
Summary of the invention
The object of the present invention is to provide an image labeling method and device that predict the position of the next key point to be labeled from the set of key points the user has already labeled, so as to improve the efficiency and accuracy of key point labeling on an input face image.
According to one aspect of the present invention, an image labeling method is provided, comprising: displaying an image to be labeled; obtaining a labeled key point set in the image to be labeled; predicting, according to the labeled key point set in the image to be labeled and a preset key point template set, the labeling position of the next key point to be labeled in the image; moving the marker point to the predicted labeling position; and receiving a user adjustment of the marker point position and adding the key point labeled at the actual position to the labeled key point set.
Preferably, the step of displaying the image to be labeled further comprises: displaying an example image marked with reference points, the reference points corresponding to the key point template set.
Preferably, the step of predicting the labeling position of the next key point to be labeled in the image comprises: calculating the minimal geometric-transformation coefficients between the labeled key point set and the key point template set; and calculating, from the minimal geometric-transformation coefficients, the labeling position of the next key point to be labeled in the image.
The geometric transformation may be one of the following: Euclidean transformation, similarity transformation, affine transformation, or projective transformation.
Preferably, the step of moving the marker point to the predicted labeling position further comprises: highlighting, in the example image, the reference point corresponding to the next key point to be labeled in the image.
Preferably, connecting lines are displayed between the labeled key points in the image to be labeled; and/or connecting lines are displayed in the example image between the reference points corresponding to the labeled key points in the image to be labeled.
According to another aspect of the present invention, an image labeling device is provided, comprising: a display unit for displaying an image to be labeled; a prediction unit for obtaining a labeled key point set in the image to be labeled and predicting, according to the labeled key point set and a preset key point template set, the labeling position of the next key point to be labeled in the image; a positioning unit for moving the marker point to the predicted labeling position; and a labeling unit for receiving a user adjustment of the marker point position and adding the key point labeled at the actual position to the labeled key point set.
Preferably, the display unit is further configured to display an example image marked with reference points, the reference points corresponding to the key point template set.
Preferably, the prediction unit predicts the labeling position of the next key point to be labeled by calculating the minimal geometric-transformation coefficients between the labeled key point set and the key point template set, and calculating the labeling position of the next key point from those coefficients.
The geometric transformation may be one of the following: Euclidean transformation, similarity transformation, affine transformation, or projective transformation.
Preferably, the display unit is further configured to highlight, in the example image, the reference point corresponding to the next key point to be labeled in the image.
Preferably, the display unit is further configured to display connecting lines between the labeled key points in the image to be labeled, and/or connecting lines in the example image between the reference points corresponding to the labeled key points in the image to be labeled.
Beneficial effects
Compared with the prior art, the present invention has the following advantages:
The present invention not only substantially improves labeling efficiency but also effectively guarantees labeling accuracy. Moreover, because the movement of the marker point is driven by an estimate of the next key point's labeling position, the user need only make a small adjustment to the marker point to complete the labeling of the next key point, which effectively reduces the annotator's workload and improves the user experience.
Brief description of the drawings
The above and other objects and features of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart illustrating an image labeling method according to an exemplary embodiment of the present invention;
Fig. 2 is a structural block diagram illustrating an image labeling device according to an exemplary embodiment of the present invention;
Fig. 3 (a)~(f) are schematic diagrams illustrating the labeling of key points in an image according to an exemplary embodiment of the present invention.
Detailed description
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The main idea of the present invention is that, during image labeling, the position of the marker point is driven by an estimate of the labeling position of the next key point to be labeled, so the user need only make a small adjustment to the marker point to complete the labeling of that key point. This labeling approach not only substantially improves labeling efficiency but also effectively guarantees the accuracy of the labeled positions, yielding high-quality collected image data.
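The interactive flow just described (predict the next position, move the marker there, let the user fine-tune, update the labeled set) can be sketched as a simple loop. This is an illustrative sketch only, not the patent's implementation; `annotate`, `predict_next`, and `get_user_adjustment` are hypothetical stand-ins for the prediction of step S130 and the user interaction of steps S140/S150.

```python
# Hypothetical sketch of the annotation loop: after each confirmed key
# point, the next position is predicted from the points labeled so far
# and a preset template set, so the user only makes small corrections.

def annotate(image, templates, predict_next, get_user_adjustment):
    """Label len(templates) key points on `image`; return their coordinates."""
    labeled = []                      # the labeled key point set
    for i in range(len(templates)):
        if i < 2:
            # Too few points to fit a transform (the minimum depends on
            # the chosen model): fall back to the raw template position.
            guess = templates[i]
        else:
            # Step S130: predict from the labeled points + template set.
            guess = predict_next(labeled, templates)
        # Steps S140/S150: move the marker to the guess, user fine-tunes.
        actual = get_user_adjustment(image, guess)
        labeled.append(actual)        # update the labeled key point set
    return labeled
```

In this sketch the UI callback returns the user-confirmed position, mirroring the confirmation message described for step S150.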
Fig. 1 shows a flowchart of a preferred embodiment of the image labeling method of the present invention. The image labeling device shown in Fig. 2 may be used to implement the method described in Fig. 1.
With reference to Fig. 1, at step S110 the device displays an image to be labeled.
Because the key points to be labeled differ across image types, not every user has a clear understanding of the key points of the image to be labeled, particularly for buildings and other objects with a geometric structure. Providing concrete position information for the key points to be labeled not only gives the annotator a clear picture of how the key points are distributed over the image, but also effectively guides and helps the user label the image correctly.
Therefore, to better guide and help the user label the image correctly, according to a preferred embodiment of the present invention step S110 further comprises: the device displays an example image marked with reference points, the reference points corresponding to the key point template set; each reference point may also be associated with a specific semantic label.
In this way, the user does not need to understand in advance the key points to be labeled: the positions of the reference points marked in the example image suffice to label the key points of the image to be labeled, providing the user with an accurate positional reference throughout the labeling process.
At step S120, the device obtains the labeled key point set in the image to be labeled.
At step S130, the device predicts, according to the labeled key point set in the image to be labeled and a preset key point template set, the labeling position of the next key point to be labeled in the image.
Specifically, according to an optional embodiment of the present invention, the device predicts the labeling position of the next key point to be labeled as follows:
the device calculates the minimal geometric-transformation coefficients between the labeled key point set and the key point template set;
the device calculates, from the minimal geometric-transformation coefficients, the labeling position of the next key point to be labeled in the image.
The geometric transformation may be one of the following: Euclidean transformation, similarity transformation, affine transformation, or projective transformation.
In a concrete implementation, suppose the user has labeled n key points in the image to be labeled. The labeled key point set obtained by the device can be expressed as:
{(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}
When the user prepares to label the (n+1)-th key point, the labeled key point set {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)} and the first n pre-stored templates in the key point template set {(X_1, Y_1), (X_2, Y_2), …, (X_n, Y_n)} form n pairs of corresponding position coordinates. Any of the geometric-transformation models listed above can then be fitted to these point pairs to obtain the corresponding transformation parameters. Once the parameters of the chosen model are obtained, the transformation can be applied to the position coordinates (X_{n+1}, Y_{n+1}) of the pre-stored (n+1)-th key point template to estimate and predict the position coordinates (x_{n+1}, y_{n+1}) of the (n+1)-th key point in the image to be labeled, i.e., the labeling position of the next key point to be labeled.
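Assuming the affine transformation is chosen as the geometric model, the parameter fitting and prediction described above can be sketched as an ordinary least-squares solve over the n point pairs. This is an illustrative sketch under that assumption, not the patent's exact computation; the function name `predict_next_keypoint` and the use of `numpy.linalg.lstsq` are illustrative choices.

```python
import numpy as np

def predict_next_keypoint(labeled, template):
    """Fit the least-squares affine transform mapping the first n template
    points onto the n labeled points, then apply it to template point n+1.

    labeled  : list of n (x, y) points already labeled (n >= 3 for affine).
    template : list of (X, Y) template points with len(template) > n.
    """
    n = len(labeled)
    src = np.asarray(template[:n], dtype=float)   # (X_i, Y_i)
    dst = np.asarray(labeled, dtype=float)        # (x_i, y_i)
    # Homogeneous design matrix [X Y 1]; solve A @ p ~= dst in the
    # least-squares sense for the 3x2 affine parameter matrix p.
    A = np.hstack([src, np.ones((n, 1))])
    p, *_ = np.linalg.lstsq(A, dst, rcond=None)
    X1, Y1 = template[n]                          # template point n+1
    x1, y1 = np.array([X1, Y1, 1.0]) @ p          # predicted (x_{n+1}, y_{n+1})
    return (x1, y1)
```

For example, if the labeled points are exactly the template points shifted by (10, 5), the fitted transform is that translation and the predicted position is the next template point shifted by (10, 5). A Euclidean, similarity, or projective model would be fitted analogously, with correspondingly fewer or more parameters.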
At step S140, the device moves the marker point to the predicted labeling position.
Performing step S140 makes the interactive labeling process more intelligent: the user no longer has to drag the marker point across large distances to find each key point position in the image, which effectively improves key point labeling efficiency.
At step S150, the device receives a user adjustment of the marker point position and adds the key point labeled at the actual position to the labeled key point set.
Because the labeling position of the next key point estimated by the device at step S130 may deviate somewhat from the actual position to be labeled, and because this deviation is small, the user only needs to move the marker point within a small neighborhood of the predicted position to label a more accurate position point (x, y). After the device receives the labeling confirmation message for the marker point, it adds the position coordinates carried in that message to the labeled key point set as key point (x_{n+1}, y_{n+1}). The labeled key point set is then updated to:
{(x_1, y_1), (x_2, y_2), …, (x_n, y_n), (x_{n+1}, y_{n+1})}
As can be seen, performing the steps above ensures that the present invention collects more accurate, higher-quality key point position data, which provides a reliable basis of comparison for subsequent computer-based detection and recognition.
In addition, because the user must consult the reference point corresponding to the position being labeled in the example image while adjusting the marker point, the device should help the user locate that reference point immediately rather than waste time searching for it. According to another preferred embodiment of the present invention, step S140 therefore further comprises: the device highlights, in the example image, the reference point corresponding to the next key point to be labeled, and may prompt the user with the specific semantic label associated with that reference point.
In addition, to help the user understand and better present the positions of the key points to be labeled, according to another preferred embodiment of the present invention, the device displays connecting lines between the labeled key points in the image to be labeled, and/or connecting lines between the corresponding reference points in the example image. Displaying connecting lines between the labeled key points helps the user understand the positional relationships among the key points during labeling, particularly when labeling images of buildings or other objects with a geometric structure, and thus locate the next key point more accurately, guaranteeing the accuracy of the labeled data. Likewise, displaying connecting lines between the corresponding reference points in the example image helps the user understand the positional relationships among the key points.
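As a sketch of the connecting-line display, the segments linking consecutively labeled key points might be computed as below and handed to the UI layer for rendering on both the image to be labeled and the example image. The function `connecting_segments` and the closed-polygon option are illustrative assumptions, not part of the patent.

```python
def connecting_segments(points, close=True):
    """Return the line segments linking consecutively labeled key points,
    as ((x0, y0), (x1, y1)) pairs, optionally closing the polygon.  A UI
    layer would draw these segments over the image (and mirror them on
    the example image's reference points)."""
    segs = [(points[i], points[i + 1]) for i in range(len(points) - 1)]
    if close and len(points) > 2:
        # Close the outline only when three or more points form a polygon.
        segs.append((points[-1], points[0]))
    return segs
```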
A specific embodiment (see Fig. 3) is given below to illustrate the present invention.
Fig. 3 (a) shows an example image marked with 4 reference points: 1) the left eye center, 2) the right eye center, 3) the left mouth corner, and 4) the right mouth corner.
Fig. 3 (b) shows the example image with connecting lines displayed between the marked reference points.
The concrete labeling flow is as follows:
(1) Suppose that, after labeling points 1) to 3) in the image to be labeled, the user prepares to label point 4). The device highlights reference point 4) in the example image to prompt the user with its position (see Fig. 3 (c));
(2) at the same time, the device prompts the user: "Please label point 4), the right mouth corner";
(3) meanwhile, on the corresponding image to be labeled, the device renders the key points 1)~3) that the user has already labeled as yellow rectangular dots and moves the red cross marker point to point 4), the point about to be labeled (see Fig. 3 (d));
(4) finally, the user adjusts the red cross marker point to fix the actual position of key point 4), completing the labeling of this key point (see Fig. 3 (e)).
Fig. 3 (f) shows that the image to be labeled may also contain auxiliary connecting lines to assist the user in labeling.
Fig. 2 shows a structural block diagram of a preferred embodiment of the image labeling device of the present invention.
With reference to Fig. 2, the device comprises at least a display unit 201, a prediction unit 202, a positioning unit 203, and a labeling unit 204.
The display unit 201 is configured to display an image to be labeled;
the prediction unit 202 is configured to obtain the labeled key point set in the image to be labeled and to predict, according to the labeled key point set and a preset key point template set, the labeling position of the next key point to be labeled in the image;
the positioning unit 203 is configured to move the marker point to the predicted labeling position;
the labeling unit 204 is configured to receive a user adjustment of the marker point position and to add the key point labeled at the actual position to the labeled key point set.
To better guide and help the user label the image correctly, according to a preferred embodiment of the present invention, the display unit 201 is further configured to display an example image marked with reference points, the reference points corresponding to the key point template set; the reference points in the example image may also be associated with specific semantic labels.
Specifically, according to an optional embodiment of the present invention, the prediction unit 202 predicts the labeling position of the next key point to be labeled by:
calculating the minimal geometric-transformation coefficients between the labeled key point set and the key point template set; and
calculating, from the minimal geometric-transformation coefficients, the labeling position of the next key point to be labeled in the image.
The geometric transformation may be one of the following: Euclidean transformation, similarity transformation, affine transformation, or projective transformation.
In addition, to better guide and help the user label the image correctly, according to another preferred embodiment of the present invention, the display unit is further configured to highlight, in the example image, the reference point corresponding to the next key point to be labeled, and may prompt the user with the specific semantic label associated with that reference point.
In addition, to help the user understand and better present the positions of the key points to be labeled, according to another optional embodiment of the present invention, the display unit is further configured to display connecting lines between the labeled key points in the image to be labeled, and/or connecting lines in the example image between the reference points corresponding to the labeled key points.
As can be seen, compared with the prior art, the present invention not only substantially improves labeling efficiency but also effectively guarantees labeling accuracy. Moreover, because the movement of the marker point is driven by an estimate of the next key point's labeling position, the user need only make a small adjustment to the marker point to complete the labeling of the next key point, which effectively reduces the annotator's workload and improves the user experience.
It should be noted that, depending on the needs of a given implementation, each step described in this application may be split into more steps, and two or more steps, or parts of steps, may be combined into new steps, so as to achieve the object of the present invention.
The above method according to the present invention may be realized in hardware or firmware, implemented as software or computer code storable in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or implemented as computer code originally stored in a remote recording medium or non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the method described here can be processed by such software, stored on a recording medium, using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware (such as an ASIC or FPGA). It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes a storage component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the processing method described here. Furthermore, when a general-purpose computer accesses code for carrying out the processing shown here, the execution of that code converts the general-purpose computer into a special-purpose computer for carrying out that processing.
Although the present invention has been shown and described with reference to preferred embodiments, it should be understood by those skilled in the art that various modifications and variations may be made to these embodiments without departing from the spirit and scope of the present invention as defined by the claims.

Claims (12)

1. An image labeling method, comprising:
displaying an image to be labeled;
obtaining a labeled key point set in the image to be labeled;
predicting, according to the labeled key point set in the image to be labeled and a preset key point template set, the labeling position of the next key point to be labeled in the image;
moving the marker point to the predicted labeling position; and
receiving a user adjustment of the marker point position, and adding the key point labeled at the actual position to the labeled key point set.
2. The method of claim 1, wherein the step of displaying the image to be labeled further comprises:
displaying an example image marked with reference points, the reference points corresponding to the key point template set.
3. The method of claim 2, wherein the step of predicting the labeling position of the next key point to be labeled comprises:
calculating the minimal geometric-transformation coefficients between the labeled key point set and the key point template set; and
calculating, from the minimal geometric-transformation coefficients, the labeling position of the next key point to be labeled in the image.
4. The method of claim 3, wherein the geometric transformation is one of the following: Euclidean transformation, similarity transformation, affine transformation, or projective transformation.
5. The method of claim 4, wherein the step of moving the marker point to the predicted labeling position further comprises:
highlighting, in the example image, the reference point corresponding to the next key point to be labeled in the image.
6. The method of claim 5, wherein connecting lines are displayed between the labeled key points in the image to be labeled; and/or connecting lines are displayed in the example image between the reference points corresponding to the labeled key points in the image to be labeled.
7. An image labeling device, comprising:
a display unit for displaying an image to be labeled;
a prediction unit for obtaining a labeled key point set in the image to be labeled and predicting, according to the labeled key point set in the image to be labeled and a preset key point template set, the labeling position of the next key point to be labeled in the image;
a positioning unit for moving the marker point to the predicted labeling position; and
a labeling unit for receiving a user adjustment of the marker point position, and adding the key point labeled at the actual position to the labeled key point set.
8. The device of claim 7, wherein the display unit is further configured to display an example image marked with reference points, the reference points corresponding to the key point template set.
9. The device of claim 8, wherein the prediction unit predicts the labeling position of the next key point to be labeled by:
calculating the minimal geometric-transformation coefficients between the labeled key point set and the key point template set; and
calculating, from the minimal geometric-transformation coefficients, the labeling position of the next key point to be labeled in the image.
10. The device of claim 9, wherein the geometric transformation is one of the following: Euclidean transformation, similarity transformation, affine transformation, or projective transformation.
11. The device of claim 10, wherein the display unit is further configured to highlight, in the example image, the reference point corresponding to the next key point to be labeled in the image.
12. The device of claim 11, wherein the display unit is further configured to display connecting lines between the labeled key points in the image to be labeled, and/or connecting lines in the example image between the reference points corresponding to the labeled key points.
CN201310325576.0A 2013-07-30 2013-07-30 Image labeling method and device thereof Active CN103390282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310325576.0A CN103390282B (en) 2013-07-30 2013-07-30 Image labeling method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310325576.0A CN103390282B (en) 2013-07-30 2013-07-30 Image labeling method and device thereof

Publications (2)

Publication Number Publication Date
CN103390282A true CN103390282A (en) 2013-11-13
CN103390282B CN103390282B (en) 2016-04-13

Family

ID=49534541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310325576.0A Active CN103390282B (en) 2013-07-30 2013-07-30 Image labeling method and device thereof

Country Status (1)

Country Link
CN (1) CN103390282B (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104407876A (en) * 2014-12-15 2015-03-11 北京国双科技有限公司 Method and device for displaying labeling control element
CN105184283A (en) * 2015-10-16 2015-12-23 天津中科智能识别产业技术研究院有限公司 Method and system for marking key points in human face images
CN105354531A (en) * 2015-09-22 2016-02-24 成都通甲优博科技有限责任公司 Marking method for facial key points
CN105389326A (en) * 2015-09-16 2016-03-09 中国科学院计算技术研究所 Image annotation method based on weak matching probability canonical correlation model
CN107004136A (en) * 2014-08-20 2017-08-01 北京市商汤科技开发有限公司 For the method and system for the face key point for estimating facial image
CN107967699A (en) * 2016-10-19 2018-04-27 财团法人资讯工业策进会 Visual positioning device and method
CN108876858A (en) * 2018-07-06 2018-11-23 北京字节跳动网络技术有限公司 Method and apparatus for handling image
CN108876934A (en) * 2017-12-20 2018-11-23 北京旷视科技有限公司 Key point mask method, device and system and storage medium
CN108961149A (en) * 2017-05-27 2018-12-07 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN110147701A (en) * 2018-06-27 2019-08-20 腾讯科技(深圳)有限公司 Key point mask method, device, computer equipment and storage medium
CN110210526A (en) * 2019-05-14 2019-09-06 广州虎牙信息科技有限公司 Predict method, apparatus, equipment and the storage medium of the key point of measurand
CN110264523A (en) * 2019-06-25 2019-09-20 亮风台(上海)信息科技有限公司 A kind of method and apparatus of the location information of target image in determining test image
CN110335251A (en) * 2019-05-31 2019-10-15 上海联影智能医疗科技有限公司 Quantization device, method, equipment and the storage medium of image analysis method
CN111191708A (en) * 2019-12-25 2020-05-22 浙江省北大信息技术高等研究院 Automatic sample key point marking method, device and system
CN111260613A (en) * 2020-01-10 2020-06-09 丽水正阳电力建设有限公司 Image labeling method based on machine learning model
WO2020238623A1 (en) * 2019-05-29 2020-12-03 腾讯科技(深圳)有限公司 Image labeling method, labeling display method, apparatus, device, and storage medium
CN112233207A (en) * 2020-10-16 2021-01-15 北京字跳网络技术有限公司 Image processing method, device, equipment and computer readable medium
CN112613448A (en) * 2020-12-28 2021-04-06 北京的卢深视科技有限公司 Face data labeling method and system
CN113052369A (en) * 2021-03-15 2021-06-29 北京农业智能装备技术研究中心 Intelligent agricultural machinery operation management method and system
CN114466218A (en) * 2022-02-18 2022-05-10 广州方硅信息技术有限公司 Live video character tracking method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1936925A (en) * 2006-10-12 2007-03-28 上海交通大学 Method for judging characteristic point place using Bayes network classification device image
JP2011053587A (en) * 2009-09-04 2011-03-17 Sharp Corp Image processing device
CN103077368A (en) * 2011-10-25 2013-05-01 上海银晨智能识别科技有限公司 Method and device for positioning mouth part of human face image as well as method and system for recognizing mouth shape
CN103218603A (en) * 2013-04-03 2013-07-24 哈尔滨工业大学深圳研究生院 Face automatic labeling method and system

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107004136A (en) * 2014-08-20 2017-08-01 北京市商汤科技开发有限公司 For the method and system for the face key point for estimating facial image
CN107004136B (en) * 2014-08-20 2018-04-17 北京市商汤科技开发有限公司 Method and system for the face key point for estimating facial image
CN104407876B (en) * 2014-12-15 2018-07-13 北京国双科技有限公司 The method and device of display mark control
CN104407876A (en) * 2014-12-15 2015-03-11 北京国双科技有限公司 Method and device for displaying labeling control element
CN105389326A (en) * 2015-09-16 2016-03-09 中国科学院计算技术研究所 Image annotation method based on weak matching probability canonical correlation model
CN105354531A (en) * 2015-09-22 2016-02-24 成都通甲优博科技有限责任公司 Marking method for facial key points
CN105184283A (en) * 2015-10-16 2015-12-23 天津中科智能识别产业技术研究院有限公司 Method and system for marking key points in human face images
CN107967699A (en) * 2016-10-19 2018-04-27 财团法人资讯工业策进会 Visual positioning device and method
CN108961149B (en) * 2017-05-27 2022-01-07 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN108961149A (en) * 2017-05-27 2018-12-07 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN108876934A (en) * 2017-12-20 2018-11-23 北京旷视科技有限公司 Key point mask method, device and system and storage medium
CN108876934B (en) * 2017-12-20 2022-01-28 北京旷视科技有限公司 Key point marking method, device and system and storage medium
CN110147701A (en) * 2018-06-27 2019-08-20 腾讯科技(深圳)有限公司 Key point mask method, device, computer equipment and storage medium
CN108876858A (en) * 2018-07-06 2018-11-23 北京字节跳动网络技术有限公司 Method and apparatus for handling image
CN110210526A (en) * 2019-05-14 2019-09-06 广州虎牙信息科技有限公司 Predict method, apparatus, equipment and the storage medium of the key point of measurand
WO2020238623A1 (en) * 2019-05-29 2020-12-03 腾讯科技(深圳)有限公司 Image labeling method, labeling display method, apparatus, device, and storage medium
CN110335251A (en) * 2019-05-31 2019-10-15 上海联影智能医疗科技有限公司 Quantization device, method, equipment and the storage medium of image analysis method
CN110335251B (en) * 2019-05-31 2021-09-17 上海联影智能医疗科技有限公司 Quantization apparatus, method, device and storage medium for image analysis method
CN110264523B (en) * 2019-06-25 2021-06-18 亮风台(上海)信息科技有限公司 Method and equipment for determining position information of target image in test image
CN110264523A (en) * 2019-06-25 2019-09-20 亮风台(上海)信息科技有限公司 A kind of method and apparatus of the location information of target image in determining test image
CN111191708A (en) * 2019-12-25 2020-05-22 浙江省北大信息技术高等研究院 Automatic sample key point marking method, device and system
CN111260613A (en) * 2020-01-10 2020-06-09 丽水正阳电力建设有限公司 Image labeling method based on machine learning model
CN112233207A (en) * 2020-10-16 2021-01-15 北京字跳网络技术有限公司 Image processing method, device, equipment and computer readable medium
CN112613448A (en) * 2020-12-28 2021-04-06 北京的卢深视科技有限公司 Face data labeling method and system
CN112613448B (en) * 2020-12-28 2021-12-28 北京的卢深视科技有限公司 Face data labeling method and system
CN113052369A (en) * 2021-03-15 2021-06-29 北京农业智能装备技术研究中心 Intelligent agricultural machinery operation management method and system
CN113052369B (en) * 2021-03-15 2024-05-10 北京农业智能装备技术研究中心 Intelligent agricultural machinery operation management method and system
CN114466218A (en) * 2022-02-18 2022-05-10 广州方硅信息技术有限公司 Live video character tracking method, device, equipment and storage medium
CN114466218B (en) * 2022-02-18 2024-04-23 广州方硅信息技术有限公司 Live video character tracking method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN103390282B (en) 2016-04-13

Similar Documents

Publication Publication Date Title
CN103390282B (en) Image labeling method and device thereof
JP6455113B2 (en) Object tracking method and apparatus
CN104794733A (en) Object tracking method and device
US11126820B2 (en) Generating object embeddings from images
JP6820967B2 (en) Indoor positioning systems and methods based on geomagnetic signals combined with computer vision
US20150310617A1 (en) Display control device and display control method
CN103873288A (en) System and method for positioning failures of communication network equipment based on alarm information
US11972578B2 (en) Method and system for object tracking using online training
WO2020140749A1 (en) Queuing recommendation method and device, terminal, and computer readable storage medium
EP2733670A1 (en) Apparatus and method for generating depth information
CN111126209B (en) Lane line detection method and related equipment
KR20200039853A (en) Lane Estimation Method using a Vector Map and Camera for Autonomous Driving Vehicle
KR20170103859A (en) Information pushing method and apparatus
WO2024041464A1 (en) Loop-back path prediction method and apparatus, nonvolatile storage medium, and processor
CN108521809A (en) Obstacle information reminding method, system, unit and recording medium
CN103620621A (en) Method and apparatus for face tracking utilizing integral gradient projections
JP2023064093A (en) Traffic mark detection method and method of training traffic mark detection model
KR20100041172A (en) Method for tracking a movement of a moving target of image tracking apparatus
CN108965861B (en) Method and device for positioning camera, storage medium and intelligent interaction equipment
CN103685975A (en) Video playing system and method
CN111310595B (en) Method and device for generating information
CN110738169B (en) Traffic flow monitoring method, device, equipment and computer readable storage medium
JP2002133423A (en) New registration device for face image database and recording medium
CN116033544A (en) Indoor parking lot positioning method, computer device, storage medium and program product
CN114492573A (en) Map matching method and device based on relaxation iteration semantic features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant