CN110119457B - Method and apparatus for generating information

Info

Publication number
CN110119457B
Authority
CN
China
Prior art keywords
inner contour, key point, contour key, keypoint, image
Legal status: Active
Application number
CN201910412440.0A
Other languages
Chinese (zh)
Other versions
CN110119457A (en)
Inventor
卢艺帆 (Lu Yifan)
Current Assignee
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910412440.0A
Publication of CN110119457A
Application granted
Publication of CN110119457B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information


Abstract

The embodiments of the present disclosure disclose a method and an apparatus for generating information. One embodiment of the method comprises: displaying an acquired annotated image including a lip region; in response to detecting a user selection operation, adjusting a first inner contour key point and a second inner contour key point indicated by the user selection operation to a target position; in response to detecting a user adjustment operation for the first inner contour key point and the second inner contour key point indicated by the user selection operation, adjusting them to the position indicated by the user adjustment operation; and generating coordinates of the positions in the annotated image where the at least one first inner contour key point and the at least one second inner contour key point are currently located. This embodiment reduces the amount of computation needed to label key points for the inner contour of the lip region.

Description

Method and apparatus for generating information
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for generating information.
Background
In many application scenarios, it is necessary to mark key points on a lip region included in an image. When the lips shown in the image are in a closed state, the inner contour of the upper lip region and the inner contour of the lower lip region may be considered to coincide (as shown in fig. 3) and may therefore be represented by the same curve. At this time, it is often necessary to adjust the key points labeled for both the inner contour of the upper lip and the inner contour of the lower lip onto that curve. In one related approach, the curve first needs to be extracted from the image, and the key points labeled for the inner contours of the upper and lower lips are then adjusted onto the extracted curve.
Disclosure of Invention
Embodiments of the present disclosure propose methods and apparatuses for generating information.
In a first aspect, an embodiment of the present disclosure provides a method for generating information, the method including: displaying the acquired labeled image including a lip region, wherein the lip region includes an upper lip region and a lower lip region, the upper lip region is labeled with at least one first inner contour key point, and the lower lip region is labeled with at least one second inner contour key point; in response to detecting a user selection operation, adjusting a first inner contour key point and a second inner contour key point indicated by the user selection operation to a target position; in response to detecting a user adjustment operation for the first inner contour key point and the second inner contour key point indicated by the user selection operation, adjusting them to a position indicated by the user adjustment operation; and generating coordinates of the positions in the labeled image where the at least one first inner contour key point and the at least one second inner contour key point are currently located.
In some embodiments, the at least one first inner contour key point and the at least one second inner contour key point are in one-to-one correspondence; and the adjusting of the first inner contour key point and the second inner contour key point indicated by the user selection operation to the target position includes: adjusting the corresponding first inner contour key point and second inner contour key point indicated by the user selection operation to the target position.
In some embodiments, the target position is a position of a preset point on a straight line passing through the first inner contour key point and the second inner contour key point indicated by the user selection operation.
In some embodiments, after the adjusting the first inner contour keypoint and the second inner contour keypoint indicated by the user selection operation to the position indicated by the user adjustment operation, the method further includes: in response to detecting the user identification operation, generating identification information for identifying visibility of the at least one first inner contour keypoint and the at least one second inner contour keypoint based on the user identification operation.
In some embodiments, before the displaying the acquired annotated image including the lip region, the method further comprises: acquiring an image to be marked including a lip region and coordinate information to be marked, wherein the coordinate information to be marked comprises initial coordinates for marking at least one first inner contour key point and at least one second inner contour key point; and marking at least one first inner contour key point and at least one second inner contour key point in the image to be marked based on the coordinate information to be marked to obtain a marked image.
In some embodiments, the above method further comprises: displaying, in the labeled image, the coordinates of the positions where the at least one first inner contour key point and the at least one second inner contour key point are currently located.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating information, the apparatus including: the first display unit is configured to display the acquired labeled image comprising the lip region, wherein the lip region comprises an upper lip region and a lower lip region, the upper lip region is labeled with at least one first inner contour key point, and the lower lip region is labeled with at least one second inner contour key point; a first detection unit configured to adjust a first inner contour key point and a second inner contour key point indicated by a user selection operation to a target position in response to detecting the user selection operation; a second detection unit configured to adjust the first inner contour key point and the second inner contour key point indicated by the user selection operation to the positions indicated by the user adjustment operation in response to detecting the user adjustment operation for the first inner contour key point and the second inner contour key point indicated by the user selection operation; a first generating unit configured to generate coordinates of a position in the annotated image where the at least one first inner contour keypoint and the at least one second inner contour keypoint are currently located.
In some embodiments, the at least one first inner contour keypoint and the at least one second inner contour keypoint are in one-to-one correspondence; and the first detection unit is further configured to: adjust the corresponding first inner contour key point and second inner contour key point indicated by the user selection operation to the target position.
In some embodiments, the target position is a position of a preset point on a straight line passing through the first inner contour key point and the second inner contour key point indicated by the user selection operation.
In some embodiments, the above apparatus further comprises: a third detection unit configured to generate, in response to detecting the user identification operation, identification information for identifying visibility of the at least one first inner contour keypoint and the at least one second inner contour keypoint based on the user identification operation.
In some embodiments, the above apparatus further comprises: an acquisition unit configured to acquire an image to be annotated including a lip region and coordinate information to be annotated, wherein the coordinate information to be annotated includes initial coordinates for labeling the at least one first inner contour key point and the at least one second inner contour key point; and a labeling unit configured to label the at least one first inner contour key point and the at least one second inner contour key point in the image to be annotated based on the coordinate information to be annotated, to obtain the annotated image.
In some embodiments, the above apparatus further comprises: and the second display unit is configured to display the coordinates of the current positions of the at least one first inner contour key point and the at least one second inner contour key point in the labeled image.
In a third aspect, an embodiment of the present disclosure provides a terminal, including: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in any of the implementations of the first aspect.
The method and the device for generating information provided by the embodiments of the present disclosure may first display the obtained labeled image including the lip region, then, after detecting a user selection operation, may adjust a first inner contour key point and a second inner contour key point indicated by the user selection operation to a target position, then, after detecting the user adjustment operation, may adjust the first inner contour key point and the second inner contour key point indicated by the user selection operation to a position indicated by the user adjustment operation, and finally, may generate coordinates of a position in the labeled image where the at least one first inner contour key point and the at least one second inner contour key point are currently located. Thereby reducing the amount of computation for labeling key points for the inner contour of the lip region.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for generating information, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for generating information in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a method for generating information according to the present disclosure;
FIG. 5 is a schematic diagram of another application scenario of a method for generating information according to the present disclosure;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for generating information according to the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary architecture 100 to which the method for generating information or the apparatus for generating information of the present disclosure may be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal device 101 interacts with the server 103 through the network 102 to receive or transmit messages and the like. Various communication client applications, such as a web browser application, a key point annotation tool, etc., may be installed on the terminal device 101.
The terminal device 101 may be hardware or software. When it is hardware, it may be any of various electronic devices having a display screen and supporting key point annotation, including but not limited to a smart phone, a tablet computer, an e-book reader, a laptop portable computer, a desktop computer, and the like. When it is software, it may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 103 may be a server providing various services, for example a background server for the key point annotation tool on the terminal device 101. As an example, the background server may store a large number of annotated images including lip regions in advance; the terminal device may then obtain annotated images from the background server and process them to obtain processed data. Optionally, the terminal device may further feed the processed data back to the background server.
It should be noted that the annotated images including lip regions may also be stored directly and locally on the terminal device 101, and the terminal device 101 may directly extract and process the locally stored annotated images; in this case, the server 103 may be absent.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It is noted that the method for generating information provided by the embodiment of the present disclosure is generally performed by the terminal device 101, and accordingly, the apparatus for generating information is generally disposed in the terminal device 101.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating information in accordance with the present disclosure is shown. The method for generating information comprises the following steps:
step 201, displaying the obtained annotated image including the lip region.
In the present embodiment, the execution subject of the method for generating information (e.g., the terminal device 101 shown in fig. 1) may acquire an annotated image including a lip region locally or from a communicatively connected server (e.g., the server 103 shown in fig. 1). The execution body may then display the acquired annotated image.
The annotated image is generally an image that includes a lip region and has been labeled with key points in advance. Here, the lip region may be a region in which a person's lips are displayed. The lip region may in turn include an upper lip region and a lower lip region. It will be appreciated that the upper lip region is typically the region where the person's upper lip is displayed, and the lower lip region the region where the person's lower lip is displayed.
In practice, the upper lip region may be labeled with at least one first inner contour keypoint and the lower lip region may be labeled with at least one second inner contour keypoint. Wherein the first inner contour keypoint may be a keypoint for labeling the inner contour of the upper lip region and the second inner contour keypoint may be a keypoint for labeling the inner contour of the lower lip region.
It should be noted that in some application scenarios, the lip region may be further marked with other key points besides the at least one first inner contour key point and the at least one second inner contour key point (for example, key points for marking the outer contour of the upper lip region and the lower lip region). In some application scenarios, other regions (e.g., background region, region with nose displayed) may also be included in the annotated image.
In some optional implementations of the embodiment, before displaying the acquired labeled image, the executing body may further perform the following steps.
Step S1: acquire the image to be annotated including the lip region and the coordinate information to be annotated.
The execution body may obtain the image to be annotated and the coordinate information to be annotated from a communicatively connected server or from local storage. The image to be annotated is generally an image that includes a lip region and has not yet been labeled with key points. The coordinate information to be annotated may include initial coordinates for labeling key points in the image to be annotated, for example, initial coordinates for labeling the at least one first inner contour key point and the at least one second inner contour key point. It is to be understood that, in some application scenarios, the coordinate information to be annotated may further include coordinates for labeling other key points, for example, key points for labeling the outer contours of the upper lip region and the lower lip region.
Step S2: based on the coordinate information to be annotated, label at least one first inner contour key point and at least one second inner contour key point in the image to be annotated to obtain the annotated image.
After acquiring the image to be annotated and the coordinate information to be annotated, the execution body can label key points in the acquired image according to the acquired coordinate information, thereby obtaining the annotated image.
As an example, the executing entity may label the at least one first inner contour key point and the at least one second inner contour key point in a lip region included in the image to be labeled according to an initial coordinate included in the coordinate information to be labeled, so as to obtain the labeled image.
As another example, the executing body may further mark other key points in the image to be marked according to an initial coordinate included in the coordinate information to be marked, so as to obtain the marked image.
In these implementations, the acquired image to be annotated can be labeled with key points according to the acquired coordinate information to be annotated, yielding the annotated image.
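As a rough illustration of steps S1 and S2, the following Python sketch shows one way the annotation step could be organized. All names (AnnotatedImage, label_image) and the dictionary layout of the coordinate information are assumptions made for exposition; the embodiments do not prescribe any particular data format.

```python
# Minimal sketch of steps S1-S2. All names and the dictionary layout are
# illustrative assumptions; the embodiments do not fix a data format.
from dataclasses import dataclass, field

@dataclass
class AnnotatedImage:
    """An image plus the inner-contour keypoints annotated on it."""
    pixels: object  # the image data, however it was loaded
    first_inner: dict = field(default_factory=dict)   # upper-lip inner contour
    second_inner: dict = field(default_factory=dict)  # lower-lip inner contour

def label_image(pixels, coords_to_label):
    """Step S2: place the keypoints at their initial coordinates."""
    annotated = AnnotatedImage(pixels)
    for i, (x, y) in enumerate(coords_to_label["first_inner"]):
        annotated.first_inner[i] = (x, y)
    for i, (x, y) in enumerate(coords_to_label["second_inner"]):
        annotated.second_inner[i] = (x, y)
    return annotated
```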
Step 202, in response to detecting the user selection operation, adjusting the first inner contour key point and the second inner contour key point indicated by the user selection operation to the target positions.
In this embodiment, after displaying the labeled image, in response to detecting the user selection operation, the executing entity may adjust both the first inner contour key point and the second inner contour key point indicated by the user selection operation to the target position. It will be appreciated that after adjustment to the target position, the first and second inner contour keypoints indicated by the user selection operation overlap. In practice, the execution body may detect the user selection operation through an interface running thereon.
The user selection operation may be an operation by which the user selects a first inner contour key point and a second inner contour key point. The target position may be determined according to actual conditions; it may be, for example, the position of the selected first inner contour key point in the labeled image, or the position of the selected second inner contour key point in the labeled image.
As an example, in response to detecting a user selection operation, the executing entity may determine a location of a selected first inner contour keypoint in the annotated image, and may then adjust a selected second inner contour keypoint to the location of the selected first inner contour keypoint. Here, the target position is a position of the selected first inner contour key point in the labeled image.
As yet another example, in response to detecting a user selection operation, the executing entity may determine a location of the selected second inner contour keypoint in the annotated image, and may then adjust the selected first inner contour keypoint to the location of the selected second inner contour keypoint. Here, the target position is a position of the selected second inner contour key point in the labeled image.
In some optional implementations of the present embodiment, the target position may also be a position of a preset point on a straight line passing through the selected first inner contour key point and the second inner contour key point. The preset point is located on a straight line where the selected first inner contour key point and the selected second inner contour key point are located, and the distance between the preset point and the midpoint of the selected first inner contour key point and the selected second inner contour key point is smaller than or equal to the preset distance. It can be understood that when the preset distance is zero, the preset point is the midpoint of the selected first inner contour key point and the second inner contour key point.
At this time, the execution body may respectively determine positions of the selected first inner contour key point and the second inner contour key point in the labeled image, and then may determine a position of the preset point according to the positions of the first inner contour key point and the second inner contour key point, and further adjust both the first inner contour key point and the second inner contour key point to the positions of the preset point.
In these implementations, when the upper and lower lips displayed in the annotated image are in a closed state, the inner contour of the upper lip region and the inner contour of the lower lip region may be considered coincident. Typically, several first inner contour key points and several second inner contour key points are labeled on the two sides of the curve representing the coincident inner contour. Because the distance between the preset point and the midpoint of the selected first and second inner contour key points is less than or equal to the preset distance, the preset point lies close to that curve; adjusting the selected first inner contour key point and second inner contour key point to the position of the preset point therefore moves them to positions near the curve.
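A minimal sketch of this target-position computation follows. The function name and the parameterization of the offset along the line are assumptions; the implementations above only require the preset point to lie on the straight line through the two selected keypoints, within the preset distance of their midpoint.

```python
import math

def target_position(p1, p2, preset_distance=0.0):
    """Return a preset point on the line through p1 and p2, within
    preset_distance of their midpoint (a sketch; names are assumed)."""
    # Midpoint of the selected first and second inner-contour keypoints.
    mx, my = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0
    if preset_distance == 0.0:
        return (mx, my)  # preset distance zero: the preset point is the midpoint
    # Otherwise offset along the p1->p2 line by at most preset_distance.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy) or 1.0  # guard against coincident points
    return (mx + preset_distance * dx / norm, my + preset_distance * dy / norm)
```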
It should be noted that the target position may also be any other position satisfying a certain relationship with the positions of the selected first inner contour key point and second inner contour key point; these are not enumerated here. It should also be noted that the user may perform the user selection operation multiple times, in which case the execution body adjusts the selected first and second inner contour key points to the target positions one pair at a time.
Step 203, in response to detecting the user adjustment operation for the first inner contour key point and the second inner contour key point indicated by the user selection operation, adjusting the first inner contour key point and the second inner contour key point indicated by the user selection operation to the positions indicated by the user adjustment operation.
In this embodiment, after adjusting the selected first inner contour key point and the second inner contour key point to the target positions, in response to detecting the user adjustment operation, the execution subject may adjust the selected first inner contour key point and the second inner contour key point to the positions indicated by the user adjustment operation. In practice, the execution body may detect the user adjustment operation through an interface running thereon.
The user adjustment operation may be an operation by which the user adjusts the selected first inner contour key point and second inner contour key point from the target position to another position.
Specifically, in response to detecting the user adjustment operation, the executing entity may first determine a position to be adjusted indicated by the user adjustment operation, and then may adjust the selected first inner contour key point and the second inner contour key point to the position to be adjusted.
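In code, the selection and adjustment handlers of steps 202 and 203 reduce to moving the selected pair together. The sketch below assumes the same hypothetical dictionary layout as the earlier sketches; the function names are likewise assumptions.

```python
def on_user_selection(first_inner, second_inner, i, j, target):
    """Step 202: snap the selected pair (first keypoint i, second keypoint j)
    onto the shared target position so the two keypoints coincide."""
    first_inner[i] = target
    second_inner[j] = target

def on_user_adjustment(first_inner, second_inner, i, j, new_pos):
    """Step 203: move the selected, now-coincident pair from the target
    position to the position indicated by the user adjustment operation."""
    first_inner[i] = new_pos
    second_inner[j] = new_pos
```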
In some optional implementations of this embodiment, after adjusting the selected first inner contour key point and the second inner contour key point to the positions indicated by the user adjustment operation, in response to detecting the user identification operation, the execution subject may further generate identification information for identifying the visibility of the at least one first inner contour key point and the at least one second inner contour key point based on the detected user identification operation.
The user identification operation is generally an operation in which a user identifies a key point as visible or invisible. It will be appreciated that the identification information may include information identifying whether the keypoint is visible or invisible. In practice, the identification information may be embodied in various forms, for example, may include, but is not limited to, at least one of the following: numbers, pictures, letters, symbols, etc.
In particular, in response to detecting an operation that identifies an inner contour key point (e.g., a first inner contour key point or a second inner contour key point) as invisible, the execution subject may generate identification information "(a, b): 0" for identifying that inner contour key point as invisible, where "(a, b)" is the coordinate of the inner contour key point in the labeled image and "0" identifies the key point as invisible. It will be appreciated that, in response to detecting a user identification operation that identifies the visibility of other first or second inner contour key points, the execution body may likewise generate corresponding identification information.
In these implementations, identification information identifying certain inner contour keypoints can be generated according to actual needs, and thus, the visibility of certain inner contour keypoints in the labeled image can be determined according to the generated identification information.
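A sketch of how such identification information might be generated. The "(a, b): v" string format follows the example above; encoding visible keypoints with "1" is an added assumption, since the text only fixes "0" for invisible ones.

```python
def identification_info(keypoints, invisible_ids):
    """Emit one "(a, b): v" string per keypoint, where v is 0 for keypoints
    the user identified as invisible (per the example above) and, by
    assumption here, 1 for visible ones."""
    info = []
    for i, (x, y) in keypoints.items():
        info.append(f"({x}, {y}): {0 if i in invisible_ids else 1}")
    return info
```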
Step 204, generating coordinates of the current positions of the at least one first inner contour key point and the at least one second inner contour key point in the labeled image.
In this embodiment, after adjusting the selected first inner contour key point and second inner contour key point to the positions indicated by the user adjustment operation, the execution body may further determine the current position of each first inner contour key point and each second inner contour key point in the labeled image, and then generate the coordinates of the positions where the at least one first inner contour key point and the at least one second inner contour key point are currently located.
In some optional implementations of this embodiment, after generating the coordinates of the current locations of the at least one first inner contour keypoint and the at least one second inner contour keypoint, the executing body may further display the coordinates of the current locations of the at least one first inner contour keypoint and the at least one second inner contour keypoint in the labeled image.
In these implementations, the coordinates of the positions where the at least one first inner contour key point and the at least one second inner contour key point are currently located are displayed in the labeled image, so that the user can view them directly.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present embodiment. In the application scenario of fig. 3, a tool for labeling key points on an image is running on the terminal device 301.
First, the terminal device 301 may locally acquire the annotated image 302 including the lip region, and then display it. As shown, the lip region in the annotated image 302 includes an upper lip region and a lower lip region; the upper lip region is labeled with first inner contour keypoints 303, 304, 305, and the lower lip region is labeled with second inner contour keypoints 306, 307, 308.
Then, in response to detecting the user's selection of the first inner contour keypoint 304 and the second inner contour keypoint 307, the terminal device 301 may determine the position of the first inner contour keypoint 304 in the annotated image 302, thereby adjusting the second inner contour keypoint 307 to the position where the first inner contour keypoint 304 is located. Then, in response to detecting a user adjustment operation for first inner contour keypoint 304 and second inner contour keypoint 307, terminal device 301 may adjust first inner contour keypoint 304 and second inner contour keypoint 307 to the positions indicated by the user adjustment operation.
It is to be appreciated that in response to detecting a user selection of first inner contour keypoint 303 and second inner contour keypoint 306, terminal device 301 may adjust the position of first inner contour keypoint 303 and second inner contour keypoint 306 in a manner similar to that described above. Similarly, in response to detecting the user's operation of selecting the first inner contour key point 305 and the second inner contour key point 308, the terminal device 301 may also adjust the positions of the first inner contour key point 305 and the second inner contour key point 308 by using a method similar to the above-described method.
After the position adjustment, the terminal device 301 may determine the positions of the first inner contour keypoints 303, 304, 305 and the second inner contour keypoints 306, 307, 308 in the labeled image 302, respectively, thereby generating the coordinates of the positions where the first inner contour keypoints 303, 304, 305 and the second inner contour keypoints 306, 307, 308 are currently located.
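Tying the sketches together, a hypothetical run of this fig. 3 scenario might look as follows; every coordinate value here is invented purely for illustration.

```python
# Hypothetical data: first inner-contour keypoints 303-305 (upper lip) and
# second inner-contour keypoints 306-308 (lower lip), indexed 0-2.
first_inner = {0: (110.0, 200.0), 1: (150.0, 195.0), 2: (190.0, 200.0)}
second_inner = {0: (110.0, 210.0), 1: (150.0, 214.0), 2: (190.0, 210.0)}

# User selects the middle pair: 307 snaps onto 304's position (here the
# target position is simply the first keypoint's position).
second_inner[1] = first_inner[1]

# User then drags the coincident pair to a corrected position.
first_inner[1] = second_inner[1] = (150.0, 205.0)

# Step 204: the coordinates where every keypoint is currently located.
print(first_inner)
print(second_inner)
```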
Currently, when labeling key points in an image including a lip region whose lips are in a closed state, it is generally necessary to extract the inner contour features of the lip region from the image, obtain from those features a curve representing the inner contour of the lip region, and then adjust the key points pre-labeled for both the inner contour of the upper lip region and the inner contour of the lower lip region onto that curve. In practice, extracting the inner contour features often requires applying various algorithms (e.g., various pre-trained models), so the execution subject must perform a large number of calculations during feature extraction. In the method provided by the above embodiment of the present disclosure, adjusting the user-selected first and second inner contour key points to the target position places the key points labeled for the inner contours of the upper and lower lip regions on the same curve, and adjusting them to the position indicated by the user adjustment operation moves the pair synchronously. The whole process requires no extraction of the inner contour features of the lip region, which reduces the computation load on the execution subject.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for generating information is shown. The flow 400 of the method for generating information comprises the steps of:
step 401, displaying the obtained annotated image including the lip region.
Step 401 is the same as step 201, and the above description for step 201 also applies to step 401, which is not described herein again.
Step 402, in response to detecting a user selection operation, adjusting corresponding first inner contour key points and second inner contour key points indicated by the user selection operation to a target position.
In this embodiment, after displaying the labeled image, in response to detecting a user selection operation, an executing subject (such as the terminal device 101 shown in fig. 1) of the method for generating information may adjust the selected corresponding first inner contour key point and second inner contour key point to target positions.
The at least one first inner contour keypoint and the at least one second inner contour keypoint are in one-to-one correspondence. In practice, the correspondence of at least one first inner contour keypoint and at least one second inner contour keypoint may be specified in advance. Taking the labeled image 501 shown in FIG. 5 as an example, first inner contour keypoints 502, 503, 504 and second inner contour keypoints 505, 506, 507 are labeled in the labeled image 501. Here, it is pre-specified that first inner contour keypoint 502 corresponds to second inner contour keypoint 505, first inner contour keypoint 503 corresponds to second inner contour keypoint 506, and first inner contour keypoint 504 corresponds to second inner contour keypoint 507.
Specifically, in response to detecting the user selection operation, the execution main body may first determine whether the first inner contour key point and the second inner contour key point selected by the user correspond to each other, and if so, the execution main body may adjust the first inner contour key point and the second inner contour key point to the target positions by using a method similar to that in step 202.
It can be understood that, at this time, the user may select multiple sets of first inner contour key points and second inner contour key points having corresponding relationships at the same time, and then the execution main body may adjust each set of first inner contour key points and second inner contour key points to the target position according to the pre-specified corresponding relationships. As shown in fig. 5, the execution body may adjust the second inner contour keypoint 505 to the location of the first inner contour keypoint 502, adjust the second inner contour keypoint 506 to the location of the first inner contour keypoint 503, and adjust the second inner contour keypoint 507 to the location of the first inner contour keypoint 504.
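Under the pre-specified one-to-one correspondence, this batch adjustment is just a loop over the selected pairs. The sketch below snaps each second inner contour key point to its corresponding first inner contour key point, which is only one of the target-position choices described earlier; the function name and data layout are assumptions carried over from the sketches above.

```python
def snap_corresponding_pairs(first_inner, second_inner, selected_ids):
    """Adjust every selected corresponding pair at once, e.g. moving 505 to
    502, 506 to 503, and 507 to 504 in the fig. 5 example."""
    for i in selected_ids:
        second_inner[i] = first_inner[i]
```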
Step 403, in response to detecting a user adjustment operation for the first inner contour key point and the second inner contour key point indicated by the user selection operation, adjusting the first inner contour key point and the second inner contour key point indicated by the user selection operation to a position indicated by the user adjustment operation.
Step 404, generating coordinates of the current positions of the at least one first inner contour key point and the at least one second inner contour key point in the labeled image.
Steps 403 and 404 are the same as steps 203 and 204; the above descriptions of steps 203 and 204 also apply here and are not repeated.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for generating information in this embodiment embodies the step of adjusting corresponding first and second inner contour key points to the target positions. In the solution described in this embodiment, a user can therefore simultaneously select multiple sets of first and second inner contour key points having corresponding relationships according to actual requirements, and the execution body can synchronously adjust these sets to the corresponding target positions according to the pre-specified correspondence. This avoids adjusting the first and second inner contour key points to the target positions one by one, shortening the time needed to adjust the key points while improving the flexibility of the adjustment.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 6, the apparatus 600 for generating information provided by the present embodiment includes a first display unit 601, a first detection unit 602, a second detection unit 603, and a first generation unit 604. Wherein the first display unit 601 may be configured to: displaying the acquired labeled image including a lip region, wherein the lip region may include an upper lip region and a lower lip region, the upper lip region is labeled with at least one first inner contour key point, and the lower lip region is labeled with at least one second inner contour key point. The first detection unit 602 may be configured to: in response to detecting the user selection operation, adjusting the first inner contour key point and the second inner contour key point indicated by the user selection operation to the target position. The second detection unit 603 may be configured to: in response to detecting a user adjustment operation for the first inner contour keypoint and the second inner contour keypoint indicated by the user selection operation, adjusting the first inner contour keypoint and the second inner contour keypoint indicated by the user selection operation to a position indicated by the user adjustment operation. The first generating unit 604 may be configured to: coordinates of where the at least one first inner contour keypoint and the at least one second inner contour keypoint are currently located in the annotated image are generated.
In the present embodiment, in the apparatus 600 for generating information: the detailed processing and the technical effects of the first display unit 601, the first detection unit 602, the second detection unit 603, and the first generation unit 604 can refer to the related descriptions of step 201, step 202, step 203, and step 204 in the corresponding embodiment of fig. 2, which are not repeated herein.
In some optional implementations of this embodiment, the at least one first inner contour keypoint and the at least one second inner contour keypoint are in one-to-one correspondence, and the first detection unit 602 may be further configured to: adjust the corresponding first inner contour key point and second inner contour key point indicated by the user selection operation to the target position.
In some optional implementations of the embodiment, the target position is a position of a preset point on a straight line passing through the first inner contour key point and the second inner contour key point indicated by the user selection operation.
In some optional implementations of this embodiment, the apparatus 600 may further include: a third detection unit (not shown in the figure). Wherein the third detection unit may be configured to: in response to detecting the user identification operation, generating identification information for identifying visibility of the at least one first inner contour keypoint and the at least one second inner contour keypoint based on the user identification operation.
In some optional implementations of this embodiment, the apparatus 600 may further include an acquisition unit (not shown) and a labeling unit (not shown). The acquisition unit may be configured to: acquire an image to be annotated including a lip region and coordinate information to be annotated, wherein the coordinate information to be annotated may include initial coordinates for labeling the at least one first inner contour key point and the at least one second inner contour key point. The labeling unit may be configured to: label the at least one first inner contour key point and the at least one second inner contour key point in the image to be annotated based on the coordinate information to be annotated, to obtain the annotated image.
In some optional implementations of this embodiment, the apparatus 600 may further include a second display unit (not shown), which may be configured to: display, in the labeled image, the coordinates of the positions where the at least one first inner contour key point and the at least one second inner contour key point are currently located.
The apparatus provided in the foregoing embodiment of the present disclosure may first display the acquired labeled image including the lip region through the first display unit 601, then adjust the first inner contour key point and the second inner contour key point indicated by the user selection operation to the target positions through the first detection unit 602 after detecting the user selection operation, then adjust the first inner contour key point and the second inner contour key point indicated by the user selection operation to the positions indicated by the user adjustment operation through the second detection unit 603 after detecting the user adjustment operation, and finally generate the coordinates of the positions where the at least one first inner contour key point and the at least one second inner contour key point are currently located in the labeled image through the first generation unit 604. Thus, the calculation amount of marking key points for the inner contour of the lip area is reduced.
Referring now to fig. 7, shown is a schematic diagram of an electronic device (e.g., terminal device in fig. 1) 700 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 702 or a program loaded from the storage 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device. The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: displaying the obtained labeled image comprising the lip area, wherein the lip area comprises an upper lip area and a lower lip area, the upper lip area is labeled with at least one first inner contour key point, and the lower lip area is labeled with at least one second inner contour key point; in response to the detection of the user selection operation, adjusting a first inner contour key point and a second inner contour key point indicated by the user selection operation to a target position; in response to detecting a user adjustment operation for a first inner contour key point and a second inner contour key point indicated by the user selection operation, adjusting the first inner contour key point and the second inner contour key point indicated by the user selection operation to a position indicated by the user adjustment operation; coordinates of where the at least one first inner contour keypoint and the at least one second inner contour keypoint are currently located in the annotated image are generated.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a first display unit, a first detection unit, a second detection unit, and a first generation unit. The names of these units do not in some cases limit the units themselves; for example, the first display unit may also be described as "a unit that displays the acquired annotated image including the lip region".
The foregoing description covers only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combinations of the features described above, and also covers other technical solutions formed by any combination of those features or their equivalents without departing from the inventive concept, for example, solutions formed by replacing the above features with features of similar functions disclosed (but not limited to those disclosed) in this disclosure.

Claims (12)

1. A method for generating information, comprising:
displaying the acquired annotated image comprising a lip region, wherein the lip region comprises an upper lip region and a lower lip region, the upper lip region is annotated with at least one first inner contour key point, and the lower lip region is annotated with at least one second inner contour key point;
in response to detecting a user selection operation, adjusting a first inner contour key point and a second inner contour key point indicated by the user selection operation to a target position, wherein the target position is a position of a preset point on a straight line formed by the first inner contour key point and the second inner contour key point indicated by the user selection operation;
in response to detecting a user adjustment operation for a first inner contour keypoint and a second inner contour keypoint indicated by the user selection operation, adjusting the first inner contour keypoint and the second inner contour keypoint indicated by the user selection operation to a position indicated by the user adjustment operation;
generating coordinates of the positions where the at least one first inner contour keypoint and the at least one second inner contour keypoint are currently located in the annotated image.
2. The method of claim 1, wherein the at least one first inner contour keypoint and the at least one second inner contour keypoint are in one-to-one correspondence; and
the adjusting the first inner contour key point and the second inner contour key point indicated by the user selection operation to the target position comprises:
adjusting the corresponding first inner contour key point and second inner contour key point indicated by the user selection operation to the target position.
3. The method of claim 1, wherein, after said adjusting the first inner contour keypoint and the second inner contour keypoint indicated by the user selection operation to the position indicated by the user adjustment operation, the method further comprises:
in response to detecting a user identification operation, generating, based on the user identification operation, identification information for identifying the visibility of the at least one first inner contour keypoint and the at least one second inner contour keypoint.
4. The method according to any one of claims 1-3, wherein prior to said displaying the acquired annotated image comprising the lip region, the method further comprises:
acquiring an image to be annotated comprising a lip region and coordinate information to be annotated, wherein the coordinate information to be annotated comprises initial coordinates for annotating the at least one first inner contour key point and the at least one second inner contour key point;
annotating the at least one first inner contour key point and the at least one second inner contour key point in the image to be annotated based on the coordinate information to be annotated, to obtain the annotated image.
5. The method according to any one of claims 1-3, wherein the method further comprises:
displaying, in the annotated image, coordinates of where the at least one first inner contour keypoint and the at least one second inner contour keypoint are currently located in the annotated image.
6. An apparatus for generating information, comprising:
a first display unit configured to display the acquired annotated image including a lip region, wherein the lip region includes an upper lip region and a lower lip region, the upper lip region is annotated with at least one first inner contour key point, and the lower lip region is annotated with at least one second inner contour key point;
a first detection unit configured to, in response to detecting a user selection operation, adjust a first inner contour key point and a second inner contour key point indicated by the user selection operation to a target position, wherein the target position is a position of a preset point on a straight line formed by the first inner contour key point and the second inner contour key point indicated by the user selection operation;
a second detection unit configured to adjust the first inner contour key point and the second inner contour key point indicated by the user selection operation to the positions indicated by the user adjustment operation in response to detecting the user adjustment operation for the first inner contour key point and the second inner contour key point indicated by the user selection operation;
a first generating unit configured to generate coordinates of the positions in the annotated image where the at least one first inner contour keypoint and the at least one second inner contour keypoint are currently located.
7. The apparatus of claim 6, wherein the at least one first inner contour keypoint and the at least one second inner contour keypoint are in one-to-one correspondence;
wherein the first detection unit is further configured to:
adjust the corresponding first inner contour key point and second inner contour key point indicated by the user selection operation to the target position.
8. The apparatus of claim 6, wherein the apparatus further comprises:
a third detection unit configured to generate, in response to detecting a user identification operation, identification information for identifying visibility of the at least one first inner contour keypoint and the at least one second inner contour keypoint based on the user identification operation.
9. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
an acquisition unit configured to acquire an image to be annotated comprising a lip region and coordinate information to be annotated, wherein the coordinate information to be annotated comprises initial coordinates for annotating the at least one first inner contour key point and the at least one second inner contour key point;
a labeling unit configured to annotate the at least one first inner contour key point and the at least one second inner contour key point in the image to be annotated based on the coordinate information to be annotated, to obtain the annotated image.
10. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
a second display unit configured to display, in the annotated image, coordinates of the positions where the at least one first inner contour keypoint and the at least one second inner contour keypoint are currently located.
11. A terminal, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
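To make the claimed workflow concrete, the following is a minimal, hypothetical sketch of the method of claims 1-4 in Python. It assumes the preset point is the midpoint of the segment joining a corresponding first/second keypoint pair (the claims allow any preset point on that straight line), and every name below is an illustrative assumption rather than code from the disclosure:

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (x, y) pixel coordinates in the annotated image


def preset_point(p1: Point, p2: Point, t: float = 0.5) -> Point:
    """A preset point on the straight line through p1 and p2.

    t = 0.5 picks the midpoint; the claims only require *some* preset
    point on the line, so t is an assumption of this sketch.
    """
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))


class LipAnnotationSession:
    def __init__(self, first_kps: List[Point], second_kps: List[Point]):
        # Claim 4: the image to be annotated arrives with initial coordinates
        # for the upper-lip ("first") and lower-lip ("second") keypoints.
        # Claim 2's one-to-one correspondence is modeled by shared indices.
        assert len(first_kps) == len(second_kps)
        self.first_kps = list(first_kps)
        self.second_kps = list(second_kps)
        self.visible = [True] * len(first_kps)  # claim 3's identification info

    def on_user_selection(self, i: int) -> None:
        # Claim 1, second step: snap the selected pair to the target position,
        # so a closed-lip pair is placed with a single click.
        tp = preset_point(self.first_kps[i], self.second_kps[i])
        self.first_kps[i] = tp
        self.second_kps[i] = tp

    def on_user_adjustment(self, i: int, pos: Point) -> None:
        # Claim 1, third step: fine-tune the pair to the user's position.
        self.first_kps[i] = pos
        self.second_kps[i] = pos

    def on_user_identification(self, i: int, is_visible: bool) -> None:
        # Claim 3: record visibility identification information for the pair.
        self.visible[i] = is_visible

    def generate_coordinates(self) -> Dict[str, List[Point]]:
        # Claim 1, final step: emit the keypoints' current coordinates.
        return {"first": list(self.first_kps), "second": list(self.second_kps)}


# Usage: one corresponding pair; select (snap), then adjust, then export.
session = LipAnnotationSession([(100.0, 200.0)], [(100.0, 230.0)])
session.on_user_selection(0)                  # pair snaps to (100.0, 215.0)
session.on_user_adjustment(0, (102.0, 214.0))
print(session.generate_coordinates())
```

The point of the snap-then-adjust order is that, for a closed mouth, corresponding upper and lower inner contour points coincide, so a single selection places both; the separate adjustment operation is only needed when the preset point is not exactly right.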
CN201910412440.0A 2019-05-17 2019-05-17 Method and apparatus for generating information Active CN110119457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910412440.0A CN110119457B (en) 2019-05-17 2019-05-17 Method and apparatus for generating information

Publications (2)

Publication Number Publication Date
CN110119457A (en) 2019-08-13
CN110119457B (en) 2021-08-10

Family

ID=67522669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910412440.0A Active CN110119457B (en) 2019-05-17 2019-05-17 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN110119457B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403144A (en) * 2017-07-11 2017-11-28 北京小米移动软件有限公司 Face localization method and device
CN107527034A (en) * 2017-08-28 2017-12-29 维沃移动通信有限公司 A kind of face contour method of adjustment and mobile terminal
CN107679449A (en) * 2017-08-17 2018-02-09 平安科技(深圳)有限公司 Lip motion method for catching, device and storage medium
CN109461117A (en) * 2018-10-30 2019-03-12 维沃移动通信有限公司 A kind of image processing method and mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966316B (en) * 2015-05-22 2019-03-15 腾讯科技(深圳)有限公司 A kind of 3D facial reconstruction method, device and server
CN105930762A (en) * 2015-12-02 2016-09-07 ***股份有限公司 Eyeball tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Tiktok vision (Beijing) Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Douyin Vision Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee before: Tiktok vision (Beijing) Co.,Ltd.