CN111427448A - Portrait marking method and device and computer readable storage medium - Google Patents


Info

Publication number
CN111427448A
Authority
CN
China
Prior art keywords
display screen
interface
width
height
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010147629.4A
Other languages
Chinese (zh)
Other versions
CN111427448B (en)
Inventor
吴峰
吴奎
邱小锋
张晓峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gifpay Information Technology Co., Ltd.
Original Assignee
Gifpay Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gifpay Information Technology Co., Ltd.
Priority to CN202010147629.4A
Publication of CN111427448A
Application granted
Publication of CN111427448B
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a portrait annotation method, a portrait annotation device and a computer-readable storage medium. The portrait annotation method comprises the following steps: S1, acquiring the height and width of the display screen and the height and width of the interface, and calculating the height and width ratios between the display screen and the interface; S2, dividing the display screen into N equal parts along the height direction and M equal parts along the width direction; S3, establishing coordinate axes to obtain coordinate data; S4, respectively acquiring coordinate data of the interface; S5, calculating the ratios of the unit display screen size to the face size in the interface; S6, calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates; and S7, calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and drawing a face bounding frame on the display screen. The portrait annotation method, device and computer-readable storage medium can reduce visual double images and avoid making users feel dizzy.

Description

Portrait marking method and device and computer readable storage medium
Technical Field
The present invention relates to the technical field of augmented reality display, and in particular, to a portrait annotation method and apparatus, and a computer-readable storage medium.
Background
Augmented Reality (AR) and Virtual Reality (VR) have attracted much attention in recent years. Their near-to-eye display systems form pixels on a display and, through a series of optical imaging elements, project them as a virtual image at a distance in front of the human eye. The difference is that AR glasses require see-through optics, so that both the real outside world and the virtual information remain visible; the imaging system therefore cannot sit directly in the line of sight. This requires one or a group of optical combiners that integrate, complement and "enhance" the virtual information and the real scene in a "stacked" fashion.
The optical display system of an AR device is usually composed of a miniature display screen and optical elements. In general, the display systems adopted by AR glasses currently on the market combine various miniature display screens with optical elements such as prisms, free-form surfaces, BirdBath optics and optical waveguides; the choice of optical combiner is the key part that distinguishes one AR display system from another.
At present, AR glasses annotate the portrait directly on the camera preview; the effect realized by this scheme causes visual double images and makes users feel dizzy.
Disclosure of Invention
In view of the above, the technical problem to be solved by the present invention is to provide a portrait annotation method, apparatus and computer-readable storage medium that can reduce visual double images and avoid making the user feel dizzy.
The technical scheme of the invention is realized as follows:
a portrait annotation method comprises the following steps:
S1, acquiring the height lHeight and width lWidth of a display screen and the height pHeight and width pWidth of an interface, and calculating the display-to-interface height ratio hRatio = lHeight/pHeight and the display-to-interface width ratio wRatio = lWidth/pWidth;
S2, dividing the display screen into N equal parts along the height direction and M equal parts along the width direction, to obtain N×M unit display screens;
S3, establishing coordinate axes and acquiring the coordinate data of the N×M unit display screens;
S4, aligning at least two of the N×M unit display screens with a live-action face, and respectively acquiring the corresponding coordinate data of the interface;
S5, calculating the average height and average width of the face in the interface from the interface coordinate data, and calculating the ratio xRatio of the unit display screen height to the interface face height and the ratio yRatio of the unit display screen width to the interface face width;
S6, calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates from the interface coordinate data, the unit display screen coordinate data, the ratios hRatio and wRatio, and the ratios xRatio and yRatio;
and S7, calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and drawing a face bounding frame on the display screen.
Preferably, the N×M unit display screens are marked as:

    A11 A12 … A1m
    A21 A22 … A2m
    ⋮
    An1 An2 … Anm

The S3 specifically includes:
taking the upper-left vertex of the first unit rectangle at the upper-left corner as (0, 0), the display screen height direction as the positive x-axis and the display screen width direction as the positive y-axis, constructing a coordinate system, and obtaining the height lHeight/n and width lWidth/m of each unit rectangle, wherein the coordinates of the upper-left point of A_mn are

    ((m-1)·lHeight/n, (n-1)·lWidth/m)

and the coordinates of the lower-right point of A_mn are

    (m·lHeight/n, n·lWidth/m)
Preferably, the S4 specifically includes:
aligning each rectangular frame A_mn with the live-action face, and respectively counting the face coordinate data, obtaining

    P11 P12 … P1m
    P21 P22 … P2m
    ⋮
    Pn1 Pn2 … Pnm

wherein the coordinates of the upper-left point of P_mn are denoted (x_mn, y_mn), and the coordinates of the lower-right point of P_mn are denoted (x′_mn, y′_mn).
Preferably, the S5 specifically includes:
calculating the average face height in the interface as

    pfHeight = (1/(n·m)) · Σ (x′_mn - x_mn)

the average face width in the interface as

    pfWidth = (1/(n·m)) · Σ (y′_mn - y_mn)

the ratio of the unit display screen height to the interface face height as

    xRatio = (lHeight/n) / pfHeight

and the ratio of the unit display screen width to the interface face width as

    yRatio = (lWidth/m) / pfWidth
Preferably, the S6 specifically includes:
taking as sample values the x coordinates of the upper-left point of the interface, scaled to the display screen as hRatio·x_mn, paired with the corresponding x coordinates of the upper-left point of the unit display screens, (m-1)·lHeight/n, and solving from these sample values the linear regression equation of the unit display screen upper-left x coordinate with respect to the preview upper-left x coordinate, recorded as y = a + bx;
and taking as sample values the y coordinates of the upper-left point of the interface, scaled to the display screen as wRatio·y_mn, paired with the corresponding y coordinates of the upper-left point of the unit display screens, (n-1)·lWidth/m, and solving from these sample values the linear regression equation of the unit display screen upper-left y coordinate with respect to the preview upper-left y coordinate, recorded as y = c + dx.
Preferably, the S7 specifically includes:
drawing the live-action frame on the display screen according to the acquired interface coordinate values: setting the coordinates of the upper-left point of the face acquired in the interface as (x0, y0) and the coordinates of the lower-right point as (x1, y1), then taking the point (a + b·(hRatio·x0), c + d·(wRatio·y0)) as the starting point, with height (x1 - x0)·xRatio and width (y1 - y0)·yRatio, the face bounding frame is drawn on the display screen.
The invention also provides a portrait annotation device, which comprises:
an acquisition module, configured to acquire the height lHeight and width lWidth of the display screen and the height pHeight and width pWidth of the interface, and to calculate the display-to-interface height ratio hRatio = lHeight/pHeight and the display-to-interface width ratio wRatio = lWidth/pWidth;
a dividing module, configured to divide the display screen into N equal parts along the height direction and M equal parts along the width direction, to obtain N×M unit display screens;
an axis construction module, configured to establish coordinate axes and acquire the coordinate data of the N×M unit display screens;
a coordinate acquisition module, configured to align at least two of the N×M unit display screens with a live-action face and to respectively acquire the corresponding coordinate data of the interface;
a first calculation module, configured to calculate the average height and average width of the face in the interface from the interface coordinate data, and to calculate the ratio xRatio of the unit display screen height to the interface face height and the ratio yRatio of the unit display screen width to the interface face width;
a second calculation module, configured to calculate a linear regression equation of the unit display screen coordinates with respect to the interface coordinates from the interface coordinate data, the unit display screen coordinate data, the ratios hRatio and wRatio, and the ratios xRatio and yRatio;
and a frame-drawing module, configured to calculate the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and to draw a face bounding frame on the display screen.
The invention also proposes a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor for performing the steps of the portrait annotation method according to any one of claims 1 to 6.
The invention provides a portrait annotation method, a portrait annotation device and a computer-readable storage medium. At least two of the unit display screens are aligned with a live-action face, and the coordinate data of the interface are acquired for each; a linear regression equation of the unit display screen coordinates with respect to the interface coordinates is then calculated. The coordinates of the face on the display screen can therefore be calculated from the linear regression equation and the interface coordinates, and a face bounding frame drawn on the display screen, which reduces visual double images and prevents the user from feeling dizzy.
Drawings
Fig. 1 is a display screen image in a portrait annotation method according to an embodiment of the present invention;
Fig. 2 is a diagram illustrating the division of a display screen in a portrait annotation method according to an embodiment of the present invention;
Fig. 3 is a flowchart of a portrait annotation method according to an embodiment of the present invention;
Fig. 4 is a block diagram of a portrait annotation apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 3, an embodiment of the present invention provides a portrait labeling method, including the following steps:
s101, enabling the interface (preview) to be transparent through the app, and displaying the default horizontal screen as shown in the figure 1.
S102, the program acquires the display screen height lHeight and width lWidth and the preview height pHeight and width pWidth, and calculates the ratio hRatio = lHeight/pHeight of the display screen height to the preview height and the ratio wRatio = lWidth/pWidth of the display screen width to the preview width;
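(Illustration only, not part of the patent text: a minimal Python sketch of the S102 ratios; the concrete pixel values are assumptions chosen for the example.)

    # assumed example dimensions, not values from the patent
    l_height, l_width = 1080, 1920   # display screen height/width (lHeight, lWidth)
    p_height, p_width = 720, 1280    # preview height/width (pHeight, pWidth)
    h_ratio = l_height / p_height    # hRatio = lHeight/pHeight = 1.5
    w_ratio = l_width / p_width      # wRatio = lWidth/pWidth = 1.5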
S103, as shown in Fig. 2, the height of the display screen is divided into n equal parts by drawing n-1 equally spaced lines on the display screen, and the width of the display screen is divided into m equal parts by drawing m-1 equally spaced lines, obtaining n×m unit rectangles, marked as

    A11 A12 … A1m
    A21 A22 … A2m
    ⋮
    An1 An2 … Anm
S104, according to the display screen height and width obtained in S102, a coordinate system is constructed by taking the upper-left vertex of the first unit rectangle at the upper left as (0, 0), the display screen height direction as the x-axis and the display screen width direction as the y-axis, and the height lHeight/n and width lWidth/m of each unit rectangle are obtained, wherein the coordinates of the upper-left point of A_mn are

    ((m-1)·lHeight/n, (n-1)·lWidth/m)

and the coordinates of the lower-right point of A_mn are

    (m·lHeight/n, n·lWidth/m)
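(Illustration only: a minimal Python sketch of the S103/S104 grid arithmetic. The function name, and the convention that m indexes the height direction and n the width direction, are assumptions based on the formulas above.)

    def unit_rect_coords(m, n, l_height, l_width, n_parts, m_parts):
        """Upper-left and lower-right corners of unit rectangle A_mn.

        The x axis runs along the screen height and the y axis along the
        screen width, with (0, 0) at the upper-left corner of the grid.
        """
        cell_h = l_height / n_parts   # height of each unit rectangle (lHeight/n)
        cell_w = l_width / m_parts    # width of each unit rectangle (lWidth/m)
        upper_left = ((m - 1) * cell_h, (n - 1) * cell_w)
        lower_right = (m * cell_h, n * cell_w)
        return upper_left, lower_right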
S105, each rectangular frame A_mn is aligned with the live-action face so that the live-action face is framed by the rectangle, and the preview face coordinate data at that moment (the coordinates of the upper-left point and the lower-right point) are counted respectively, obtaining

    P11 P12 … P1m
    P21 P22 … P2m
    ⋮
    Pn1 Pn2 … Pnm

wherein the coordinates of the upper-left point of P_mn are denoted (x_mn, y_mn), and the coordinates of the lower-right point of P_mn are denoted (x′_mn, y′_mn);
S106, from the coordinate data obtained in S105, the average preview face height is obtained as

    pfHeight = (1/(n·m)) · Σ (x′_mn - x_mn)

the average preview face width as

    pfWidth = (1/(n·m)) · Σ (y′_mn - y_mn)

the ratio of the unit display screen height to the preview face height as

    xRatio = (lHeight/n) / pfHeight

and the ratio of the unit display screen width to the preview face width as

    yRatio = (lWidth/m) / pfWidth;
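(Illustration only: a sketch of the S106 statistics, assuming the measured preview face boxes are stored as ((x, y), (x2, y2)) tuples, one per calibration rectangle; the names pf_height and pf_width simply mirror the averages defined above.)

    def face_stats(preview_boxes, cell_h, cell_w):
        """Average preview face size and the cell-to-face ratios of S106."""
        k = len(preview_boxes)
        pf_height = sum(x2 - x for (x, _), (x2, _) in preview_boxes) / k
        pf_width = sum(y2 - y for (_, y), (_, y2) in preview_boxes) / k
        x_ratio = cell_h / pf_height   # xRatio = (lHeight/n) / pfHeight
        y_ratio = cell_w / pf_width    # yRatio = (lWidth/m) / pfWidth
        return pf_height, pf_width, x_ratio, y_ratio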
S107, from the coordinate data obtained in S105, the x coordinates of the upper-left point of the preview are scaled to the display screen as hRatio·x_mn and paired with the corresponding x coordinates of the upper-left point of the unit display screens, (m-1)·lHeight/n; from these sample values, the linear regression equation of the unit display screen upper-left x coordinate with respect to the preview upper-left x coordinate is solved and recorded as y = a + bx;
S108, likewise, the y coordinates of the upper-left point of the preview are scaled to the display screen as wRatio·y_mn and paired with the corresponding y coordinates of the upper-left point of the unit display screens, (n-1)·lWidth/m; from these sample values, the linear regression equation of the unit display screen upper-left y coordinate with respect to the preview upper-left y coordinate is solved and recorded as y = c + dx;
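(Illustration only: S107 and S108 each fit a one-variable linear regression. Below is a standard ordinary-least-squares sketch, not text from the patent; preview_xs/screen_xs and preview_ys/screen_ys denote the paired sample values described above.)

    def fit_line(xs, ys):
        """Ordinary least-squares fit of y = a + b*x over paired samples."""
        k = len(xs)
        mean_x = sum(xs) / k
        mean_y = sum(ys) / k
        b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
        a = mean_y - b * mean_x
        return a, b

    # S107: a, b = fit_line([h_ratio * x for x in preview_xs], screen_xs)
    # S108: c, d = fit_line([w_ratio * y for y in preview_ys], screen_ys)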
S109, according to the coordinate correspondence obtained above, the live-action frame can be drawn on the display screen from the acquired preview coordinate values. Suppose the coordinates of the upper-left point of the face acquired in the preview are (x0, y0) and the coordinates of the lower-right point are (x1, y1); then, taking the point (a + b·(hRatio·x0), c + d·(wRatio·y0)) as the starting point, with height (x1 - x0)·xRatio and width (y1 - y0)·yRatio, the face bounding frame is drawn on the display screen.
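(Illustration only: the S109 mapping under the same assumed notation; the actual drawing call is left to whatever rendering API the glasses expose.)

    def screen_face_frame(face_box, a, b, c, d,
                          h_ratio, w_ratio, x_ratio, y_ratio):
        """Map a preview face box to the frame drawn on the display screen."""
        (x0, y0), (x1, y1) = face_box
        start_x = a + b * (h_ratio * x0)   # screen x of the upper-left point
        start_y = c + d * (w_ratio * y0)   # screen y of the upper-left point
        height = (x1 - x0) * x_ratio
        width = (y1 - y0) * y_ratio
        return start_x, start_y, height, width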
As shown in Fig. 4, the present invention further provides a portrait annotation apparatus, comprising:
an acquisition module 10, configured to acquire the display screen height lHeight and width lWidth and the interface height pHeight and width pWidth, and to calculate the display-to-interface height ratio hRatio = lHeight/pHeight and the display-to-interface width ratio wRatio = lWidth/pWidth;
a dividing module 20, configured to divide the display screen into N equal parts along the height direction and into M equal parts along the width direction, to obtain N×M unit display screens;
an axis construction module 30, configured to establish coordinate axes and acquire the coordinate data of the N×M unit display screens;
a coordinate acquisition module 40, configured to align at least two of the N×M unit display screens with a live-action face and to respectively acquire the corresponding coordinate data of the interface;
a first calculation module 50, configured to calculate the average height and average width of the face in the interface from the interface coordinate data, and to calculate the ratio xRatio of the unit display screen height to the interface face height and the ratio yRatio of the unit display screen width to the interface face width;
a second calculation module 60, configured to calculate a linear regression equation of the unit display screen coordinates with respect to the interface coordinates from the interface coordinate data, the unit display screen coordinate data, the ratios hRatio and wRatio, and the ratios xRatio and yRatio;
and a frame-drawing module 70, configured to calculate the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and to draw a face bounding frame on the display screen.
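(Illustration only: the class below is a hypothetical wrapper, not the patent's apparatus; it merely bundles the calibration outputs of modules 10-60 and delegates the work of the frame-drawing module 70 to the S109 sketch above.)

    class PortraitAnnotator:
        """Hypothetical bundle of the calibration parameters from Fig. 4."""

        def __init__(self, a, b, c, d, h_ratio, w_ratio, x_ratio, y_ratio):
            self.params = (a, b, c, d, h_ratio, w_ratio, x_ratio, y_ratio)

        def frame(self, face_box):
            # Frame-drawing module 70: map the preview face box to screen coordinates.
            return screen_face_frame(face_box, *self.params)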
The invention also proposes a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor for performing the steps of the portrait annotation method according to any one of claims 1 to 6.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus the necessary general-purpose hardware, and can certainly also be implemented by special-purpose hardware, including application-specific integrated circuits, special-purpose CPUs, special-purpose memories, special-purpose components and the like. Generally, any function performed by a computer program can also be implemented by corresponding hardware, and the specific hardware structure implementing the same function may take various forms, such as an analog circuit, a digital circuit or a dedicated circuit. For the present application, however, a software implementation is usually preferable. Based on such understanding, the technical solutions of the present application may be substantially embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a USB disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server or a network device) to execute the method of the embodiments of the present application.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk).
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. A portrait annotation method, characterized by comprising the following steps:
S1, acquiring the height lHeight and width lWidth of a display screen and the height pHeight and width pWidth of an interface, and calculating the display-to-interface height ratio hRatio = lHeight/pHeight and the display-to-interface width ratio wRatio = lWidth/pWidth;
S2, dividing the display screen into N equal parts along the height direction and M equal parts along the width direction, to obtain N×M unit display screens;
S3, establishing coordinate axes and acquiring the coordinate data of the N×M unit display screens;
S4, aligning at least two of the N×M unit display screens with a live-action face, and respectively acquiring the corresponding coordinate data of the interface;
S5, calculating the average height and average width of the face in the interface from the interface coordinate data, and calculating the ratio xRatio of the unit display screen height to the interface face height and the ratio yRatio of the unit display screen width to the interface face width;
S6, calculating a linear regression equation of the unit display screen coordinates with respect to the interface coordinates from the interface coordinate data, the unit display screen coordinate data, the ratios hRatio and wRatio, and the ratios xRatio and yRatio;
and S7, calculating the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and drawing a face bounding frame on the display screen.
2. The portrait annotation method according to claim 1, wherein the N×M unit display screens are marked as:

    A11 A12 … A1m
    A21 A22 … A2m
    ⋮
    An1 An2 … Anm

and the S3 specifically comprises:
taking the upper-left vertex of the first unit rectangle at the upper-left corner as (0, 0), the display screen height direction as the positive x-axis and the display screen width direction as the positive y-axis, constructing a coordinate system, and obtaining the height lHeight/n and width lWidth/m of each unit rectangle, wherein the coordinates of the upper-left point of A_mn are

    ((m-1)·lHeight/n, (n-1)·lWidth/m)

and the coordinates of the lower-right point of A_mn are

    (m·lHeight/n, n·lWidth/m)
3. The portrait annotation method according to claim 2, wherein the S4 specifically comprises:
aligning each rectangular frame A_mn with the live-action face, and respectively counting the face coordinate data, obtaining

    P11 P12 … P1m
    P21 P22 … P2m
    ⋮
    Pn1 Pn2 … Pnm

wherein the coordinates of the upper-left point of P_mn are denoted (x_mn, y_mn), and the coordinates of the lower-right point of P_mn are denoted (x′_mn, y′_mn).
4. The portrait annotation method according to claim 3, wherein the S5 specifically comprises:
calculating the average face height in the interface as

    pfHeight = (1/(n·m)) · Σ (x′_mn - x_mn)

the average face width in the interface as

    pfWidth = (1/(n·m)) · Σ (y′_mn - y_mn)

the ratio of the unit display screen height to the interface face height as

    xRatio = (lHeight/n) / pfHeight

and the ratio of the unit display screen width to the interface face width as

    yRatio = (lWidth/m) / pfWidth
5. The portrait annotation method according to claim 4, wherein the S6 specifically comprises:
taking as sample values the x coordinates of the upper-left point of the interface, scaled to the display screen as hRatio·x_mn, paired with the corresponding x coordinates of the upper-left point of the unit display screens, (m-1)·lHeight/n, and solving from these sample values the linear regression equation of the unit display screen upper-left x coordinate with respect to the preview upper-left x coordinate, recorded as y = a + bx;
and taking as sample values the y coordinates of the upper-left point of the interface, scaled to the display screen as wRatio·y_mn, paired with the corresponding y coordinates of the upper-left point of the unit display screens, (n-1)·lWidth/m, and solving from these sample values the linear regression equation of the unit display screen upper-left y coordinate with respect to the preview upper-left y coordinate, recorded as y = c + dx.
6. The portrait annotation method according to claim 5, wherein the S7 specifically comprises:
drawing the live-action frame on the display screen according to the acquired interface coordinate values: setting the coordinates of the upper-left point of the face acquired in the interface as (x0, y0) and the coordinates of the lower-right point as (x1, y1), then taking the point (a + b·(hRatio·x0), c + d·(wRatio·y0)) as the starting point, with height (x1 - x0)·xRatio and width (y1 - y0)·yRatio, drawing the face bounding frame on the display screen.
7. A portrait annotation device, characterized by comprising:
an acquisition module, configured to acquire the height lHeight and width lWidth of the display screen and the height pHeight and width pWidth of the interface, and to calculate the display-to-interface height ratio hRatio = lHeight/pHeight and the display-to-interface width ratio wRatio = lWidth/pWidth;
a dividing module, configured to divide the display screen into N equal parts along the height direction and M equal parts along the width direction, to obtain N×M unit display screens;
an axis construction module, configured to establish coordinate axes and acquire the coordinate data of the N×M unit display screens;
a coordinate acquisition module, configured to align at least two of the N×M unit display screens with a live-action face and to respectively acquire the corresponding coordinate data of the interface;
a first calculation module, configured to calculate the average height and average width of the face in the interface from the interface coordinate data, and to calculate the ratio xRatio of the unit display screen height to the interface face height and the ratio yRatio of the unit display screen width to the interface face width;
a second calculation module, configured to calculate a linear regression equation of the unit display screen coordinates with respect to the interface coordinates from the interface coordinate data, the unit display screen coordinate data, the ratios hRatio and wRatio, and the ratios xRatio and yRatio;
and a frame-drawing module, configured to calculate the coordinates of the face on the display screen according to the linear regression equation and the interface coordinates, and to draw a face bounding frame on the display screen.
8. A computer readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the portrait annotation method according to any one of claims 1-6.
CN202010147629.4A 2020-03-05 2020-03-05 Portrait marking method and device and computer readable storage medium Active CN111427448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010147629.4A CN111427448B (en) 2020-03-05 2020-03-05 Portrait marking method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010147629.4A CN111427448B (en) 2020-03-05 2020-03-05 Portrait marking method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111427448A (en) 2020-07-17
CN111427448B CN111427448B (en) 2023-07-28

Family

ID=71547717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010147629.4A Active CN111427448B (en) 2020-03-05 2020-03-05 Portrait marking method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111427448B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100608A1 (en) * 2017-11-21 2019-05-31 平安科技(深圳)有限公司 Video capturing device, face recognition method, system, and computer-readable storage medium
CN108537103A (en) * 2018-01-19 2018-09-14 东北电力大学 The living body faces detection method and its equipment measured based on pupil axle
CN109712547A (en) * 2018-12-18 2019-05-03 深圳市巨烽显示科技有限公司 A kind of display screen plane brightness measurement method, device, computer equipment and storage medium
CN110263774A (en) * 2019-08-19 2019-09-20 珠海亿智电子科技有限公司 A kind of method for detecting human face

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李俊蒂; 徐敏; 苏鹭梅; 陈州尧: "一种基于机器视觉的自动标定贴屏实现方法" (A machine-vision-based implementation method for automatic calibration of screen alignment) *

Also Published As

Publication number Publication date
CN111427448B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN109064390B (en) Image processing method, image processing device and mobile terminal
US20180332222A1 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
CN109445103B (en) Display picture updating method and device, storage medium and electronic device
CN106020758B (en) A kind of screen splice displaying system and method
CN112351266B (en) Three-dimensional visual processing method, device, equipment, display system and medium
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN113365130B (en) Live broadcast display method, live broadcast video acquisition method and related devices
KR100540732B1 (en) Apparatus for converting 2D image signal into 3D image signal
CN109255838A (en) Augmented reality is avoided to show the method and apparatus of equipment viewing ghost image
CN112017242A (en) Display method and device, equipment and storage medium
CN111427448A (en) Portrait marking method and device and computer readable storage medium
CN115002442B (en) Image display method and device, electronic equipment and storage medium
CN107087153B (en) 3D image generation method and device and VR equipment
JP5645448B2 (en) Image processing apparatus, image processing method, and program
JP2020135290A (en) Image generation device, image generation method, image generation system, and program
JP2020101897A (en) Information processing apparatus, information processing method and program
CN115205752A (en) Liquid crystal splicing LCD method and system based on intelligent display
CN114339029A (en) Shooting method and device and electronic equipment
CN113068003A (en) Data display method and device, intelligent glasses, electronic equipment and storage medium
US20210297649A1 (en) Image data output device, content creation device, content reproduction device, image data output method, content creation method, and content reproduction method
KR100399735B1 (en) Realization method of virtual navigation using still photograph
CN115348437B (en) Video processing method, device, equipment and storage medium
US20240119676A1 (en) Image generation method, apparatus, and system, and computer-readable storage medium
JP2003101978A (en) Image processor
CN115442580B (en) Naked eye 3D picture effect processing method for portable intelligent equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant