CN112749604A - Pupil positioning method and related device and product - Google Patents


Info

Publication number
CN112749604A
Authority
CN
China
Prior art keywords
image frame
pupil
current image
target user
determining
Prior art date
Legal status
Pending
Application number
CN201911063691.9A
Other languages
Chinese (zh)
Inventor
杨平平
方攀
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911063691.9A
Publication of CN112749604A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses a pupil positioning method, a related device, and a product, applied to an electronic device. The method includes: acquiring a current image frame of a target user; determining whether the current image frame is the first image frame collected; if the current image frame is not the first image frame, determining an estimated motion track of the pupil image of the target user from the previous image frame to the current image frame; determining an estimated position of the pupil image of the target user in the current image frame according to the estimated motion track and the pupil reference position of the previous image frame; determining a first candidate region and a second candidate region in the current image frame according to the estimated position; and determining the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region. The embodiment of the application helps improve the efficiency of pupil positioning.

Description

Pupil positioning method and related device and product
Technical Field
The present application relates to the field of electronic devices, and in particular to a pupil positioning method and a related apparatus and product.
Background
At present, electronic devices often provide an eye tracking function. In the prior art, the eyeball position is typically determined by searching the entire acquired image frame. During continuous eye tracking, the number of image frames to be processed is huge, so the required computation accumulates excessively, eye tracking becomes delayed, and the user experience is degraded.
Disclosure of Invention
The embodiment of the application provides a pupil positioning method, a related device, and a product, so as to reduce the amount of computation required for pupil positioning and improve pupil positioning efficiency.
In a first aspect, an embodiment of the present application provides a pupil positioning method applied to an electronic device, where the method includes:
acquiring a current image frame of a target user;
determining whether the current image frame is the first image frame collected;
if the current image frame is not the first image frame, determining an estimated motion track of the pupil image of the target user from the previous image frame to the current image frame;
determining the estimated position of the pupil image of the target user in the current image frame according to the estimated motion track and the pupil reference position of the previous image frame, wherein the pupil reference position of the previous image frame is the position of the pupil image of the target user in the previous image frame;
determining a first candidate region and a second candidate region in the current image frame according to the estimated position, wherein, of the first candidate region and the second candidate region, only the first candidate region includes the estimated position;
and determining the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region.
In a second aspect, the embodiments of the present application provide a pupil positioning apparatus applied to an electronic device, the apparatus includes a processing unit and a communication unit, wherein,
the processing unit is configured to acquire the current image frame of the target user through the communication unit; determine whether the current image frame is the first image frame collected; if the current image frame is not the first image frame, acquire, through the communication unit, acceleration data of the electronic device within a preset time interval, where the preset time interval is the time interval from the collection of the previous image frame to the collection of the current image frame; determine the acceleration of the pupil image of the target user according to the acceleration data of the electronic device; determine the estimated position of the pupil image of the target user in the current image frame according to the estimated motion track and the pupil reference position of the previous image frame, where the pupil reference position of the previous image frame is the position of the pupil image of the target user in the previous image frame; determine a first candidate region and a second candidate region in the current image frame according to the estimated position, where the first candidate region includes the estimated position and the second candidate region does not; and determine the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in any of the methods of the first aspect of the embodiment of the present application.
In a fourth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program is to make a computer perform part or all of the steps as described in any one of the methods of the first aspect of this application, and the computer includes an electronic device.
It can be seen that, in the embodiment of the present application, the electronic device obtains a current image frame of a target user and determines whether it is the first image frame collected; if not, it determines an estimated motion trajectory of the pupil image of the target user from the previous image frame to the current image frame, determines an estimated position of the pupil image in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame, determines a first candidate region and a second candidate region in the current image frame according to the estimated position, and finally determines the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region. In this way, the electronic device can obtain the motion track from the previous image frame to the current image frame from its own motion information and obtain the pupil position in the current image frame based on that track and the pupil position in the previous image frame, so that pupil positioning proceeds continuously, the amount of calculation over multiple image frames is reduced, and eye tracking efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1A is a schematic flowchart of a pupil location method disclosed in the embodiment of the present application;
fig. 1B is a schematic diagram of a first candidate region and a second candidate region in a current image frame according to an embodiment of the present disclosure;
fig. 1C is a schematic diagram of a first candidate region and a second candidate region in another current image frame according to an embodiment of the present application;
fig. 1D is a schematic diagram of a first candidate region in a current image frame according to an embodiment of the present disclosure;
FIG. 1E is a schematic diagram of a second candidate region in a current image frame corresponding to FIG. 1D according to an embodiment of the present disclosure;
fig. 1F is a schematic diagram of an estimated position and a real position of a pupil image of a target user in a current image frame according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 3 is a block diagram illustrating functional units of a pupil location device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may be an electronic device with communication capability, and the electronic device may include various handheld devices with wireless communication function, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and so on.
At present, electronic devices often determine the eyeball position by searching the entire acquired image frame. During continuous eye tracking, the number of image frames to be processed is huge, so the required computation accumulates excessively, eye tracking becomes delayed, and the user experience is degraded.
In view of the above problems, the present application provides a pupil location method to improve the efficiency of eye tracking, and the following describes the embodiments of the present application in detail with reference to the accompanying drawings.
Referring to fig. 1A, fig. 1A is a schematic flowchart of a pupil location method provided in an embodiment of the present application, and is applied to an electronic device, as shown in fig. 1A, the pupil location method includes:
s101, the electronic equipment acquires a current image frame of a target user.
Here, the target user may be the only user in the imaging range, or the target user may be a user who has been authenticated in advance among a plurality of users in the imaging range.
The electronic device may acquire the current image frame of the target user through a camera of the local terminal, or through a camera device communicatively connected to the electronic device.
As can be seen, in this example, the electronic device may obtain a current image frame of the target user.
S102, the electronic device determines whether the current image frame is the first image frame collected.
Here, the first image frame refers to the first image frame from which pupil positioning starts, for example, the first image frame processed in an eye tracking session.
As can be seen, in this example, the electronic device may determine whether the current image frame is the first image frame.
S103, if the current image frame is not the first image frame, determining an estimated motion track of the pupil image of the target user from the previous image frame to the current image frame.
Optionally, the determining, by the electronic device, an estimated motion trajectory of the pupil image of the target user from a previous image frame to a current image frame includes: the electronic equipment acquires acceleration data of the electronic equipment within a preset time interval, wherein the preset time interval is the time interval from the acquisition of a previous image frame to the acquisition of the current image frame; the electronic equipment determines the acceleration of the pupil image of the target user according to the acceleration data of the electronic equipment; the electronic equipment determines the movement distance of the pupil image of the target user from a previous image frame to a current image frame according to the acceleration of the pupil image of the target user and the preset time interval; and the electronic equipment determines the movement direction of the pupil image of the target user from the previous image frame to the current image frame according to the acceleration.
The preset time interval is configured in advance; during eye tracking, image frames of the target user are acquired at this interval. The preset time interval may be, for example, 1 second, 0.3 second, 1 millisecond, or 1 microsecond, and is not specifically limited.
The electronic device may acquire the acceleration data of the electronic device within a preset time interval, where the acceleration data of the electronic device is recorded by the electronic device through an accelerometer in the electronic device, and includes movement direction data and movement displacement data.
The implementation manner of determining, by the electronic device, the acceleration of the pupil image of the target user according to the acceleration data of the electronic device may be: the electronic device determines the acceleration of the electronic device according to the acceleration data; and the electronic device determines the acceleration of the pupil image of the target user from the acceleration of the electronic device according to the imaging principle. Here, there is relative motion between the electronic device and the target user; that is, this is applicable to a scene in which the human eyes are still while the electronic device moves.
The specific implementation in which the electronic device determines the movement distance of the pupil image of the target user from the previous image frame to the current image frame according to the acceleration of the pupil image of the target user and the preset time interval is to substitute the acceleration a of the pupil image of the target user and the preset time interval t into the displacement formula s = (1/2)·a·t² to obtain the motion displacement of the pupil image of the target user from the previous image frame to the current image frame.
In this example, the electronic device can determine, through the motion data of the electronic device recorded by the accelerometer of the electronic device, an estimated motion trajectory of the pupil image of the target user from the previous image frame to the current image frame.
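Under the constant-acceleration model implied by the steps above, the movement distance and direction reduce to basic kinematics. A minimal sketch, assuming the pupil-image acceleration has already been converted to image-plane units via the imaging model (the function name and units are illustrative, not from the patent):

```python
import math

def estimated_track(accel_xy, dt):
    """Estimate the pupil image's motion track between two frames.

    accel_xy : (ax, ay) acceleration of the pupil image in image-plane
               units/s^2 (assumed already derived from the device
               accelerometer via the imaging principle)
    dt       : preset time interval between the two frames, in seconds
    Returns (distance, direction_in_radians).
    """
    ax, ay = accel_xy
    mag = math.hypot(ax, ay)
    distance = 0.5 * mag * dt * dt      # s = (1/2) * a * t^2
    direction = math.atan2(ay, ax)      # movement direction angle
    return distance, direction
```

For example, an image-plane acceleration of (3, 4) over a 2-second interval yields a distance of (1/2)·5·2² = 10 along the direction of the acceleration vector.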
And S104, the electronic equipment determines the estimated position of the pupil image of the target user in the current image frame according to the estimated motion track and the pupil reference position of the previous image frame, wherein the pupil reference position of the previous image frame is the position of the pupil image of the target user in the previous image frame.
Optionally, the determining, by the electronic device, the estimated position of the pupil image of the target user in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame includes: the electronic equipment matches the current image frame with the previous image frame and determines a first position of a pupil reference position of the previous image frame on the current image frame; and the electronic equipment determines the estimated position of the pupil image of the target user in the current image frame according to the movement distance, the movement direction and the first position.
The implementation manner of determining, by the electronic device, the estimated position of the pupil image of the target user in the current image frame according to the movement distance, the movement direction, and the first position may be: the electronic equipment takes the first position as a starting point and extends the length of the movement distance in the movement direction; and determining that the end point extending the movement distance in the movement direction is the estimated position of the pupil image of the target user in the current image frame.
For example, if the movement distance is 1 cm and the movement direction is 45° above and to the right of the first position, then the position in the current image frame that lies 45° above and to the right of the first position, at a distance of 1 cm from the first position, is determined to be the estimated position of the pupil image of the target user in the current image frame.
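The extension step can be sketched as follows (an illustrative helper; the angle convention, with the direction measured in degrees from the +x axis in a y-up coordinate system, is an assumption):

```python
import math

def estimated_position(first_pos, distance, direction_deg):
    """Extend `distance` from `first_pos` along `direction_deg`
    (degrees from the +x axis, y-up) to obtain the estimated
    position of the pupil image in the current frame."""
    x, y = first_pos
    rad = math.radians(direction_deg)
    return (x + distance * math.cos(rad), y + distance * math.sin(rad))
```

With first_pos = (0, 0), distance 1, and direction 45°, the estimated position is approximately (0.707, 0.707), consistent with the worked example above.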
It can be seen that, in this example, the electronic device is capable of continuously acquiring image frames of the target user during the eye tracking process, and predicting the position of the pupil image of the target user in the current image frame based on the positions of the historical image frames and the motion data of the electronic device.
S105, the electronic device determines a first candidate region and a second candidate region in the current image frame according to the estimated position, wherein, of the first candidate region and the second candidate region, only the first candidate region includes the estimated position.
Optionally, the determining, by the electronic device, a first candidate region and a second candidate region in the current image frame according to the estimated position includes: the electronic device determines a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the estimated position; and the electronic device determines the region of the current image frame other than the first candidate region as the second candidate region.
Here, being adjacent to the estimated position means that the distance between any edge of the first candidate region and the estimated position is between one eighth and one third of the sum of the length and the width of the current image frame. The first candidate region may be a regular shape, such as a square, rectangle, circle, ellipse, or regular polygon, or an irregular shape; the shape of the first candidate region is not specifically limited.
For example, referring to fig. 1B, fig. 1B is a schematic diagram of a first candidate region and a second candidate region in a current image frame according to an embodiment of the present disclosure. As shown in fig. 1B, ln is the estimated position, A_ln is the first candidate region (the region in the dashed box in the figure), and A_another is the second candidate region. The first candidate region is a square region centered on the estimated position whose side length is one sixth of the sum of the length (L in the figure) and the width (W in the figure) of the current image frame, and the second candidate region is the region of the current image frame other than the first candidate region.
For another example, referring to fig. 1C, fig. 1C is a schematic diagram of a first candidate region and a second candidate region in another current image frame according to an embodiment of the present application. As shown in fig. 1C, ln is the estimated position, A_ln is the first candidate region (the region in the dashed box in the figure), and A_another is the second candidate region. The first candidate region is a circular region centered on the estimated position whose radius is one seventh of the sum of the length (L in the figure) and the width (W in the figure) of the current image frame, and the second candidate region is the region of the current image frame other than the first candidate region.
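The square-region scheme of fig. 1B can be sketched as follows (an illustrative helper; clipping the region to the frame border, and representing the second candidate region as a membership test rather than an explicit shape, are added assumptions):

```python
def candidate_regions(frame_w, frame_h, est, frac=1/6):
    """Split the frame into a first candidate region (a square of side
    frac*(W+H) centered on the estimated position `est`, clipped to the
    frame) and a second candidate region (everything else)."""
    side = frac * (frame_w + frame_h)
    half = side / 2
    ex, ey = est
    # first candidate region as (x0, y0, x1, y1), clipped to the frame
    first = (max(0, ex - half), max(0, ey - half),
             min(frame_w, ex + half), min(frame_h, ey + half))
    def in_first(pt):
        x, y = pt
        return first[0] <= x <= first[2] and first[1] <= y <= first[3]
    # second candidate region = frame minus the first candidate region
    def in_second(pt):
        x, y = pt
        return 0 <= x <= frame_w and 0 <= y <= frame_h and not in_first(pt)
    return first, in_first, in_second
```

For a 120 × 60 frame with the estimated position at its center (60, 30), the side length is (1/6)·180 = 30, so the first candidate region spans (45, 15) to (75, 45).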
Optionally, the determining, by the electronic device, a first candidate region and a second candidate region in the current image frame according to the estimated position includes: the electronic device determines a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the estimated position; the electronic device determines a second region in the current image frame as a second reference candidate region, the second region being a region within the first region that has a smaller area than the first region and includes the estimated position; and the electronic device determines the region of the current image frame other than the second reference candidate region as the second candidate region.
For example, referring to fig. 1D and fig. 1E, fig. 1D is a schematic diagram of a first candidate region in a current image frame according to an embodiment of the present disclosure, and fig. 1E is a schematic diagram of the second candidate region corresponding to fig. 1D. As shown in fig. 1D, ln is the estimated position and A_ln is the first candidate region, a square region centered on the estimated position whose side length is one quarter of the sum of the length (L in the figure) and the width (W in the figure) of the current image frame. As shown in fig. 1E, ln is the estimated position, A_1 is the second reference candidate region (the region within the dashed box in the figure), and A_2 is the second candidate region; A_1 is a square region centered on the estimated position whose side length is one eighth of the sum of the length and the width of the current image frame, and A_2 is the region of the current image frame outside A_1.
As can be seen, in this example, the electronic device can determine different candidate regions according to the estimated position and then search for the pupil image of the target user within those candidate regions, so that the amount of calculation is greatly reduced in the case of consecutive frames.
S106, the electronic equipment determines the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region.
Optionally, the determining, by the electronic device, the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region includes: the electronic equipment judges whether a first candidate region in the current image frame has a pupil image of the target user; if the first candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the first candidate region is the pupil reference position of the current image frame; if the first candidate area does not have the pupil image of the target user, judging whether a second candidate area in the current image frame has the pupil image of the target user; if the second candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the second candidate region is the pupil reference position of the current image frame; if the pupil image of the target user does not exist in the second candidate region, determining that the pupil image of the target user does not exist in the current image frame, and determining that the estimated position is the pupil reference position of the current image frame.
For example, referring to fig. 1F, fig. 1F is a schematic diagram of an estimated position and a real position of a pupil image of a target user in a current image frame according to an embodiment of the present disclosure. As shown in fig. 1F, A_ln is the first candidate region, ln is the estimated position, and lm is the real position. When positioning the pupil image of the target user in the current image frame, the electronic device finds the pupil image at lm within A_ln and determines the position lm to be the pupil reference position of the current image frame, providing a reference for locating the pupil image of the target user in the next image frame. It can be seen that, in this case, the electronic device only needs to perform image data processing on the first candidate region, and the amount of calculation is greatly reduced when continuously processing multiple image frames.
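The search order of S106, with the estimated position as the final fallback, can be sketched as follows (the detector is injected as a callable returning a position or None; all names are illustrative, not from the patent):

```python
def locate_pupil(first_region, second_region, est, find):
    """Two-stage search: look for the pupil image in the first candidate
    region, then fall back to the second; if neither contains it,
    return the estimated position as the pupil reference position.
    `find` is any detector callable returning a position or None."""
    pos = find(first_region)
    if pos is not None:          # pupil found in the first candidate region
        return pos
    pos = find(second_region)
    if pos is not None:          # pupil found in the second candidate region
        return pos
    return est                   # no pupil found: keep the estimated position
```

Note that in the common case the detector only ever runs on the small first candidate region, which is where the claimed computation savings come from.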
Further, the determining, by the electronic device, whether a pupil image of the target user exists in a first candidate region in the current image frame includes: the electronic equipment acquires a gray value corresponding to each pixel point in a plurality of pixel points in the first candidate region; the electronic equipment determines a plurality of target pixel points of which the gray values are smaller than a preset gray threshold value; the electronic equipment determines a plurality of target areas in the first candidate area according to the target pixel points; the electronic device determining an area of each of the plurality of target regions; the electronic device performs the following for each of the plurality of target regions: the electronic equipment judges whether the area of the currently processed target area falls into a preset area range or not; if the area of the currently processed target area falls into the preset area range, determining the currently processed target area as the pupil image of the target user; and if the area of the currently processed target area does not fall into the preset area range, determining that the pupil image of the target user does not exist in the currently processed target area.
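The grey-value test above can be sketched as follows (a simplified illustration assuming 4-connected target regions and pixel-count area, neither of which the text fixes; returning the centroid of the accepted region is also an added assumption):

```python
def detect_pupil(gray, thresh, area_range):
    """Keep pixels darker than `thresh`, group them into 4-connected
    target regions, and accept a region whose pixel-count area falls
    inside `area_range` = (lo, hi). `gray` is a 2-D list of grey
    values. Returns the centroid (cx, cy) of the first accepted
    region, or None if no region qualifies."""
    h, w = len(gray), len(gray[0])
    seen = [[False] * w for _ in range(h)]
    lo, hi = area_range
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or gray[sy][sx] >= thresh:
                continue
            # flood-fill one connected dark (target) region
            stack, region = [(sy, sx)], []
            seen[sy][sx] = True
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and gray[ny][nx] < thresh:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if lo <= len(region) <= hi:   # area falls in the preset range
                cy = sum(p[0] for p in region) / len(region)
                cx = sum(p[1] for p in region) / len(region)
                return (cx, cy)
    return None
```

A dark region whose area falls outside the preset range (e.g. an eyebrow or a shadow) is rejected, which is the point of gating on area rather than darkness alone.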
Wherein the preset area range is determined by the electronic device executing the following operations: the electronic equipment acquires the distance and the angle between the electronic equipment and the target user; and the electronic equipment obtains a preset area range according to the distance and the angle.
The electronic device may obtain the preset area range according to the distance and the angle by feeding the distance and the angle into a pre-trained artificial intelligence model, which outputs the preset area range corresponding to that distance and angle.
Similarly, the electronic device determines whether the pupil image of the target user exists in the second candidate region in the current image frame in the same manner as the electronic device determines whether the pupil image of the target user exists in the first candidate region in the current image frame, and details are not repeated here.
It can be seen that, in the embodiment of the present application, the electronic device obtains a current image frame of a target user and determines whether it is the first image frame collected; if not, it determines an estimated motion trajectory of the pupil image of the target user from the previous image frame to the current image frame, determines an estimated position of the pupil image in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame, determines a first candidate region and a second candidate region in the current image frame according to the estimated position, and finally determines the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region. In this way, the electronic device can obtain the motion track from the previous image frame to the current image frame from its own motion information and obtain the pupil position in the current image frame based on that track and the pupil position in the previous image frame, so that pupil positioning proceeds continuously, the amount of calculation over multiple image frames is reduced, and eye tracking efficiency is improved.
In one possible example, after the electronic device determines whether the current image frame is the first image frame acquired, the method further includes: if the current image frame is the first image frame, determining the image position of the pupil image of the target user in the current image frame as the pupil reference position of the current image frame.
A specific implementation of determining the image position of the pupil image of the target user in the current image frame as the pupil reference position of the current image frame is as follows: the electronic device performs a full-image search on the current image frame to determine the image position of the pupil image of the target user in the current image frame, and takes that image position as the pupil reference position of the current image frame. Determining the image position of the pupil image by a full-image search is a relatively mature prior art and is not described in detail herein.
In this example, the electronic device can perform a full-image search on the first image frame to determine the image position of the pupil image of the target user in the current image frame.
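As an illustrative sketch of such a full-image search (not the patented implementation), one could threshold the grayscale frame and take the centroid of the dark pixels as the pupil position; the function name, threshold value, and toy 5×5 frame below are assumptions:

```python
def full_image_pupil_search(gray, threshold=50):
    """Sketch of a full-image search: the pupil is taken to be the
    centroid of all pixels darker than `threshold`. Returns (row, col)
    or None if no sufficiently dark pixel is found."""
    dark = [(r, c) for r, row in enumerate(gray)
                   for c, v in enumerate(row) if v < threshold]
    if not dark:
        return None
    return (sum(r for r, _ in dark) / len(dark),
            sum(c for _, c in dark) / len(dark))

frame = [[200] * 5 for _ in range(5)]
frame[2][3] = 10  # a single dark "pupil" pixel
print(full_image_pupil_search(frame))  # → (2.0, 3.0)
```

Production code would typically run this over every pixel of the first frame only, exactly because this exhaustive pass is what the candidate-region scheme of later frames avoids.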
Consistent with the embodiment shown in fig. 1A, referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 2, the electronic device 200 includes an application processor 210, a memory 220, a communication interface 230, and one or more programs 221, where the one or more programs 221 are stored in the memory 220 and configured to be executed by the application processor 210, and the one or more programs 221 include instructions for performing the following steps:
acquiring a current image frame of a target user;
judging whether the current image frame is the collected first image frame;
if the current image frame is not the first image frame, determining an estimated motion track of the pupil image of the target user from the previous image frame to the current image frame;
determining the estimated position of the pupil image of the target user in the current image frame according to the estimated motion track and the pupil reference position of the previous image frame, wherein the pupil reference position of the previous image frame is the position of the pupil image of the target user in the previous image frame;
determining a first candidate region and a second candidate region in the current image frame according to the estimated position, wherein, of the first candidate region and the second candidate region, only the first candidate region comprises the estimated position;
and determining the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region.
It can be seen that, in this embodiment of the present application, the electronic device acquires a current image frame of a target user and then determines whether the current image frame is the first image frame acquired; if the current image frame is not the first image frame, the electronic device determines an estimated motion trajectory of the pupil image of the target user from the previous image frame to the current image frame, determines an estimated position of the pupil image of the target user in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame, determines a first candidate region and a second candidate region in the current image frame according to the estimated position, and finally determines the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region. In this way, the electronic device can obtain the motion trajectory from the previous image frame to the current image frame from its own motion information and obtain the pupil position in the current image frame based on that trajectory and the pupil position in the previous image frame, so that pupil positioning is performed continuously, the amount of calculation over multiple image frames is reduced, and eye-tracking efficiency is improved.
In one possible example, in terms of determining an estimated motion trajectory of the pupil imagery of the target user from a previous image frame to a current image frame, the instructions of the one or more programs 221 are specifically configured to perform the following steps: acquiring acceleration data of the electronic equipment within a preset time interval, wherein the preset time interval is the time interval from the acquisition of a previous image frame to the acquisition of the current image frame; determining the acceleration of the pupil image of the target user according to the acceleration data of the electronic equipment; determining the movement distance of the pupil image of the target user from a previous image frame to a current image frame according to the acceleration of the pupil image of the target user and the preset time interval; and determining the movement direction of the pupil image of the target user from the previous image frame to the current image frame according to the acceleration.
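The trajectory steps above can be sketched as follows. Assuming the device starts each interval at rest, the movement distance follows d = ½at²; the function name, the start-from-rest assumption, and the sign conventions are illustrative, not from the patent:

```python
import math

def estimate_motion(accel_xy, dt):
    """Estimated motion over the preset time interval dt.
    Distance assumes the device is at rest at the start of the
    interval (d = 1/2 * a * t^2); direction is the angle of the
    acceleration vector in the image plane. Both conventions are
    simplifying assumptions."""
    ax, ay = accel_xy
    a = math.hypot(ax, ay)          # magnitude of the acceleration
    distance = 0.5 * a * dt ** 2    # movement distance over dt
    direction = math.atan2(ay, ax)  # movement direction, in radians
    return distance, direction

d, theta = estimate_motion((3.0, 4.0), 0.1)  # a = 5.0
```

A real device would integrate the accelerometer samples over the interval rather than assume a constant acceleration from rest.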
In one possible example, in the aspect of determining the estimated position of the pupil image of the target user in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame, the instructions of the one or more programs 221 are specifically configured to perform the following steps: matching the current image frame with the previous image frame, and determining a first position of a pupil reference position of the previous image frame on the current image frame; and determining the estimated position of the pupil image of the target user in the current image frame according to the movement distance, the movement direction and the first position.
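A minimal sketch of combining the matched first position with the movement distance and direction might look as follows; the assumption that the pupil image shifts opposite to the device's motion (hence the minus signs) is illustrative:

```python
import math

def estimate_pupil_position(first_pos, distance, direction):
    """Shift the matched reference position by the estimated motion.
    The pupil image is assumed to move opposite to the device, hence
    the minus signs; this sign choice is an assumption."""
    x, y = first_pos
    return (x - distance * math.cos(direction),
            y - distance * math.sin(direction))

print(estimate_pupil_position((100.0, 50.0), 10.0, 0.0))  # → (90.0, 50.0)
```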
In one possible example, the instructions of the one or more programs 221 are specifically configured to, in said determining a first candidate region and a second candidate region in the current image frame based on the estimated position, perform the steps of: determining a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the first location; determining a region in the current image frame other than the first candidate region as the second candidate region.
In one possible example, in the aspect of determining the first candidate region and the second candidate region in the current image frame according to the estimated position, the instructions of the one or more programs 221 are specifically configured to perform the following steps: determining a sum of a length and a width of the current image frame; determining a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the first position; determining a second region in the current image frame as a second reference candidate region, the second region being a region within the first region that has a smaller area than the first region and includes the first position; and determining a region in the current image frame other than the second reference candidate region as the second candidate region.
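One possible (hypothetical) way to realize the simpler partition — a first candidate region as a window around the first position, with the second candidate region as its complement — is sketched below; the window size `half` and the rectangle convention are assumptions:

```python
def candidate_regions(frame_w, frame_h, pos, half=40):
    """First candidate region: a window of side 2*half clipped to the
    frame and containing `pos`. Second candidate region: everything
    else, returned here as a membership test over pixel coordinates."""
    x, y = pos
    first = (max(0, x - half), max(0, y - half),
             min(frame_w, x + half), min(frame_h, y + half))

    def in_first(px, py):
        x0, y0, x1, y1 = first
        return x0 <= px < x1 and y0 <= py < y1

    # the second candidate region is the complement of `first`
    return first, lambda px, py: not in_first(px, py)

first, in_second = candidate_regions(640, 480, (320, 240))
print(first)  # → (280, 200, 360, 280)
```

By construction only the first candidate region contains the estimated position, matching the constraint stated in the method.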
In one possible example, in terms of the determining the position of the pupil imagery of the target user in the current image frame from the first candidate region and the second candidate region, the instructions of the one or more programs 221 are specifically for performing the steps of: judging whether a first candidate region in the current image frame has a pupil image of the target user; if the first candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the first candidate region is the pupil reference position of the current image frame; if the first candidate area does not have the pupil image of the target user, judging whether a second candidate area in the current image frame has the pupil image of the target user; if the second candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the second candidate region is the pupil reference position of the current image frame; if the pupil image of the target user does not exist in the second candidate region, determining that the pupil image of the target user does not exist in the current image frame, and determining that the estimated position is the pupil reference position of the current image frame.
In one possible example, in the aspect of determining whether the pupil image of the target user exists in the first candidate region in the current image frame, the instructions of the one or more programs 221 are specifically configured to perform the following steps: acquiring a gray value corresponding to each of a plurality of pixel points in the first candidate region; determining, among the plurality of pixel points, a plurality of target pixel points whose gray values are smaller than a preset gray threshold; determining a plurality of target areas in the first candidate region according to the plurality of target pixel points; determining an area of each of the plurality of target areas; and performing the following for each of the plurality of target areas: judging whether the area of the currently processed target area falls into a preset area range; if the area of the currently processed target area falls into the preset area range, determining the currently processed target area as the pupil image of the target user; and if the area of the currently processed target area does not fall into the preset area range, determining that the currently processed target area is not the pupil image of the target user.
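The gray-threshold and area-range check above can be sketched as follows; the 4-connected flood-fill grouping of dark pixels into target areas, and the specific threshold and area range, are illustrative choices, not values from the patent:

```python
def find_pupil(region, gray_threshold=50, area_range=(3, 50)):
    """Threshold the region, group dark pixels into 4-connected
    components (the "target areas"), and accept a component as the
    pupil only if its pixel count falls within `area_range`
    (inclusive). Returns the component's pixel list, or None."""
    rows, cols = len(region), len(region[0])
    dark = {(r, c) for r in range(rows) for c in range(cols)
            if region[r][c] < gray_threshold}
    seen = set()
    for start in dark:
        if start in seen:
            continue
        comp, stack = [], [start]  # flood-fill one connected component
        while stack:
            r, c = stack.pop()
            if (r, c) in seen or (r, c) not in dark:
                continue
            seen.add((r, c))
            comp.append((r, c))
            stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        if area_range[0] <= len(comp) <= area_range[1]:
            return comp  # area falls in the preset range: the pupil
    return None  # no target area passed the area-range check
```

The same routine would then be rerun on the second candidate region if this one returns None, mirroring the fallback order described above.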
In one possible example, the one or more programs 221 further include instructions for performing the steps of: after judging whether the current image frame is the collected first image frame or not, if the current image frame is the first image frame, determining that the image position of the pupil image of the target user in the current image frame is the pupil reference position of the current image frame.
Consistent with the embodiment shown in fig. 1A, fig. 3 is a block diagram of functional units of a pupil location device provided in an embodiment of the present application, where the pupil location device 300 is applied to an electronic device, and includes a processing unit 301 and a communication unit 302, where,
the processing unit 301 is configured to: acquire a current image frame of a target user through the communication unit 302; judge whether the current image frame is the collected first image frame; if the current image frame is not the first image frame, acquire acceleration data of the electronic device within a preset time interval through the communication unit, where the preset time interval is the time interval from the acquisition of the previous image frame to the acquisition of the current image frame; determine the acceleration of the pupil image of the target user according to the acceleration data of the electronic device; determine the movement distance of the pupil image of the target user from the previous image frame to the current image frame according to the acceleration and the preset time interval, and determine the movement direction of the pupil image of the target user from the previous image frame to the current image frame according to the acceleration; determine the estimated position of the pupil image of the target user in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame, where the pupil reference position of the previous image frame is the position of the pupil image of the target user in the previous image frame; determine a first candidate region and a second candidate region in the current image frame according to the estimated position, where the first candidate region comprises the estimated position and the second candidate region does not comprise the estimated position; and determine the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region.
The apparatus 300 may further include a storage unit 303 for storing program codes and data of the electronic device. The processing unit 301 may be a processor, the communication unit 302 may be an internal communication interface, and the storage unit 303 may be a memory.
It can be seen that, in this embodiment of the present application, the electronic device acquires a current image frame of a target user and then determines whether the current image frame is the first image frame acquired; if the current image frame is not the first image frame, the electronic device determines an estimated motion trajectory of the pupil image of the target user from the previous image frame to the current image frame, determines an estimated position of the pupil image of the target user in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame, determines a first candidate region and a second candidate region in the current image frame according to the estimated position, and finally determines the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region. In this way, the electronic device can obtain the motion trajectory from the previous image frame to the current image frame from its own motion information and obtain the pupil position in the current image frame based on that trajectory and the pupil position in the previous image frame, so that pupil positioning is performed continuously, the amount of calculation over multiple image frames is reduced, and eye-tracking efficiency is improved.
In a possible example, in terms of determining an estimated motion trajectory of the pupil image of the target user from a previous image frame to a current image frame, the processing unit 301 is specifically configured to: acquiring acceleration data of the electronic equipment within a preset time interval, wherein the preset time interval is the time interval from the acquisition of a previous image frame to the acquisition of the current image frame; determining the acceleration of the pupil image of the target user according to the acceleration data of the electronic equipment; determining the movement distance of the pupil image of the target user from a previous image frame to a current image frame according to the acceleration of the pupil image of the target user and the preset time interval; and determining the movement direction of the pupil image of the target user from the previous image frame to the current image frame according to the acceleration.
In a possible example, in terms of determining the estimated position of the pupil image of the target user in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame, the processing unit 301 is specifically configured to: matching the current image frame with the previous image frame, and determining a first position of a pupil reference position of the previous image frame on the current image frame; and determining the estimated position of the pupil image of the target user in the current image frame according to the movement distance, the movement direction and the first position.
In one possible example, in the aspect of determining the first candidate region and the second candidate region in the current image frame according to the estimated position, the processing unit 301 is specifically configured to: determining a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the first location; determining a region in the current image frame other than the first candidate region as the second candidate region.
In one possible example, in the aspect of determining the first candidate region and the second candidate region in the current image frame according to the estimated position, the processing unit 301 is specifically configured to: determine a sum of a length and a width of the current image frame; determine a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the first position; determine a second region in the current image frame as a second reference candidate region, the second region being a region within the first region that has a smaller area than the first region and includes the first position; and determine a region in the current image frame other than the second reference candidate region as the second candidate region.
In one possible example, in terms of the determining the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region, the processing unit 301 is specifically configured to: judging whether a first candidate region in the current image frame has a pupil image of the target user; if the first candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the first candidate region is the pupil reference position of the current image frame; if the first candidate area does not have the pupil image of the target user, judging whether a second candidate area in the current image frame has the pupil image of the target user; if the second candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the second candidate region is the pupil reference position of the current image frame; if the pupil image of the target user does not exist in the second candidate region, determining that the pupil image of the target user does not exist in the current image frame, and determining that the estimated position is the pupil reference position of the current image frame.
In one possible example, in the aspect of determining whether the pupil image of the target user exists in the first candidate region in the current image frame, the processing unit 301 is specifically configured to: acquire a gray value corresponding to each of a plurality of pixel points in the first candidate region; determine, among the plurality of pixel points, a plurality of target pixel points whose gray values are smaller than a preset gray threshold; determine a plurality of target areas in the first candidate region according to the plurality of target pixel points; determine an area of each of the plurality of target areas; and perform the following for each of the plurality of target areas: judge whether the area of the currently processed target area falls into a preset area range; if the area of the currently processed target area falls into the preset area range, determine the currently processed target area as the pupil image of the target user; and if the area of the currently processed target area does not fall into the preset area range, determine that the currently processed target area is not the pupil image of the target user.
In one possible example, the processing unit 301 is further configured to: after judging whether the current image frame is the collected first image frame or not, if the current image frame is the first image frame, determining that the image position of the pupil image of the target user in the current image frame is the pupil reference position of the current image frame.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program causes a computer to execute part or all of the steps of any one of the methods described in the method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any of the methods described in the method embodiments. The computer program product may be a software installation package, and the computer includes an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, Read-only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiment of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, which have wireless communication functions, and various forms of User Equipment (UE), Mobile Stations (MS), terminals (terminal device), and the like.

Claims (11)

1. A pupil positioning method is applied to an electronic device, and comprises the following steps:
acquiring a current image frame of a target user;
judging whether the current image frame is the collected first image frame;
if the current image frame is not the first image frame, determining an estimated motion track of the pupil image of the target user from the previous image frame to the current image frame;
determining the estimated position of the pupil image of the target user in the current image frame according to the estimated motion track and the pupil reference position of the previous image frame, wherein the pupil reference position of the previous image frame is the position of the pupil image of the target user in the previous image frame;
determining a first candidate region and a second candidate region in the current image frame according to the estimated position, wherein, of the first candidate region and the second candidate region, only the first candidate region comprises the estimated position;
and determining the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region.
2. The method of claim 1, wherein the determining the estimated motion trajectory of the pupil image of the target user from a previous image frame to a current image frame comprises:
acquiring acceleration data of the electronic equipment within a preset time interval, wherein the preset time interval is the time interval from the acquisition of a previous image frame to the acquisition of the current image frame;
determining the acceleration of the pupil image of the target user according to the acceleration data of the electronic equipment;
determining the movement distance of the pupil image of the target user from a previous image frame to a current image frame according to the acceleration of the pupil image of the target user and the preset time interval;
and determining the movement direction of the pupil image of the target user from the previous image frame to the current image frame according to the acceleration.
3. The method of claim 2, wherein determining the estimated position of the pupil image of the target user in the current image frame according to the estimated motion trajectory and the pupil reference position of the previous image frame comprises:
matching the current image frame with the previous image frame, and determining a first position of a pupil reference position of the previous image frame on the current image frame;
and determining the estimated position of the pupil image of the target user in the current image frame according to the movement distance, the movement direction and the first position.
4. The method of claim 3, wherein said determining a first candidate region and a second candidate region in the current image frame based on the estimated position comprises:
determining a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the first location;
determining a region in the current image frame other than the first candidate region as the second candidate region.
5. The method of claim 3, wherein said determining a first candidate region and a second candidate region in the current image frame based on the estimated position comprises: determining a sum of a length and a width of the current image frame;
determining a first region in the current image frame as the first candidate region, the first region being a region adjacent to and including the first location;
determining a second region in the current image frame as a second reference candidate region, the second region being a region within the first region that has a smaller area than the first region and includes the first position;
determining a region in the current image frame other than the second reference candidate region as the second candidate region.
6. The method according to any one of claims 1-5, wherein said determining the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region comprises:
judging whether a first candidate region in the current image frame has a pupil image of the target user;
if the first candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the first candidate region is the pupil reference position of the current image frame;
if the first candidate area does not have the pupil image of the target user, judging whether a second candidate area in the current image frame has the pupil image of the target user;
if the second candidate region has the pupil image of the target user, determining that the image position of the pupil image of the target user in the second candidate region is the pupil reference position of the current image frame;
if the pupil image of the target user does not exist in the second candidate region, determining that the pupil image of the target user does not exist in the current image frame, and determining that the estimated position is the pupil reference position of the current image frame.
7. The method of claim 6, wherein said determining whether the first candidate region of the current image frame contains the pupil image of the target user comprises:
acquiring a gray value of each of a plurality of pixel points in the first candidate region;
determining, among the plurality of pixel points, a plurality of target pixel points whose gray values are smaller than a preset gray threshold;
determining a plurality of target regions in the first candidate region according to the plurality of target pixel points;
determining an area of each of the plurality of target regions;
performing the following for each of the plurality of target regions:
determining whether the area of the currently processed target region falls within a preset area range;
if the area of the currently processed target region falls within the preset area range, determining the currently processed target region to be the pupil image of the target user;
and if the area of the currently processed target region does not fall within the preset area range, determining that the currently processed target region is not the pupil image of the target user.
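The threshold-and-area test of claim 7 can be sketched with a flood fill over 4-connected dark pixels; the gray threshold, the area range, and the 4-connectivity below are illustrative assumptions, since the claim only names "a preset gray threshold" and "a preset area range":

```python
def detect_pupil_regions(gray, thresh=50, area_range=(2, 200)):
    """Sketch of claim 7: threshold dark pixels, group them into
    4-connected target regions, and keep regions whose pixel count
    (area) falls within the preset range."""
    h, w = len(gray), len(gray[0])
    dark = [[gray[r][c] < thresh for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    pupils = []
    for r in range(h):
        for c in range(w):
            if dark[r][c] and not seen[r][c]:
                # flood-fill one connected target region
                stack, region = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and dark[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area_range[0] <= len(region) <= area_range[1]:
                    pupils.append(region)  # area within the preset range
    return pupils
```

The area gate is what rejects small dark specks (eyelashes, noise) and large dark patches (hair, shadows) that also pass the gray threshold.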
8. The method of claim 1, wherein after determining whether the current image frame is the first captured image frame, the method further comprises:
if the current image frame is the first captured image frame, determining the image position of the pupil image of the target user in the current image frame as the pupil reference position of the current image frame.
9. A pupil positioning device, applied to an electronic device, the device comprising a processing unit and a communication unit, wherein
the processing unit is configured to acquire a current image frame of a target user through the communication unit; determine whether the current image frame is the first captured image frame; if the current image frame is not the first captured image frame, acquire, through the communication unit, acceleration data of the electronic device within a preset time interval, the preset time interval being the time interval from the acquisition of the previous image frame to the acquisition of the current image frame; determine an estimated position of the pupil image of the target user in the current image frame according to the acceleration data, the preset time interval, and the pupil reference position of the previous image frame; determine a first candidate region and a second candidate region in the current image frame according to the estimated position, the first candidate region including the estimated position and the second candidate region not including the estimated position; and determine the position of the pupil image of the target user in the current image frame according to the first candidate region and the second candidate region.
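The prediction step attributed to the processing unit (deriving the estimated position from the acceleration data, the frame interval, and the previous reference position) could follow simple kinematics. Everything below is an assumption for illustration: the displacement formula s = v0·t + a·t²/2, the zero initial velocity, the opposite-direction sign convention, and the pixel-per-meter scale are not stated in the patent:

```python
def estimate_pupil_position(prev_pos, accel, dt,
                            v0=(0.0, 0.0), px_per_m=4000.0):
    """Sketch of the claim-9 prediction step: project the previous
    pupil reference position forward over the frame interval dt using
    the device acceleration (accel, in m/s^2).  v0 and px_per_m are
    assumed parameters, not from the patent."""
    x, y = prev_pos
    ax, ay = accel
    # device motion shifts the eye image in the opposite direction
    dx = -(v0[0] * dt + 0.5 * ax * dt * dt) * px_per_m
    dy = -(v0[1] * dt + 0.5 * ay * dt * dt) * px_per_m
    return (x + dx, y + dy)
```

With zero acceleration the estimate degenerates to the previous reference position, which matches the intuition that a stationary device should predict a stationary pupil image.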
10. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the method of any one of claims 1-8.
11. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method of any one of claims 1-8.
CN201911063691.9A 2019-10-31 2019-10-31 Pupil positioning method and related device and product Pending CN112749604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911063691.9A CN112749604A (en) 2019-10-31 2019-10-31 Pupil positioning method and related device and product

Publications (1)

Publication Number Publication Date
CN112749604A true CN112749604A (en) 2021-05-04

Family

ID=75645213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911063691.9A Pending CN112749604A (en) 2019-10-31 2019-10-31 Pupil positioning method and related device and product

Country Status (1)

Country Link
CN (1) CN112749604A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113729616A (en) * 2021-09-01 2021-12-03 中国科学院上海微***与信息技术研究所 Method and device for determining pupil center position data and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942542A (en) * 2014-04-18 2014-07-23 重庆卓美华视光电有限公司 Human eye tracking method and device
CN104123549A (en) * 2014-07-30 2014-10-29 中国人民解放军第三军医大学第二附属医院 Eye positioning method for real-time monitoring of fatigue driving
US20160132726A1 (en) * 2014-05-27 2016-05-12 Umoove Services Ltd. System and method for analysis of eye movements using two dimensional images
CN107784280A (en) * 2017-10-18 2018-03-09 张家港全智电子科技有限公司 A kind of dynamic pupil tracking method

Similar Documents

Publication Publication Date Title
CN107358241B (en) Image processing method, image processing device, storage medium and electronic equipment
CN107590474B (en) Unlocking control method and related product
CN107909104B (en) Face clustering method and device for pictures and storage medium
CN108024065B (en) Terminal shooting method, terminal and computer readable storage medium
CN107622243B (en) Unlocking control method and related product
CN105282547B (en) A kind of bit rate control method and device of Video coding
CN111047622B (en) Method and device for matching objects in video, storage medium and electronic device
CN106529406A (en) Method and device for acquiring video abstract image
CN109960969A (en) The method, apparatus and system that mobile route generates
CN113657195A (en) Face image recognition method, face image recognition equipment, electronic device and storage medium
CN109522814A (en) A kind of target tracking method and device based on video data
CN111881846B (en) Image processing method, image processing apparatus, image processing device, image processing apparatus, storage medium, and computer program
CN112749604A (en) Pupil positioning method and related device and product
CN111105434A (en) Motion trajectory synthesis method and electronic equipment
US10803610B2 (en) Collaborative visual enhancement devices
CN111445442A (en) Crowd counting method and device based on neural network, server and storage medium
CN112446254A (en) Face tracking method and related device
CN109657526B (en) Intelligent picture cutting method and system based on face recognition
CN116630598B (en) Visual positioning method and device under large scene, electronic equipment and storage medium
CN113657154A (en) Living body detection method, living body detection device, electronic device, and storage medium
CN110223219B (en) 3D image generation method and device
CN110933314B (en) Focus-following shooting method and related product
US20220164963A1 (en) Information processing device, estimation method, and nontransitory computer readable medium
CN114930319A (en) Music recommendation method and device
CN116309729A (en) Target tracking method, device, terminal, system and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination