CN115494961B - Novel interactive surrounding intelligent display equipment based on face recognition - Google Patents
- Publication number
- CN115494961B CN115494961B CN202211437166.0A CN202211437166A CN115494961B CN 115494961 B CN115494961 B CN 115494961B CN 202211437166 A CN202211437166 A CN 202211437166A CN 115494961 B CN115494961 B CN 115494961B
- Authority
- CN
- China
- Prior art keywords
- face
- image
- tracking
- face image
- infrared
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
Abstract
The invention provides a novel interactive surrounding intelligent display device based on face recognition, relating to the technical field of intelligent projection display. The intelligent display device comprises a face tracking module, a display module, a mode switching module and a master control processor. The face tracking module comprises an infrared tracking unit for acquiring infrared images of the display space and a face tracking unit for acquiring face images, and is configured with a face tracking strategy that comprises: analyzing the infrared image of the display space to obtain the position of the human body. By acquiring the user's position and analyzing the user's face orientation, the invention can adjust the projection direction in real time according to the user's body movement and orientation, solving the problem that existing projection display devices lack interactive intelligence.
Description
Technical Field
The invention belongs to the technical field of intelligent projection display, and particularly relates to a novel interactive surrounding intelligent display device based on face recognition.
Background
Projection display is a method or apparatus that controls a light source with planar image information and uses an optical system and a projection space to enlarge and display the image on a projection screen. It is widely used in daily life, for example in teaching, where projection display is now standard: the screen is large, the display is clear, and teaching is convenient, so projection display meets the demand for large-screen display. Face recognition is a biometric technology that identifies a person based on facial feature information. It comprises a series of related technologies, also commonly called facial recognition, in which a camera or video camera collects images or video streams containing faces, automatically detects and tracks the faces in the images, and then recognizes the detected faces.
Existing projection display technology, whether used for teaching or entertainment, usually adopts a fixed projection mode. This mode interacts poorly with the user: the projection angle is difficult to change according to the user's movement or orientation, so the interactive display experience is not intelligent enough. What the prior art lacks, therefore, is an interactive surrounding intelligent display device that changes the projection according to the user's actual state of use.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a novel interactive surrounding intelligent display device based on face recognition that obtains the user's position, analyzes the user's face orientation, and adjusts the projection direction in real time according to the user's body movement and orientation, thereby solving the problem that existing projection display devices lack interactive intelligence.
To achieve this purpose, the invention is realized by the following technical scheme: a novel interactive surrounding intelligent display device based on face recognition comprises a face tracking module, a display module, a mode switching module and a master control processor;
the face tracking module comprises an infrared tracking unit for acquiring infrared images of the display space and a face tracking unit for acquiring face images; the face tracking module is configured with a face tracking strategy, which comprises: analyzing an infrared image of the display space to obtain the human body position, setting a face tracking reference point according to the human body position, and acquiring a face image based on the coordinates of the face tracking reference point;
the display module comprises a projection display device and an angle adjusting device; the projection display device is fixed in the middle of the top of the display space by the angle adjusting device, which adjusts the horizontal projection angle of the display device;
the mode switching module comprises a remote controller and is connected to the master control processor by radio signal; the remote controller is provided with a projection tracking switch and a projection fixing switch, and the mode switching module is configured with a mode switching strategy: enter the projection tracking mode when the projection tracking switch is turned on, and enter the projection fixing mode when the projection fixing switch is turned on;
the master control processor is configured with a master control strategy, which comprises: when the projection tracking mode is entered, analyzing the face image to obtain the face orientation, determining the projection position from the face orientation, and aiming the projection display device at the projection position through the angle adjusting device;
when the projection fixing mode is entered, the projection position of the projection display device is fixed through the angle adjusting device.
Further, the infrared tracking unit comprises a first infrared acquisition camera and a second infrared acquisition camera; the first infrared acquisition camera is arranged at the top of the display space, the second infrared acquisition camera is arranged on a side of the display space, and the shooting directions of the two cameras are perpendicular to each other.
Further, the face tracking strategy further comprises an infrared image analysis sub-strategy, which comprises: acquiring a first infrared image through the first infrared acquisition camera and a second infrared image through the second infrared acquisition camera;
setting an X axis and a Y axis for the first infrared image and dividing the first infrared image into pixel points at a first pixel ratio, with the division unit of the X and Y axes equal to the side length of one pixel point; acquiring the region of the first infrared image whose temperatures lie within the first human body reference temperature interval and setting it as the basic capture region; selecting a pixel point of the basic capture region as the basic capture midpoint and acquiring its X-axis and Y-axis coordinates;
setting a Y axis and a Z axis for the second infrared image, the Y axis of the second infrared image coinciding with the Y axis of the first infrared image; selecting from the second infrared image a plurality of preselected capture points that share the Y-axis coordinate of the basic capture midpoint and whose temperatures lie within the first human body reference temperature interval; obtaining the maximum Z-axis coordinate of the preselected capture points and computing the Z-axis capture coordinate through the Z-axis capture calculation formula, which is configured as: Zb = α × Zmax; wherein Zb is the Z-axis capture coordinate, Zmax is the maximum Z-axis coordinate among the preselected capture points, and α is the Z-axis transformation ratio of the preselected capture points, with a value range between 0 and 2;
and setting the face tracking reference point: its X-axis coordinate equals the X-axis coordinate of the basic capture midpoint, its Y-axis coordinate equals the Y-axis coordinate of the basic capture midpoint, and its Z-axis coordinate equals the Z-axis capture coordinate.
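The infrared analysis described above can be illustrated with a short sketch. This is a hedged, minimal Python illustration rather than the patent's implementation: the body temperature interval (28–38 °C), the choice of the median pixel as the basic capture midpoint, and the transformation ratio α = 0.95 are assumed values for demonstration only.

```python
import numpy as np

# Assumed first human-body reference temperature interval, in Celsius.
BODY_TEMP_RANGE = (28.0, 38.0)
ALPHA = 0.95  # assumed Z-axis transformation ratio, 0 < alpha < 2


def face_tracking_reference_point(top_ir, side_ir):
    """top_ir: first (top-mounted) infrared image, indexed [y, x];
    side_ir: second (side-mounted) infrared image, indexed [z, y];
    pixel values are temperatures. Returns (x, y, z) or None."""
    # Basic capture region: pixels of the top image inside the body range.
    ys, xs = np.nonzero((top_ir >= BODY_TEMP_RANGE[0]) & (top_ir <= BODY_TEMP_RANGE[1]))
    if len(xs) == 0:
        return None
    # Take one pixel of the region as the basic capture midpoint (median here).
    x_mid, y_mid = int(np.median(xs)), int(np.median(ys))
    # Preselected capture points: same Y coordinate, body temperature, side image.
    col = side_ir[:, y_mid]
    zs = np.nonzero((col >= BODY_TEMP_RANGE[0]) & (col <= BODY_TEMP_RANGE[1]))[0]
    if len(zs) == 0:
        return None
    z_b = ALPHA * zs.max()  # Z-axis capture formula: Zb = alpha * Zmax
    return (x_mid, y_mid, z_b)
```

Using the median of the warm region as the midpoint is one reasonable choice; the claim only requires selecting some pixel point of the basic capture region.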
Furthermore, the face tracking unit comprises four groups of face tracking cameras arranged respectively on the four sides of the display space, with the shooting angles of adjacent face tracking cameras perpendicular to each other. The face tracking strategy further comprises a face image acquisition sub-strategy, which comprises: acquiring face images aimed at the face tracking reference point through the four groups of face tracking cameras, the shooting area of each face image being larger than the first reference area.
Furthermore, the master control processor is also provided with a face database, which stores iris feature information of the face and the eye distance when the face is shot frontally.
Further, the master control strategy further includes a face orientation analysis sub-strategy, which comprises: labeling the face images acquired by the four groups of face tracking cameras as a first face image, a second face image, a third face image and a fourth face image respectively;
comparing the recognized iris features with the stored iris feature information to obtain the eye distances in the first, second, third and fourth face images respectively;
when two groups of eye distance information are obtained from the four face images, taking the image with the largest eye distance as the basic orientation image and the image with the second largest eye distance as the deviation orientation image;
when one group of eye distance information is obtained from the four face images, taking the image that contains eye distance information as the forward orientation image;
setting the eye distance of a frontal shot of the face as the basic reference eye distance, the eye distance in the basic orientation image as the basic orientation eye distance, and the eye distance in the deviation orientation image as the deviation orientation eye distance; calculating the face deflection angle from these through the deflection angle calculation formula, which is configured as: Rpx = [a1 × (Sjc − (Sjc − Spf)) + a2 × (Sjc − Sjf)] / 2; wherein Rpx is the face deflection angle, Sjc is the basic reference eye distance, Sjf is the basic orientation eye distance, Spf is the deviation orientation eye distance, a1 is a first angle conversion coefficient and a2 is a second angle conversion coefficient.
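The deflection-angle formula image is missing from the translated text; based on the verbal description in the detailed embodiment (two directional estimates, one from each conversion coefficient, averaged together), a hedged reconstruction is Rpx = [a1 × (Sjc − (Sjc − Spf)) + a2 × (Sjc − Sjf)] / 2. The sketch below encodes that reconstruction; the function name and the sample coefficient values are hypothetical.

```python
def face_deflection_angle(sjc, sjf, spf, a1, a2):
    """sjc: basic reference eye distance (frontal shot);
    sjf: eye distance in the basic orientation image;
    spf: eye distance in the deviation orientation image;
    a1, a2: angle conversion coefficients (a1 > 0).
    Averages the two directional deflection estimates."""
    # First estimate: total deviation difference (Sjc minus the reference
    # comparison difference Sjc - Spf) scaled by a1.
    est_from_deviation = a1 * (sjc - (sjc - spf))
    # Second estimate: shrinkage of the basic orientation eye distance
    # relative to the frontal eye distance, scaled by a2.
    est_from_base = a2 * (sjc - sjf)
    return (est_from_deviation + est_from_base) / 2
```

With illustrative values Sjc = 60, Sjf = 50, Spf = 30 and coefficients a1 = 1.0, a2 = 2.0, the averaged estimate comes out to 25 degrees; in practice a1 and a2 would be calibrated against the actual user's eye distance, as the embodiment notes.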
Further, the master control strategy further includes an intelligent display tracking sub-strategy, which comprises: when two groups of eye distance information are obtained from the four face images, setting the side of the display space holding the face tracking camera corresponding to the basic orientation image as the basic orientation side, and the side holding the camera corresponding to the deviation orientation image as the deviation orientation side; the angle adjusting device drives the projection display device to rotate by the face deflection angle from the basic orientation side toward the deviation orientation side;
when one group of eye distance information is obtained from the four face images, setting the side of the display space holding the face tracking camera corresponding to the forward orientation image as the forward orientation side; the angle adjusting device drives the projection display device to rotate toward the forward orientation side.
The beneficial effects of the invention are as follows: first, the infrared tracking unit of the face tracking module acquires an infrared image of the display space, and the face tracking unit acquires the face image; the infrared image is analyzed to obtain the human body position, a face tracking reference point is set according to the human body position, and the face image is acquired based on the coordinates of the reference point. Because the approximate position of the head is acquired first, the efficiency of face tracking and recognition is improved, which in turn improves the timeliness of the device's projection tracking and finally the responsiveness of its display interaction;
the mode switching module is provided with a remote controller, and the mode switching can be carried out through the remote controller: entering a projection tracking mode when a projection tracking switch is started, and entering a projection fixing mode when a projection fixing switch is started; the design can facilitate users to adjust the interaction state in time according to the self use condition;
when the projection tracking mode is entered, the face orientation is obtained by analyzing the face image, the projection position is determined from the face orientation, and the projection display device is aimed at the projection position by the angle adjusting device; when the projection fixing mode is entered, the projection position of the projection display device is fixed by the angle adjusting device; this design improves user-friendliness while ensuring an intelligent interactive experience.
Advantages of additional aspects of the invention will be set forth in part in the description of the embodiments which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a block diagram of the general control processor of the present invention;
fig. 2 is a schematic diagram of acquiring a face image in a display space according to the present invention.
Detailed Description
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Referring to fig. 1, the invention provides a novel interactive surrounding intelligent display device based on face recognition, which acquires the user's position and analyzes the user's face orientation so that the projection direction can be adjusted in real time according to the user's body movement and orientation, solving the problem that existing projection display devices lack interactive intelligence. The display space may be configured as a cylindrical space or as a rectangular space.
The intelligent display device comprises a face tracking module, a display module, a mode switching module and a master control processor. The user switches to the projection tracking mode through the mode switching module; face information is obtained through the face tracking module; the master control processor then analyzes the face information to obtain the face orientation and adjusts the projection position of the display module accordingly. The display module comprises a projection display device and an angle adjusting device; the projection display device is fixed in the middle of the top of the display space by the angle adjusting device, which adjusts the horizontal projection angle of the display device. The projection display device is specifically a projector. The angle adjusting device comprises a connecting seat, an angle adjusting motor and a coupling: the coupling is connected to the output shaft of the angle adjusting motor, the bottom of the coupling carries the projector, and the angle adjusting motor is fixed to the top of the display space through the connecting seat. Rotation of the angle adjusting motor drives the projector to adjust its angle; one end of the motor is provided with an encoder, through which the motor's rotation angle is obtained. The angle adjusting motor is specifically a servo motor.
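The servo-plus-encoder arrangement amounts to commanding an absolute horizontal angle and confirming it against the encoder reading. The following sketch illustrates that closed loop; the motor and encoder callables are hypothetical stand-ins, since the patent specifies hardware, not a software interface.

```python
class AngleAdjuster:
    """Minimal sketch of the angle adjusting device's control loop.

    read_encoder_deg: callable returning the current angle in degrees.
    command_motor_deg: callable commanding an absolute angle in degrees.
    Both are assumed interfaces for illustration.
    """

    def __init__(self, read_encoder_deg, command_motor_deg):
        self._read = read_encoder_deg
        self._command = command_motor_deg

    def rotate_to(self, target_deg, tolerance_deg=0.5):
        # Normalize to [0, 360) so rotations toward any side face wrap correctly.
        target = target_deg % 360.0
        self._command(target)
        # Shortest angular error between encoder reading and target.
        error = abs((self._read() - target + 180.0) % 360.0 - 180.0)
        return error <= tolerance_deg
```

In use, a servo that settles on the commanded angle makes `rotate_to` report success once the encoder confirms the position within tolerance.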
Specifically, the face tracking module comprises an infrared tracking unit and a face tracking unit. The infrared tracking unit acquires infrared images of the display space and comprises a first infrared acquisition camera arranged at the top of the display space and a second infrared acquisition camera arranged on a side of the display space; the shooting directions of the two cameras are perpendicular to each other.
The face tracking unit acquires the face image and comprises four groups of face tracking cameras arranged respectively on the four sides of the display space, with the shooting angles of adjacent face tracking cameras perpendicular to each other.
The face tracking module is configured with a face tracking strategy, and the face tracking strategy comprises the following steps:
step S101, analyzing an infrared image in a display space to obtain a human body position, and setting a human face tracking reference point according to the human body position;
step S102, acquiring a face image based on the coordinates of the face tracking reference point;
the face tracking strategy also comprises an infrared image analysis sub-strategy, and the infrared image analysis sub-strategy comprises the following steps:
step S1011, acquiring a first infrared image through a first infrared acquisition camera, and acquiring a second infrared image through a second infrared acquisition camera;
step S1012, setting an X axis and a Y axis for the first infrared image, and dividing the first infrared image into pixel points according to a first pixel ratio, wherein a division unit of the X axis and the Y axis is equal to a side length of one pixel point;
step S1013, acquiring a region of the temperature interval in the first infrared image within the first human body reference temperature interval, and setting the region as a basic capturing region; selecting a pixel point from the basic capture area as a basic capture midpoint, and acquiring an X-axis coordinate and a Y-axis coordinate of the basic capture midpoint;
step S1014, setting the second infrared imageDetermining a Y axis and a Z axis, wherein the Y axis of the second infrared image is consistent with the Y axis of the first infrared image, selecting a plurality of preselected capture points which have the same Y axis coordinate with the basic capture midpoint and have temperature intervals within the first human body reference temperature interval from the second infrared image, obtaining the maximum value of the Z axis coordinate of the preselected capture points, and solving the Z axis capture coordinate of the maximum value of the Z axis coordinate of the preselected capture points through a Z axis capture calculation formula; the Z-axis capture calculation formula is configured as:(ii) a Wherein Zb is a Z-axis capture coordinate, Z max Is the maximum value of Z-axis coordinate in a plurality of preselected capture points, alpha is the Z-axis conversion ratio of the preselected capture points, the value range of alpha is between 0 and 2, and the value of alpha is 0.95 in the specific setting process, for example, Z is the maximum value of Z-axis coordinate in the preselected capture points max When Zb is 1.8, zb is 1.71;
step S1015, a face tracking reference point is set, the X-axis coordinate of the face tracking reference point is the same as the X-axis coordinate of the basic capture midpoint, the Y-axis coordinate of the face tracking reference point is the same as the Y-axis coordinate of the basic capture midpoint, and the Z-axis coordinate of the face tracking reference point is the same as the Z-axis capture coordinate.
The face tracking strategy also comprises a face image acquisition sub-strategy, and the face image acquisition sub-strategy comprises the following steps:
and step S1021, respectively acquiring face images towards the face tracking reference points through the four groups of face tracking cameras, wherein the shooting area of the face images is larger than the area of the first reference area. The area of the first reference region is larger than three times of the area of a normal face in an actual scene, for example, in an actual shooting process, the actual area corresponding to a shooting region of a face image is larger than 1 square meter, and it can be ensured that image information of the face can be acquired when a certain error exists in the calculation of a face tracking reference point.
The mode switching module comprises a remote controller and is connected to the master control processor by radio signal. The remote controller is provided with a projection tracking switch and a projection fixing switch, and the mode switching module is configured with a mode switching strategy comprising the following steps: Step S201, entering the projection tracking mode when the projection tracking switch is turned on; the projection tracking mode is the intelligent interaction mode, in which the projection direction is adjusted according to the turning of the human body.
Step S202, entering the projection fixing mode when the projection fixing switch is turned on; the projection fixing mode is the conventional mode, in which the projector only needs to keep one fixed projection direction.
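Steps S201 and S202 describe a simple two-state selection driven by the remote controller's switches. The following sketch models it; the names and the "keep current mode when neither switch is pressed" behavior are illustrative assumptions, as the patent only specifies what happens when a switch is turned on.

```python
from enum import Enum


class Mode(Enum):
    TRACKING = "projection tracking"  # Step S201: intelligent interaction mode
    FIXED = "projection fixed"        # Step S202: conventional fixed mode


def switch_mode(tracking_switch_on, fixed_switch_on, current):
    # Each remote-controller switch selects its mode; with neither pressed
    # the device stays in its current mode (assumed behavior).
    if tracking_switch_on:
        return Mode.TRACKING
    if fixed_switch_on:
        return Mode.FIXED
    return current
```

Giving the tracking switch priority when both are pressed is an arbitrary tie-break for the sketch; the patent does not define simultaneous presses.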
The master control processor is also provided with a face database, which stores iris feature information of the face and the eye distance when the face is shot frontally. Referring to fig. 2, the angle β is the face deflection angle. The master control processor is configured with a master control strategy comprising the following steps:
step S301, when entering a projection tracking mode, analyzing based on a face image to obtain a face direction, determining a projection position through the face direction, and projecting the projection display device to the projection position through an angle adjusting device;
step S302, when the projection fixing mode is entered, the projection position of the projection display device is fixed through the angle adjusting device.
The master control strategy also comprises a face orientation analysis sub-strategy, which comprises the following steps: Step S30111, labeling the face images acquired by the four groups of face tracking cameras as a first face image, a second face image, a third face image and a fourth face image respectively;
step S30112, comparing iris feature information with stored iris feature information through iris feature recognition, and then respectively obtaining eye distances in a first face image, a second face image, a third face image and a fourth face image; when iris feature recognition is compared with stored iris feature information, the eye distance may be set to be the distance between the intermediate points of the iris features of the two eye distances or the closest distance of the iris features of the two eyes, as long as the acquisition mode of the eye distance used for comparison is the same.
Step S30113, when two groups of eye distance information are obtained from the four face images, taking the image with the largest eye distance as the basic orientation image and the image with the second largest eye distance as the deviation orientation image. When the face turns sideways, the eye distance in the captured face image is smaller than the eye distance of a frontal shot, and the larger the deflection angle, the smaller the captured eye distance. When face images are compared they must first be scaled to the same size; only then are the compared data valid. When the face deviates between two sides, only two groups of face tracking cameras can completely capture both eyes; if the face directly faces one side, only the face tracking camera on the side it faces can completely capture both eyes;
step S30114, when a set of eye distance information is acquired from the first face image, the second face image, the third face image, and the fourth face image, an image having eye distance information in the first face image, the second face image, the third face image, and the fourth face image is taken as a forward direction orientation image;
step S30115, setting the eye distance when the face is shot in the forward direction as a base reference eye distance, setting the eye distance in the base azimuth image as a base azimuth eye distance, and setting the eye distance in the deviation azimuth image as a deviation azimuth eye distance; calculating the basic reference eye distance, the basic azimuth eye distance and the deviation azimuth eye distance through a deviation angle calculation formula to obtain a human face deviation angle; the deflection angle calculation formula is configured as follows:(ii) a Wherein Rpx is a human face deflection angle, sjc is a basic reference eye distance, sjf is a basic azimuth eye distance, spf is a deflection azimuth eye distance, a1 is a first angle conversion coefficient, and a2 is a second angle conversion coefficient; wherein, the value range of a1 is larger than zero, in a calculation formula of the deflection angle,in partThe meaning of the expression is that the difference value of the basic reference eye distance and the deviation azimuth eye distance is subtracted from the difference value of the basic reference eye distance and the deviation azimuth eye distance, the obtained difference value is set as a reference comparison difference value, the larger the final reference comparison difference value is, the smaller the deviation amplitude of the side face corresponding to the face and the basic azimuth image is, wherein the value of a1 is larger than zero, the difference value obtained by subtracting the reference comparison difference value from the basic range eye distance is set as a total deviation difference value, the smaller the multiplication of the total deviation difference value is, the smaller the deviation amplitude of the side face corresponding to the face and the basic azimuth image is, a1 is used for conversion between the total deviation difference value and the deviation angle, a certain conversion relation exists between the total 
deviation difference value and the deviation angle, and the conversion relation is a1, and the total deviation difference value multiplied by a1 can be used for calculation of the deviation angle of the face; and the deviation angle is also regressed into a deviation angle calculation formula,the part can also be used for calculating the human face deflection angle, because the smaller the difference between the basic reference eye distance and the basic azimuth eye distance, the smaller the deflection amplitude of the side corresponding to the human face and the basic azimuth image, a certain conversion ratio exists between the difference between the basic reference eye distance and the basic azimuth eye distance and the human face deflection angle, the conversion ratio is set as a2, and the more accurate human face deflection angle can be obtained by averaging the human face deflection angles obtained in two directions; the settings of a1 and a2 may be specifically set with reference to the eye distance of the actual user.
The main control strategy also comprises an intelligent display tracking sub-strategy, and the intelligent display tracking sub-strategy comprises the following steps:
step S30121, when two groups of eye distance information are obtained from the first face image, the second face image, the third face image and the fourth face image, setting the side of the display space where the face tracking camera corresponding to the basic azimuth image is located as the basic azimuth side, and setting the side of the display space where the face tracking camera corresponding to the deviation azimuth image is located as the deviation azimuth side; the angle adjusting device drives the projection display device to rotate by the face deflection angle from the basic azimuth side toward the deviation azimuth side; for example, when the face deflection angle is 30 degrees, the projection display device is rotated 30 degrees from the basic azimuth side toward the deviation azimuth side;
step S30122, when one group of eye distance information is obtained from the first face image, the second face image, the third face image and the fourth face image, setting the side of the display space where the face tracking camera corresponding to the forward orientation image is located as the forward orientation side; the angle adjusting device drives the projection display device to rotate toward the forward orientation side.
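The two branches of the tracking sub-strategy can be sketched as a selection function; the side labels, return tuples, and function name are hypothetical conveniences, not part of the patent.

```python
def choose_rotation(eye_distances, deflection_angle):
    """Decide how to steer the projection display device.

    eye_distances: dict mapping each camera side (e.g. "first".."fourth")
    to its measured eye distance, or None when no iris match was found.
    """
    seen = {side: d for side, d in eye_distances.items() if d is not None}
    if len(seen) == 2:
        # Two measurements: the largest eye distance marks the basic azimuth
        # side, the second largest the deviation azimuth side; rotate by the
        # face deflection angle from the former toward the latter.
        base, deviation = sorted(seen, key=seen.get, reverse=True)
        return ("rotate", base, deviation, deflection_angle)
    if len(seen) == 1:
        # One measurement: simply face the forward orientation side.
        (forward,) = seen
        return ("face_side", forward)
    return ("hold",)  # no face detected: keep the current orientation
```

For example, with eye distances of 60 and 40 pixels on the first and second sides, the device rotates by the deflection angle from the first side toward the second.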
The working principle is as follows: the user switches the projection mode through the remote controller, entering the projection tracking mode when the projection tracking switch is turned on and the projection fixing mode when the projection fixing switch is turned on. In the projection tracking mode, the infrared tracking unit of the face tracking module acquires infrared images in the display space and the face tracking unit acquires face images; the infrared images are analyzed to obtain the human body position, a face tracking reference point is set according to the human body position, and face images are acquired based on the coordinates of the face tracking reference point.
In the projection tracking mode, the master control processor analyzes the face images to obtain the face orientation, determines the projection position from that orientation, and projects the projection display device to the projection position through the angle adjusting device; in the projection fixing mode, the projection position of the projection display device is held fixed by the angle adjusting device. This realizes a humanized configuration of the equipment while improving its intelligent interactive display.
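The mode switching behavior described in the working principle can be sketched as a minimal controller; the class, method names, and switch labels are assumptions for illustration only.

```python
class DisplayController:
    """Minimal sketch of remote-controlled projection mode switching."""

    def __init__(self):
        self.mode = "fixed"  # assume the projection fixing mode as default

    def on_remote_switch(self, switch):
        # The projection tracking switch enters tracking mode; the
        # projection fixing switch enters (or returns to) fixed mode.
        if switch == "tracking":
            self.mode = "tracking"
        elif switch == "fixing":
            self.mode = "fixed"

    def step(self, face_position):
        # In tracking mode the angle adjusting device follows the face;
        # in fixed mode the projection position is held constant.
        if self.mode == "tracking" and face_position is not None:
            return ("project_to", face_position)
        return ("hold_position",)
```

A usage sketch: after the tracking switch is pressed, each control step projects toward the latest face position; pressing the fixing switch freezes the projection regardless of further face movement.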
The above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (2)
1. The novel interactive surrounding intelligent display device based on face recognition is characterized by comprising a face tracking module, a display module, a mode switching module and a master control processor;
the human face tracking module comprises an infrared tracking unit and a human face tracking unit, the infrared tracking unit is used for acquiring infrared images in a display space, and the human face tracking unit is used for acquiring human face images; the face tracking module is configured with a face tracking strategy, which comprises: analyzing an infrared image in a display space to obtain a human body position, setting a human face tracking reference point according to the human body position, and acquiring a human face image based on the coordinate of the human face tracking reference point;
the display module comprises a projection display device and an angle adjusting device, the projection display device is fixed in the middle of the top of the display space through the angle adjusting device, and the angle adjusting device is used for adjusting the horizontal projection angle of the projection display device;
the mode switching module comprises a remote controller, the mode switching module is connected with the master control processor through a radio signal, the remote controller is provided with a projection tracking switch and a projection fixing switch, the mode switching module is provided with a mode switching strategy, and the mode switching strategy comprises the following steps: when a projection tracking switch is started, entering a projection tracking mode, and when a projection fixing switch is started, entering a projection fixing mode;
the master control processor is configured with a master control strategy, and the master control strategy comprises the following steps: when entering a projection tracking mode, analyzing based on a human face image to obtain a human face direction, determining a projection position through the human face direction, and projecting the projection display device to the projection position through an angle adjusting device;
when the projection fixing mode is entered, fixing the projection position of the projection display device through the angle adjusting device;
the infrared tracking unit comprises a first infrared acquisition camera and a second infrared acquisition camera, the first infrared acquisition camera is arranged at the top of the display space, the second infrared acquisition camera is arranged on the side surface of the display space, and the shooting directions of the first infrared acquisition camera and the second infrared acquisition camera are perpendicular to each other;
the face tracking strategy further comprises an infrared image analysis sub-strategy, wherein the infrared image analysis sub-strategy comprises the following steps: acquiring a first infrared image through a first infrared acquisition camera, and acquiring a second infrared image through a second infrared acquisition camera;
setting an X axis and a Y axis for the first infrared image, and dividing the first infrared image into pixel points according to a first pixel proportion, wherein the division unit of the X axis and the Y axis is equal to the side length of one pixel point; acquiring a region of a temperature interval in the first infrared image within a first human body reference temperature interval, and setting the region as a basic capturing region; selecting a pixel point from the basic capture area as a basic capture midpoint, and acquiring an X-axis coordinate and a Y-axis coordinate of the basic capture midpoint;
setting a Y axis and a Z axis for the second infrared image, wherein the Y axis of the second infrared image is consistent with the Y axis of the first infrared image; selecting, from the second infrared image, a plurality of pre-selected capture points which have the same Y-axis coordinate as the basic capture midpoint and whose temperature intervals are within the first human body reference temperature interval; obtaining the maximum value of the Z-axis coordinates of the pre-selected capture points, and solving the Z-axis capture coordinate from that maximum value through a Z-axis capture calculation formula; the Z-axis capture calculation formula is configured as: Zb = α × Zmax; wherein Zb is the Z-axis capture coordinate, Zmax is the maximum value of the Z-axis coordinates among the plurality of pre-selected capture points, and α is the Z-axis transformation ratio of the pre-selected capture points, the value range of α being between 0 and 2;
setting a face tracking reference point, wherein the X-axis coordinate of the face tracking reference point is the same as the X-axis coordinate of the basic capturing midpoint, the Y-axis coordinate of the face tracking reference point is the same as the Y-axis coordinate of the basic capturing midpoint, and the Z-axis coordinate of the face tracking reference point is the same as the Z-axis capturing coordinate;
the face tracking unit comprises four groups of face tracking cameras, the four groups of face tracking cameras are respectively arranged on the four side faces of the display space, and the shooting angles of two adjacent face tracking cameras are perpendicular to each other; the face tracking strategy further comprises a face image acquisition sub-strategy, wherein the face image acquisition sub-strategy comprises: respectively acquiring face images toward the face tracking reference point through the four groups of face tracking cameras, wherein the area of the shooting region of each face image is larger than a first reference area;
the master control processor is also provided with a face database, and the face database stores iris characteristic information of a face and an eye distance when the face is shot positively;
the general control strategy also comprises a face orientation analysis sub-strategy, wherein the face orientation analysis sub-strategy comprises the following steps: respectively marking the face images acquired by the four groups of face tracking cameras as a first face image, a second face image, a third face image and a fourth face image;
performing iris feature identification and comparing the identified features with the stored iris feature information, so as to respectively obtain the eye distances in the first face image, the second face image, the third face image and the fourth face image;
when two groups of eye distance information are acquired from a first face image, a second face image, a third face image and a fourth face image, taking the image with the largest eye distance in the first face image, the second face image, the third face image and the fourth face image as a basic azimuth image; taking an image with the second largest eye distance in the first face image, the second face image, the third face image and the fourth face image as a deviation orientation image;
when a group of eye distance information is acquired from a first face image, a second face image, a third face image and a fourth face image, taking an image with the eye distance information in the first face image, the second face image, the third face image and the fourth face image as a forward direction orientation image;
setting the eye distance when the face is photographed head-on as the basic reference eye distance, setting the eye distance in the basic azimuth image as the basic azimuth eye distance, and setting the eye distance in the deviation azimuth image as the deviation azimuth eye distance; calculating the basic reference eye distance, the basic azimuth eye distance and the deviation azimuth eye distance through a deflection angle calculation formula to obtain a human face deflection angle; the deflection angle calculation formula is configured as: Rpx = [a1 × (Sjc − (Sjc − Spf)) + a2 × (Sjc − Sjf)] / 2; wherein Rpx is the human face deflection angle, Sjc is the basic reference eye distance, Sjf is the basic azimuth eye distance, Spf is the deviation azimuth eye distance, a1 is the first angle conversion coefficient, and a2 is the second angle conversion coefficient.
2. A novel interactive surround intelligent display device based on face recognition as claimed in claim 1, wherein the general control strategy further comprises an intelligent display tracking sub-strategy, the intelligent display tracking sub-strategy comprising: when two groups of eye distance information are acquired from a first face image, a second face image, a third face image and a fourth face image, setting the side face of a display space where a face tracking camera corresponding to a basic azimuth image is located as a basic azimuth side face, and setting the side face of the display space where the face tracking camera corresponding to a deviation azimuth image is located as a deviation azimuth side face; the angle adjusting device drives the projection display device to rotate the human face deflection angle from the base azimuth side to the deflection azimuth side;
when a group of eye distance information is acquired from the first face image, the second face image, the third face image and the fourth face image, setting the side face of the display space where the face tracking camera corresponding to the forward direction orientation image is located as a forward direction orientation side face; the projection display device is driven by the angle adjusting device to rotate towards the side face of the forward direction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211437166.0A CN115494961B (en) | 2022-11-17 | 2022-11-17 | Novel interactive surrounding intelligent display equipment based on face recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115494961A CN115494961A (en) | 2022-12-20 |
CN115494961B true CN115494961B (en) | 2023-03-24 |
Family
ID=85115935
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211437166.0A Active CN115494961B (en) | 2022-11-17 | 2022-11-17 | Novel interactive surrounding intelligent display equipment based on face recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115494961B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106972990A (en) * | 2016-01-14 | 2017-07-21 | 芋头科技(杭州)有限公司 | Intelligent home device based on Application on Voiceprint Recognition |
CN114967128A (en) * | 2022-06-20 | 2022-08-30 | 深圳市新联优品科技有限公司 | Sight tracking system and method applied to VR glasses |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106325484A (en) * | 2015-06-30 | 2017-01-11 | 芋头科技(杭州)有限公司 | Face directional display system and method |
CN105868574B (en) * | 2016-04-25 | 2018-12-14 | 南京大学 | A kind of optimization method of camera track human faces and wisdom health monitor system based on video |
CN109960401B (en) * | 2017-12-26 | 2020-10-23 | 广景视睿科技(深圳)有限公司 | Dynamic projection method, device and system based on face tracking |
CN110769213A (en) * | 2018-08-20 | 2020-02-07 | 成都极米科技股份有限公司 | Automatic tracking projection method and device for display area based on face recognition |
US20220365363A1 (en) * | 2019-09-17 | 2022-11-17 | Jingmen City Dream Exploration Technology Co., Ltd. | Holographic display system |
CN113703564A (en) * | 2020-05-21 | 2021-11-26 | 北京聚匠艺传媒有限公司 | Man-machine interaction equipment and system based on facial features |
CN112351358B (en) * | 2020-11-03 | 2022-03-25 | 浙江大学 | 360-degree free three-dimensional type three-dimensional display sound box based on face detection |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103716594B (en) | Panorama splicing linkage method and device based on moving target detecting | |
CN109960401B (en) | Dynamic projection method, device and system based on face tracking | |
KR100588042B1 (en) | Interactive presentation system | |
CN100531373C (en) | Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure | |
US20120133754A1 (en) | Gaze tracking system and method for controlling internet protocol tv at a distance | |
CN107462992B (en) | Method and device for adjusting head-mounted display equipment and head-mounted display equipment | |
CN201252615Y (en) | Control system for TV camera and cradle head in case of location shooting | |
CN102855471B (en) | Remote iris intelligent imaging device and method | |
CN1457468A (en) | Automatic positioning of display depending upon viewer's location | |
CN108351951A (en) | intelligent privacy system, device and method thereof | |
CN101587542A (en) | Field depth blending strengthening display method and system based on eye movement tracking | |
CN101072332A (en) | Automatic mobile target tracking and shooting method | |
JP2011166305A (en) | Image processing apparatus and imaging apparatus | |
CN112363626B (en) | Large screen interaction control method based on human body posture and gesture posture visual recognition | |
CN112666705A (en) | Eye movement tracking device and eye movement tracking method | |
KR20080052398A (en) | Guesture recognition system having mobile video camera | |
CN113438464A (en) | Switching control method, medium and system for naked eye 3D display mode | |
US10176375B2 (en) | High speed pupil detection system and method | |
JP4464902B2 (en) | Camera control apparatus and camera control program | |
CN115494961B (en) | Novel interactive surrounding intelligent display equipment based on face recognition | |
KR101916093B1 (en) | Method for tracking object | |
US9386280B2 (en) | Method for setting up a monitoring camera | |
CN103327385B (en) | Based on single image sensor apart from recognition methods and device | |
US20120026617A1 (en) | Mirror and adjustment method therefor | |
CN1292287C (en) | Projection system with image pickup device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||