CN111131872A - Intelligent television integrated with depth camera and control method and control system thereof - Google Patents


Info

Publication number
CN111131872A
Authority
CN
China
Prior art keywords
target object
depth
depth information
information
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911308303.9A
Other languages
Chinese (zh)
Inventor
黄俊宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Konka Electronic Technology Co Ltd
Original Assignee
Shenzhen Konka Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Konka Electronic Technology Co Ltd filed Critical Shenzhen Konka Electronic Technology Co Ltd
Priority to CN201911308303.9A priority Critical patent/CN111131872A/en
Publication of CN111131872A publication Critical patent/CN111131872A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4784Supplemental services, e.g. displaying phone caller identification, shopping application receiving rewards

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a smart television integrated with a depth camera, together with a control method and a control system for the television. The method comprises the following steps: acquiring depth information of a target object; extracting structural features and/or action features of the target object from the depth information; and controlling the smart television to execute a corresponding operation according to the structural features and/or action features of the target object. Because the depth information of the target object is acquired through a depth camera, the object can be identified accurately. The structural features and/or action features extracted from the depth information are converted into control instructions for the smart television, so that application scenarios such as remote gesture control and face-recognition payment can be realized through image recognition.

Description

Intelligent television integrated with depth camera and control method and control system thereof
Technical Field
The invention relates to the technical field of smart televisions, in particular to a smart television integrated with a depth camera, and a control method and a control system thereof.
Background
With the rapid development of science and technology, televisions are no longer limited to receiving programs and playing video; smart televisions with user interfaces and operating systems are now available. An existing smart television is generally paired with a 2D camera when realizing live-action display and human-machine interaction. Owing to the limitations of a 2D camera, however, recognition accuracy suffers when objects of similar colour overlap in the captured image, so the application scenarios of a smart television equipped only with a 2D camera are limited.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
The invention provides a smart television integrated with a depth camera, together with a control method and a control system for the television, aiming to solve the technical problem that an existing smart television is equipped only with a 2D camera: because overlapping regions of similar colour cannot be distinguished accurately in the captured image, the application scenarios of such a television are limited.
The technical solution adopted by the invention to solve the above technical problem is a control method of a smart television integrated with a depth camera, comprising:
acquiring depth information of a target object; the depth information comprises position information between each acquisition point on the surface of the target object and the depth camera;
extracting the structural features and/or the action features of the target object from the depth information;
and controlling the smart television to execute corresponding operation according to the structural characteristics and/or the action characteristics of the target object.
The control method of the smart television integrated with the depth camera, wherein the step of acquiring the depth information of the target object specifically comprises:
projecting a known coding pattern or light pulse to the surface of a target object through a depth camera to acquire light information reflected by the target object;
and calculating the distance between each point on the surface of the target object and the depth camera according to the deformation degree of the coding pattern in the optical information or the electric charge amount obtained after photoelectric conversion, and acquiring the depth information of the target object.
The method for controlling the smart television integrated with the depth camera, wherein when the target object is an article, the step of extracting the structural feature of the target object from the depth information specifically includes:
extracting size and/or spatial position information of the target object from the depth information;
and acquiring the structural characteristics of the target object according to the size and/or the spatial position information of the target object.
The control method of the smart television integrated with the depth camera, wherein when the target object is a human body and the structural feature is a human face feature, the step of extracting the structural feature of the target object from the depth information specifically includes:
carrying out face detection on the depth information, removing background information, and extracting face information of the human body;
and extracting the human face features of the human body from the human face information by using a regional feature analysis algorithm and an image processing technology.
The control method of the smart television integrated with the depth camera, wherein when the target object is a human body, the step of extracting the action features of the target object from the depth information specifically includes:
extracting characteristic values of the joint points of the human body according to a deep learning algorithm by using the depth information of the human body;
training the extracted characteristic values of the joint points, and positioning and tracking the bones of the human body according to the characteristic values obtained after training to obtain the action characteristics of the human body.
The control method of the smart television integrated with the depth camera, wherein the step of controlling the smart television to execute the corresponding operation specifically comprises:
judging whether the structural features and/or the action features of the target object are consistent with the structural features and/or the action features stored in a cloud database or a local database;
if yes, extracting an operation instruction corresponding to the structural feature and/or the action feature from the cloud database or the local database, and controlling the smart television to execute corresponding operation according to the operation instruction.
A control system of a smart television integrated with a depth camera, comprising:
the depth camera is used for acquiring depth information of the target object; the depth information comprises position information between each acquisition point on the surface of the target object and the depth camera;
the data processing module is used for extracting the structural features and/or the action features of the target object from the depth information;
and the control module is used for controlling the intelligent television to execute corresponding operation according to the structural characteristics and/or the action characteristics of the target object.
The control system of the intelligent television integrated with the depth camera comprises a data acquisition unit and a first communication unit; the control module comprises a control unit and a second communication unit; the data processing module comprises a data processing unit and a third communication unit;
the data acquisition unit is used for acquiring depth information of a target object;
the first communication unit is used for transmitting the depth information to the second communication unit;
the second communication unit is used for receiving the depth information and transmitting the depth information to the third communication unit;
the data processing unit is used for extracting the structural features and/or the action features of the target object from the depth information;
the third communication unit is used for receiving the depth information and transmitting the structural characteristics and/or the action characteristics of the target object to the second communication unit;
and the control unit is used for controlling the intelligent television to execute corresponding operation according to the structural characteristics and/or the action characteristics of the target object.
The control system of the intelligent television integrated with the depth camera, wherein the system further comprises:
and the 2D camera is used for acquiring a color image of the target object and transmitting the color image of the target object to the control module.
A smart television, at least comprising the control system of the smart television integrated with the depth camera described above.
The invention has the following beneficial effects: the depth information of the target object is acquired through the depth camera, so the object can be identified accurately; the structural features and/or action features of the target object are extracted from the depth information and converted into control instructions for the smart television, which is controlled to execute the corresponding operations, so that the smart television can be applied to scenarios such as remote gesture control and face-recognition payment.
Drawings
Fig. 1 is a flowchart of a control method of a smart tv integrated with a depth camera according to a preferred embodiment of the present invention;
fig. 2 is a schematic structural diagram of a control system of a smart television integrated with a depth camera according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a control method of a smart television integrated with a depth camera, aiming to solve the problem that an existing smart television, equipped only with a 2D camera, cannot accurately identify objects when similar colours overlap in the captured image, which limits its application scenarios.
Referring to fig. 1, fig. 1 is a flowchart illustrating a control method of a smart television integrated with a depth camera according to a preferred embodiment of the present invention.
In a preferred embodiment of the present invention, the method for controlling the smart tv integrated with the depth camera includes three steps:
s100, acquiring depth information of a target object;
s200, extracting the structural features and/or the action features of the target object from the depth information;
and S300, controlling the intelligent television to execute corresponding operation according to the structural characteristics and/or the action characteristics of the target object.
In specific implementation, an existing smart television equipped only with a 2D camera cannot accurately identify objects when similar colours overlap in the captured image, and therefore cannot be controlled remotely and accurately through image recognition. In this embodiment, a depth camera is therefore integrated into the smart television, and the depth information of the target object is acquired through the depth camera; because the depth information contains three-dimensional data of the target object, the distance of each point on the object's surface relative to the depth camera can be identified accurately. The structural features and/or action features of the target object are then extracted from the depth information and converted into control instructions for the smart television, which is controlled to execute the corresponding operations, so that remote control of the smart television is realized through image recognition.
In a specific embodiment, the step S100 specifically includes:
step S110, projecting a known coding pattern or light pulse to the surface of a target object through a depth camera, and acquiring light information reflected by the target object;
step S120, calculating distances between each point on the surface of the target object and the depth camera according to the deformation degree of the coding pattern in the optical information or the charge amount obtained after photoelectric conversion, and obtaining the depth information of the target object. .
In specific implementation, the depth camera may be a structured-light camera, an RGB binocular (stereo) camera, a time-of-flight (TOF) camera, or the like. Since the depth calculation of a TOF camera is not affected by the surface grey scale or texture of the object, it can perform three-dimensional measurement very accurately, and its accuracy does not degrade as the distance changes. When the depth information of the target object needs to be acquired, infrared light pulses are emitted to the surface of the target object through the depth camera, and the target object is photographed at preset angles until complete three-dimensional coordinate data of its surface have been acquired, yielding the depth information of the target object. When the depth camera shoots around the target object, its motion trajectory may be irregular, and the preset shooting angles can be adjusted according to the structure of the target object's surface.
In specific implementation, the depth information of the target object includes position information between each acquisition point on the surface of the target object and the depth camera; the position information includes a distance value and an angle value for each acquisition point. The distance value is acquired as follows: the depth camera emits infrared light pulses to the surface of the target object; under this illumination, the surface reflects the pulses back to the depth camera, where they are received by a photosensitive unit; the depth camera then calculates the distance of each surface point from the pulse width, the speed of light, and the amount of electric charge obtained after photoelectric conversion. The shooting angle is the angle between the centre line of the lens and the three coordinate axes of the depth camera's coordinate system. Since the depth camera rotates around the target object through one or more preset angles when acquiring images of different parts of the surface, it forms different shooting angles during the rotation.
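As an illustrative sketch of the pulsed time-of-flight calculation above, the distance can be recovered from the pulse width, the speed of light and the charge collected in two integration windows. The patent does not specify an implementation; the function name and charge values below are hypothetical, and a real TOF sensor performs this per pixel in hardware:

```python
# Sketch (hypothetical) of the pulsed-TOF distance calculation: the ratio of
# the charge collected after the pulse (q2) to the total charge encodes the
# round-trip delay as a fraction of the pulse width.

C = 299_792_458.0  # speed of light, m/s


def tof_distance(pulse_width_s: float, q1: float, q2: float) -> float:
    """Distance to a surface point from one pulsed-TOF pixel.

    q1: charge collected while the infrared pulse is being emitted
    q2: charge collected in the window immediately after the pulse
    """
    if q1 + q2 == 0:
        raise ValueError("no reflected light received")
    round_trip = pulse_width_s * (q2 / (q1 + q2))  # round-trip delay, s
    return C * round_trip / 2.0                    # halve: out and back


# Example: a 50 ns pulse with equal charges -> 25 ns round trip -> ~3.75 m
d = tof_distance(50e-9, q1=1000.0, q2=1000.0)
```

The charge ratio rather than a raw timestamp is used because integrating charge over two windows is far easier to implement in a photosensitive unit than timing a sub-nanosecond edge directly.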
In a specific embodiment, the target object may be determined according to interface information currently displayed by the smart television or a currently running application program, and the target object may be an article or a human body. For example, after the smart television enters the power-on interface, the depth information of a living room in which the television is placed or a sofa in the living room is acquired through the depth camera, and the display mode, the sound effect and the like of the smart television are adjusted according to the depth information. For another example, depth information of a human body watching the smart television can be acquired through the depth camera, motion characteristics of the human body are identified from the depth information, and the smart television is controlled according to the identified motion characteristics of the human body.
In a specific embodiment, when the target object is an article, the step of extracting the structural feature of the target object from the depth information in step S200 specifically includes:
step S210, extracting the size and/or spatial position information of the target object from the depth information;
and S220, acquiring the structural characteristics of the target object according to the size and/or the spatial position information of the target object.
In specific implementation, the depth information of the target object in the foregoing steps includes the position information between each acquisition point on the surface of the target object and the depth camera, that is, a distance value and an angle value for each acquisition point. When the target object is an article, after the depth information is acquired, the size and/or spatial position information of the target object is extracted from the depth information, and the structural features of the target object are then obtained from that size and/or spatial position information. For example, if the target object is the living room in which the smart television is placed, the distance between each sampling point in the living room and the depth camera can be read from the depth information; taking points on the walls in front of, behind, to the left of, to the right of, above and below the depth camera as sampling points, the floor area of the living room can be calculated from those distances. For another example, when the target object is a sofa placed in the living room, since the user generally sits on the sofa to watch television, the position of the sofa in the living room can be obtained from its depth image, which in turn gives the user's viewing distance.
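The living-room example above can be sketched as follows. This is a simplified illustration under assumptions the patent does not state: the camera is mounted flush with one wall (as a TV typically is), the room is rectangular, and the wall distances are measured along perpendicular axes. The function name is hypothetical:

```python
def room_floor_area(d_left: float, d_right: float, d_front: float) -> float:
    """Approximate floor area of a rectangular room from wall distances.

    d_left, d_right: distances (m) from the camera to the left/right walls
    d_front: distance (m) to the wall facing the camera, which sits on the
             opposite wall, so it spans the room's depth directly.
    """
    width = d_left + d_right  # camera lies between the two side walls
    return width * d_front


# Example: 2 m to the left wall, 3 m to the right, 4 m to the facing wall
area = room_floor_area(2.0, 3.0, 4.0)  # 5 m x 4 m room
```

A volume value or play mode could then be selected by bucketing this area, as described later for the feature-to-instruction mapping.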
In a specific embodiment, when the target object is a human body and the structural feature is a human face feature, the step of extracting the structural feature of the target object from the depth image in step S200 specifically includes:
step S210', performing face detection on the depth information, removing background information, and extracting face information of the human body;
step S220', extracting the human face features of the human body from the human face information by using a regional feature analysis algorithm and an image processing technology.
Although existing smart televisions carry user interfaces and various operating systems, so that a user can review watch history and shop on the television after logging in to a television account, the user must enter a complex user name and password when logging in or paying. In this embodiment, when a user needs to log in to an account or pay for shopping, depth information of the human body is first acquired by the depth camera. Because the raw data acquired by the depth camera contain a large amount of random noise as well as background data irrelevant to the face, the region containing the face is cropped out, the noise in the face data is removed, and the face information of the human body is thereby extracted from the depth information. The face features are then extracted from the face information using a regional feature analysis algorithm and image processing techniques. By recognizing the user's face features, the smart television automatically performs account login or shopping payment, and can push previously watched programs to the user according to the logged-in account, which is convenient for the user.
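The cropping and denoising step described above can be sketched as follows. Only the pre-processing is shown; the regional feature analysis algorithm itself is not specified by the patent, and the bounding box is assumed to come from a separate face detector. Function and parameter names are hypothetical:

```python
import numpy as np


def extract_face_region(depth_map: np.ndarray,
                        bbox: tuple,
                        max_depth_mm: float = 1500.0) -> np.ndarray:
    """Crop the detected face from a depth map and suppress background/noise.

    depth_map: 2-D array of per-pixel depths in millimetres
    bbox: (row0, row1, col0, col1) from an external face detector (assumed)
    max_depth_mm: anything farther than this is treated as background
    """
    r0, r1, c0, c1 = bbox
    face = depth_map[r0:r1, c0:c1].astype(float)
    # Zero readings are invalid sensor pixels; large depths are background.
    face[(face == 0) | (face > max_depth_mm)] = np.nan
    return face


# Example: a 2x2 depth patch with one background and one invalid pixel
dm = np.array([[500, 2000],
               [0,   800]])
face = extract_face_region(dm, (0, 2, 0, 2))
```

Masking with NaN (rather than zero) keeps invalid pixels distinguishable from genuinely near surfaces when the downstream feature analysis computes statistics over the face region.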
In a specific embodiment, when the target object is a human body, the step of extracting the motion feature of the target object from the depth information in step S200 specifically includes:
step S210'', extracting the feature values of the human body's joint points from the depth information of the human body according to a deep learning algorithm;
and S220', training the extracted characteristic values of the joint points, and positioning and tracking the bones of the human body according to the characteristic values obtained after training to obtain the action characteristics of the human body.
In specific implementation, in order to control the smart television by recognizing specific motion features of the user, in this embodiment the feature values of the human body's joint points are extracted from the acquired depth information according to a deep learning algorithm; a deep learning algorithm based on a convolutional neural network helps improve the accuracy and speed of feature extraction. The extracted joint-point feature values are then trained, and the bones of the human body are located and tracked according to the trained feature values to obtain the motion features of the human body. The joint points include head joint points, shoulder joint points, hand joint points, foot joint points and the like. For example, by locating and tracking the hand joint points, the system can recognize that the user's hand has moved from top to bottom or from left to right.
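Once the hand joint is being tracked, classifying its trajectory as one of the movements mentioned above can be sketched as follows. This is a simplified illustration, not the patent's method: it assumes a trajectory of (x, y) positions with y increasing upward, and the function name and travel threshold are hypothetical:

```python
def swipe_direction(track: list, min_travel: float = 0.15):
    """Classify a tracked hand-joint trajectory as a swipe gesture.

    track: chronological list of (x, y) hand positions in metres,
           with x increasing to the user's right and y increasing upward
    min_travel: minimum displacement (m) to count as a deliberate swipe
    Returns "left"/"right"/"up"/"down", or None for no gesture.
    """
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if max(abs(dx), abs(dy)) < min_travel:
        return None                      # too small: ignore jitter
    if abs(dx) >= abs(dy):               # dominant axis wins
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"


# Example: the hand moves 30 cm to the right with slight vertical jitter
gesture = swipe_direction([(0.0, 0.0), (0.15, 0.01), (0.30, 0.02)])
```

Taking the dominant axis makes the classifier tolerant of the small off-axis drift that real skeletal tracking inevitably produces.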
In a specific embodiment, the step S300 specifically includes:
step S310, judging whether the structural features and/or the action features of the target object are consistent with those stored in a cloud database or a local database;
step S320, if yes, extracting an operation instruction corresponding to the structural feature and/or the action feature from the cloud database or the local database, and controlling the smart television to execute a corresponding operation according to the operation instruction.
In specific implementation, in order to realize remote control of the smart television, in this embodiment the smart television is connected to a cloud database, and both the cloud database and a local database in the smart television store operation instructions corresponding to structural features and/or motion features. For example, the size of the living room corresponds to a volume value or play mode, and a left-to-right movement of the hand joint points corresponds to increasing the channel number. After the structural and/or motion features of the target object are extracted from the depth information, it is judged whether they are consistent with the structural and/or motion features stored in the cloud database or the local database; if so, the corresponding operation instruction is extracted from the database and the smart television is controlled to execute the corresponding operation. Specifically, consistency can be judged by setting a similarity threshold: when the similarity between the extracted features of the target object and the features stored in the cloud database or the local database is greater than the preset threshold, the two are judged to be consistent.
In this embodiment, when the smart television is connected to the network, the extracted structural features and/or motion features of the target object are compared first with those stored in the cloud database. When the smart television is disconnected from the network and therefore cannot reach the cloud database, the extracted features are compared with those stored in the local database, so that the structural features and/or motion features of the target object can be identified whether the smart television is online or offline.
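The threshold-based matching and cloud/local fallback described above can be sketched as follows. The patent specifies only "similarity greater than a preset threshold"; the use of cosine similarity, the 0.9 threshold, and all names below are illustrative assumptions:

```python
import math


def cosine(a, b) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def match_feature(feature, cloud_db, local_db, online, threshold=0.9):
    """Map a recognized feature to a stored operation instruction.

    cloud_db / local_db: dicts of stored feature tuple -> operation name.
    The cloud database is preferred when the TV is online; the local
    database is the offline fallback.
    """
    db = cloud_db if online and cloud_db else local_db
    best_op, best_sim = None, 0.0
    for stored, op in db.items():
        sim = cosine(feature, stored)
        if sim > best_sim:
            best_op, best_sim = op, sim
    # Only act when similarity clears the preset consistency threshold.
    return best_op if best_sim >= threshold else None


# Example: a nearly identical feature matches the stored "volume_up" gesture
cloud = {(1.0, 0.0): "volume_up"}
op = match_feature((0.99, 0.1), cloud, {}, online=True)
```

Returning `None` below the threshold, rather than the nearest match, prevents the television from executing an operation on an ambiguous recognition.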
In specific implementation, when logging in to an account or paying for shopping through the smart television, after the user's face features are extracted from the depth image, they are compared with the face model features pre-established in the cloud database, and it is judged whether the extracted face features are consistent with the stored face model features. If they are, the login or payment is confirmed to be the user's own operation, and the login information or payment information corresponding to the face model features is obtained from the cloud database to complete the login or payment. This avoids entering a complex account name and password and ensures the security of account login and shopping payment.
In a specific embodiment, after the step S100, the method further includes:
step S200', extracting depth information data of the target object by using a depth camera, and establishing a three-dimensional model of the target object according to the depth information data;
in specific implementation, after the depth information of the target object is acquired, based on the position information between each acquisition point on the surface of the target object and the depth camera, which is contained in the depth information, three-dimensional coordinate values of each acquisition point on the surface of the target object in a three-dimensional coordinate system of the depth camera are calculated, and the depth information data of the target object is acquired. And then, according to the three-dimensional coordinate value of each acquisition point on the surface of the target object in the three-dimensional coordinate system of the depth camera, determining the three-dimensional coordinate value of each acquisition point in the coordinate system of the intelligent television, and performing three-dimensional modeling on the target object according to the three-dimensional coordinate value of each acquisition point in the coordinate system of the intelligent television, so that virtual trial assembly and the like on the intelligent television can be realized.
The present invention also provides a control system of a smart tv integrated with a depth camera, as shown in fig. 2, the system includes:
a depth camera 210 for acquiring depth information of a target object; the depth information comprises position information between each acquisition point on the surface of the target object and the depth camera;
the data processing module 230 is configured to extract structural features and/or action features of the target object from the depth information;
and the control module 220 is configured to control the smart television to perform a corresponding operation according to the structural features and/or the action features of the target object.
In specific implementation, mainstream depth cameras currently use Type-C, MIPI and similar interfaces, which smart televisions do not directly support. To integrate the depth camera with the smart television, in this embodiment a Type-C/MIPI-to-HDMI signal conversion chip is added to the smart television. In addition, because the distance between the mounting position of the depth camera and the interface position of the smart television mainboard is relatively large, a Re-Timer device can be added to compensate for the signal loss and attenuation introduced by the connector and cable during data transmission.
In a specific embodiment, the depth camera and the data processing module are each connected to the control module. The depth information of the target object acquired by the depth camera is transmitted to the control module, which forwards it to the data processing module for data processing; the data processing module extracts the structural features and/or action features of the target object from the depth information and returns them to the control module, and the control module controls the smart television to execute the corresponding operation according to those features.
In one embodiment, as shown in fig. 2, the depth camera 210 includes a data acquisition unit 211 and a first communication unit 212; the control module 220 includes a control unit 221 and a second communication unit 222; the data processing module 230 includes a data processing unit 231 and a third communication unit 232. In specific implementation, the data acquisition unit 211 acquires the depth information of the target object; the first communication unit 212 transmits the depth information acquired by the data acquisition unit 211 to the second communication unit 222; the second communication unit 222 receives the depth information transmitted by the first communication unit 212 and forwards it to the third communication unit 232; the data processing unit 231 extracts the structural features and/or action features of the target object from the depth information received by the third communication unit 232; the third communication unit 232 then transmits the extracted structural features and/or action features to the second communication unit 222; and the control unit 221 controls the smart television to execute the corresponding operation according to the structural features and/or action features of the target object.
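The unit-level data flow described above can be sketched as a minimal pipeline. All class names, the stand-in gesture, and the command table below are illustrative assumptions, not part of the disclosure:

```python
class DepthCamera:
    """Acquires depth information (data acquisition + first communication unit)."""
    def acquire(self):
        return {"points": [(0.1, 0.2, 1.5)]}     # stand-in depth information

class DataProcessor:
    """Extracts structural/action features (data processing + third communication unit)."""
    def extract_features(self, depth_info):
        # A real implementation would run face/skeleton analysis here.
        return {"gesture": "swipe_left"}

class Controller:
    """Relays data and maps features to TV operations (control + second communication unit)."""
    COMMANDS = {"swipe_left": "previous_channel"}   # illustrative mapping

    def __init__(self, camera, processor):
        self.camera, self.processor = camera, processor

    def step(self):
        depth_info = self.camera.acquire()                       # first -> second unit
        features = self.processor.extract_features(depth_info)   # second -> third unit
        return self.COMMANDS.get(features["gesture"], "noop")    # features -> operation
```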
In a specific embodiment, the control system further includes a 2D camera 240, which is also connected to the control module 220. The distance between the infrared lens of the depth camera 210 and the lens of the 2D camera 240 is fixed by the structural design, cannot be changed by the user, and is calibrated during production of the smart television. The 2D camera 240 is configured to acquire a color image of the target object and transmit the color image to the control module.
In a specific embodiment, the data processing module 230 further includes a three-dimensional modeling unit configured to build a three-dimensional model of the target object according to the depth information, and a data fusion unit configured to extract the RGB color value corresponding to each acquisition point from the color image of the target object and fill it into the established three-dimensional model according to the three-dimensional coordinate value of that acquisition point in the smart television coordinate system.
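The data fusion step can be sketched as projecting each modeled point into the 2D camera image and reading off its RGB value. The `project` callback stands in for the fixed factory calibration between the two cameras; all names are illustrative:

```python
import numpy as np

def colorize_model(points_tv, rgb_image, project):
    """Attach an RGB value to each 3-D acquisition point of the model.
    `project` maps a point in the TV coordinate system to (row, col) pixel
    coordinates of the calibrated 2D camera; here it is a stand-in for the
    real projection fixed at factory calibration."""
    textured = []
    for p in points_tv:
        r, c = project(p)
        # Pair each point's coordinates with the color sampled at its pixel
        textured.append((tuple(p), tuple(int(v) for v in rgb_image[r, c])))
    return textured
```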
In summary, the present invention discloses a smart television integrated with a depth camera, together with a control method and a control system thereof. The method comprises: acquiring depth information of a target object; extracting structural features and/or action features of the target object from the depth information; and controlling the smart television to execute a corresponding operation according to those features. By acquiring the depth information of the target object through the depth camera, the invention can identify objects accurately; by extracting the structural features and/or action features from the depth information and converting them into control instructions for the smart television, it enables remote control of the smart television.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A control method of an intelligent television integrated with a depth camera is characterized by comprising the following steps:
acquiring depth information of a target object; the depth information comprises position information between each acquisition point on the surface of the target object and the depth camera;
extracting the structural features and/or the action features of the target object from the depth information;
and controlling the smart television to execute corresponding operation according to the structural characteristics and/or the action characteristics of the target object.
2. The method for controlling the smart tv with the integrated depth camera according to claim 1, wherein the step of obtaining the depth information of the target object specifically comprises:
projecting a known coding pattern or light pulse to the surface of a target object through a depth camera to acquire light information reflected by the target object;
and calculating the distance between each point on the surface of the target object and the depth camera according to the deformation degree of the coding pattern in the optical information or the electric charge amount obtained after photoelectric conversion, and acquiring the depth information of the target object.
3. The method for controlling the smart tv with the integrated depth camera according to claim 1, wherein when the target object is an article, the step of extracting the structural feature of the target object from the depth information specifically includes:
extracting size and/or spatial position information of the target object from the depth information;
and acquiring the structural characteristics of the target object according to the size and/or the spatial position information of the target object.
4. The method according to claim 3, wherein when the target object is a human body and the structural feature is a human face feature, the step of extracting the structural feature of the target object from the depth information specifically includes:
carrying out face detection on the depth information, removing background information, and extracting face information of the human body;
and extracting the human face features of the human body from the human face information by using a regional feature analysis algorithm and an image processing technology.
5. The method according to claim 3, wherein when the target object is a human body, the step of extracting the motion feature of the target object from the depth information specifically includes:
extracting characteristic values of the joint points of the human body according to a deep learning algorithm by using the depth information of the human body;
training the extracted characteristic values of the joint points, and positioning and tracking the bones of the human body according to the characteristic values obtained after training to obtain the action characteristics of the human body.
6. The method for controlling the smart television with the integrated depth camera according to claim 1, wherein the step of controlling the smart television to perform corresponding operations according to the structural features and/or the motion features of the target object specifically comprises:
judging whether the structural features and/or the action features of the target object are consistent with the structural features and/or the action features stored in a cloud database or a local database;
if yes, extracting an operation instruction corresponding to the structural feature and/or the action feature from the cloud database or the local database, and controlling the smart television to execute corresponding operation according to the operation instruction.
7. A control system of an intelligent television integrated with a depth camera is characterized by comprising:
the depth camera is used for acquiring depth information of the target object; the depth information comprises position information between each acquisition point on the surface of the target object and the depth camera;
the data processing module is used for extracting the structural features and/or the action features of the target object from the depth information;
and the control module is used for controlling the intelligent television to execute corresponding operation according to the structural characteristics and/or the action characteristics of the target object.
8. The control system of the intelligent television integrated with the depth camera according to claim 7, wherein the depth camera comprises a data acquisition unit and a first communication unit; the control module comprises a control unit and a second communication unit; the data processing module comprises a data processing unit and a third communication unit;
the data acquisition unit is used for acquiring depth information of a target object;
the first communication unit is used for transmitting the depth information to the second communication unit;
the second communication unit is used for receiving the depth information and transmitting the depth information to the third communication unit;
the data processing unit is used for extracting the structural features and/or the action features of the target object from the depth information;
the third communication unit is used for receiving the depth information and transmitting the structural characteristics and/or the action characteristics of the target object to the second communication unit;
and the control unit is used for controlling the intelligent television to execute corresponding operation according to the structural characteristics and/or the action characteristics of the target object.
9. The control system of the depth camera-integrated smart tv of claim 8, further comprising:
and the 2D camera is used for acquiring a color image of the target object and transmitting the color image of the target object to the control module.
10. An intelligent television set, characterized by comprising at least the control system of the intelligent television integrated with a depth camera according to any one of claims 7 to 9.
CN201911308303.9A 2019-12-18 2019-12-18 Intelligent television integrated with depth camera and control method and control system thereof Pending CN111131872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911308303.9A CN111131872A (en) 2019-12-18 2019-12-18 Intelligent television integrated with depth camera and control method and control system thereof

Publications (1)

Publication Number Publication Date
CN111131872A true CN111131872A (en) 2020-05-08

Family

ID=70499523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911308303.9A Pending CN111131872A (en) 2019-12-18 2019-12-18 Intelligent television integrated with depth camera and control method and control system thereof

Country Status (1)

Country Link
CN (1) CN111131872A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110093889A1 (en) * 2009-10-21 2011-04-21 John Araki User interface for interactive digital television
CN102547172A (en) * 2010-12-22 2012-07-04 康佳集团股份有限公司 Remote control television
US20170124742A1 (en) * 2015-10-29 2017-05-04 Intel Corporation Variable Rasterization Order for Motion Blur and Depth of Field
CN107105217A (en) * 2017-04-17 2017-08-29 深圳奥比中光科技有限公司 Multi-mode depth calculation processor and 3D rendering equipment
CN108415875A (en) * 2018-02-01 2018-08-17 深圳奥比中光科技有限公司 The method of Depth Imaging mobile terminal and face recognition application
CN108537187A (en) * 2017-12-04 2018-09-14 深圳奥比中光科技有限公司 Task executing method, terminal device and computer readable storage medium
CN109508093A (en) * 2018-11-13 2019-03-22 宁波视睿迪光电有限公司 A kind of virtual reality exchange method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200508