CN111194122A - Somatosensory interactive light control system


Info

Publication number
CN111194122A
Authority
CN
China
Prior art keywords
light
user
data
behavior
motion
Prior art date
Legal status
Pending
Application number
CN201811346674.1A
Other languages
Chinese (zh)
Inventor
徐松炎 (Xu Songyan)
Current Assignee
Hangzhou Yongdian Illumination Co., Ltd.
Original Assignee
Hangzhou Yongdian Illumination Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hangzhou Yongdian Illumination Co., Ltd.
Priority to CN201811346674.1A
Publication of CN111194122A
Legal status: Pending

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 20/00 - Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B 20/40 - Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a somatosensory interactive light control system in the technical field of light control. It addresses the problem that conventional light control systems can only display light according to pre-edited animations and cannot adapt the light to user behavior, which limits the interaction between people and the system. The system comprises: a behavior acquisition device for collecting the user's behavior in real time; a lighting parameter database for storing user behaviors and the lighting parameters of the light-emitting device matched with each behavior; and a control terminal that collects user behavior in real time through the behavior acquisition device, queries the lighting parameter database for the lighting parameters matching that behavior, and adjusts the light-emitting device to those parameters in real time, thereby realizing interaction between a person and the light.

Description

Somatosensory interactive light control system
Technical Field
The invention relates to the technical field of light control, in particular to a somatosensory interactive light control system.
Background
In a traditional light control system, an animation is edited first and then displayed by the lamps. An interactive system, by contrast, is a control system for interaction between people and light, realized through different sensors: temperature, angle, humidity, position and the like can sense various actions of the human body and thereby enable interaction with the light.
The prior art solution above has the following drawback: the light control system can only display light according to the edited animation and cannot adapt the light control to the user's behavior, which limits the interactivity between people and the light control system and leaves room for improvement.
Disclosure of Invention
An object of the present invention is to provide a somatosensory interactive light control system that adaptively adjusts the light according to the user's behavior, thereby improving the interactivity between a person and the light-emitting device.
The above object of the present invention is achieved by the following technical solutions:
the utility model provides an interactive lighting control system is felt to body, includes control terminal and illuminator, includes:
the behavior acquisition device is used for acquiring the behavior of the user in real time;
a lighting parameter database for storing user behaviors and the lighting parameters of the light-emitting device matched with each user behavior;
the control terminal collects user behaviors in real time through the behavior acquisition device, queries the lighting parameter database for the lighting parameters of the light-emitting device matched with the user behavior, and adjusts the light-emitting device to the corresponding parameters in real time to realize interaction between people and the lamp.
By adopting the technical scheme, the behavior acquisition device acquires the user's behaviors, the lighting parameter database matches the corresponding lighting parameters of the light-emitting device to the acquired behaviors, and the control of the light-emitting device is realized through the control terminal.
The invention is further configured to: the behavior acquisition device includes:
the system comprises a plurality of light inertial navigation sensing input devices, a plurality of sensors and a controller, wherein the light inertial navigation sensing input devices are respectively fixed on the head, the trunk, the upper limbs and the lower limbs of a user, and each light inertial navigation sensing input device acquires the posture field data of each joint of the user in real time;
and the 3D reconstruction unit reconstructs a three-dimensional field model of the user according to the posture field data.
By adopting the technical scheme, the behavior acquisition device reconstructs the three-dimensional field model of the user through the plurality of light inertial navigation sensing input devices and the 3D reconstruction unit, which helps the control terminal compare the acquired behavior with the behaviors in the lighting parameter database.
The invention is further configured to: the behavior acquisition device is also provided with a correction module, which corrects the other light inertial navigation sensing input devices using the known spatial relationship between one light inertial navigation sensing input device and its corresponding body segment, together with a human body biomechanical model.
By adopting the technical scheme, the correction module can correct the captured human body motion and avoid inaccurate detection during actual testing.
The invention is further configured to: the behavior acquisition device includes:
the 3D motion sensing camera collects facial expression and voice training data or facial expression and voice field data of a user.
By adopting the technical scheme, the 3D motion sensing camera can acquire the user's expression and voice training data; the behavior here refers to information such as expression and voice, and collecting them amounts to acquiring the user's behavior.
The invention is further configured to: the behavior acquisition device is further provided with an expression processing unit, and the expression processing unit comprises:
the first calibration module is used for respectively acquiring speckle pattern data of each reference plane, and the reference planes are virtual vertical planes which are sequentially arranged on the stage at intervals from near to far along the lens direction of the 3D motion sensing camera;
a storage module that stores the speckle pattern data for each of the reference planes;
the interpolation calculation module is used for obtaining the training expression of the user according to the cross-correlation operation result of the expression training data and each speckle pattern data; and obtaining the field expression according to the expression field data.
By adopting the technical scheme, the first calibration module collects the speckle pattern data of each reference plane, the storage module stores it correspondingly, and the interpolation calculation module then performs the further, more precise computation that yields the user's behavior.
The invention is further configured to: the behavior acquisition device includes:
the motion capture unit collects motion training data or motion field data of a user, wherein the motion training data or the motion field data comprise position data, direction data and speed data of body movement of the user in the motion process.
By adopting the technical scheme, the motion capture unit can acquire the user's motion training data in time; a motion training set of the user is created from it, and the user's field motion is finally obtained from the motion field data.
The invention is further configured to: the behavior acquisition device is further provided with an image calculation processing unit, and the image calculation processing unit comprises:
a second calibration module for obtaining relative position data between the motion capture unit and the projection image through calibration of the motion capture unit,
an image processing module for analyzing the continuous image data of the user's motion to obtain the position data, direction data and speed data of the body movement in the motion capture unit,
and a coordinate conversion module for converting the position data, direction data and speed data in the motion capture unit's image coordinate system into projection position data, projection direction data and projection speed data in the projection unit's image coordinate system.
By adopting the technical scheme, the relative position data can be obtained through the calibration module, and the user's motion data can then be obtained more precisely by combining the image processing module and the coordinate conversion module.
The invention is further configured to: the lighting parameter database also records the number of times the control terminal has queried each behavior and ranks the behaviors from most to least queried; when querying the lighting parameter database, the control terminal searches the behaviors in that order.
By adopting the technical scheme, the query counts recorded in the lighting parameter database reflect how frequently each behavior occurs, and sorting the behaviors by frequency of use improves the matching of behaviors acquired by the control terminal against the lighting parameter database, thereby improving the response efficiency of the whole lamp to people's behaviors.
The invention is further configured to: the parameter database of the light-emitting device also records, for each behavior, the follow-up actions that succeed it and how often each occurs, sorting the follow-up actions from most to least frequent; when the control terminal performs successive queries, it queries preferentially in this order.
By adopting the technical scheme, once one action of the user has been acquired, the lighting parameter database can be queried according to the frequency of the follow-up actions that usually succeed the preceding action, further improving the efficiency and smoothness of the transitions.
In conclusion, the beneficial technical effects of the invention are as follows: the user's behavior can be acquired in time through the control terminal and the behavior acquisition device, and the lighting parameters of the light-emitting device corresponding to that behavior are matched in the lighting parameter database, thereby realizing interaction between a person and the lighting device.
Drawings
Fig. 1 is a first system block diagram of the somatosensory interactive light control system.
Fig. 2 is a second system block diagram of the somatosensory interactive light control system.
Fig. 3 is a third system block diagram of the somatosensory interactive light control system.
Fig. 4 is a fourth system block diagram of the somatosensory interactive light control system.
In the figure: 1. a control terminal; 2. a light-emitting device; 3. a behavior acquisition device; 4. a lighting parameter database; 5. a light inertial navigation sensing input device; 6. a 3D reconstruction unit; 7. a correction module; 8. a 3D motion sensing camera; 9. an expression processing unit; 10. a first calibration module; 11. a storage module; 12. an interpolation calculation module; 13. a motion capture unit; 14. an image calculation processing unit; 15. a second calibration module; 16. an image processing module; 17. a coordinate conversion module.
Detailed Description
The invention is described in further detail below with reference to figures 1-4.
Referring to fig. 1, the somatosensory interactive light control system disclosed by the invention comprises a control terminal 1 and a light-emitting device 2, together with a behavior acquisition device 3 for acquiring user behaviors and a lighting parameter database 4 for storing user behaviors and the lighting parameters of the light-emitting device 2 matched with them; the control terminal 1 is preferably a central processing unit.
The control terminal 1 collects user behaviors in real time through the behavior acquisition device 3, queries the lighting parameter database 4 for the lighting parameters of the light-emitting device 2 matched with the user behavior, and adjusts the light-emitting device 2 to the corresponding parameters in real time to realize interaction between people and the lamp. The lighting parameters are a series of parameters such as the brightness of the light, its warm or cold tone, the light output ratio of the light-emitting device 2, and the like.
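By way of illustration only, the following Python sketch shows one possible realization of this acquire-query-adjust loop; all names (LightingParameters, LIGHTING_DB, acquire_behavior, apply_parameters) and the example parameter values are hypothetical and are not part of the patent.

```python
# Minimal sketch of the control flow described above. All names and values
# here are hypothetical illustrations, not taken from the patent.
from dataclasses import dataclass

@dataclass
class LightingParameters:
    brightness: float        # relative brightness, 0.0-1.0
    color_temperature: int   # warm/cold tone, in kelvin

# Lighting parameter database 4: maps a recognized user behavior to the
# parameters the light-emitting device 2 should adopt.
LIGHTING_DB = {
    "raise_arms": LightingParameters(brightness=1.0, color_temperature=6500),
    "sit_down":   LightingParameters(brightness=0.3, color_temperature=2700),
}

def control_loop(acquire_behavior, apply_parameters):
    """Control terminal 1: collect behavior in real time, query the database
    for matching parameters, and adjust the light-emitting device 2."""
    while True:
        behavior = acquire_behavior()        # behavior acquisition device 3
        params = LIGHTING_DB.get(behavior)   # query lighting parameter database 4
        if params is not None:
            apply_parameters(params)         # drive the light-emitting device 2
```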
In order to improve the efficiency with which the control terminal 1 retrieves the corresponding device parameters from the lighting parameter database 4, the database also records the number of times the control terminal 1 has queried each behavior and ranks the behaviors from most to least queried; when querying the lighting parameter database 4, the control terminal 1 searches the behaviors in that order.
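A minimal sketch of such a most-queried-first lookup is given below; the class and method names are assumptions, and the matches predicate stands in for whatever behavior comparison the control terminal 1 actually performs.

```python
# Sketch of a lighting parameter database that scans stored behaviors from
# most to least queried. Class and method names are illustrative assumptions.
from collections import Counter

class LightingParameterDB:
    def __init__(self, table):
        self.table = table              # stored behavior -> lighting parameters
        self.query_counts = Counter()   # stored behavior -> times queried

    def lookup(self, observed, matches):
        """Scan stored behaviors in descending order of past query count;
        matches(stored, observed) decides whether a stored behavior fits."""
        ordered = sorted(self.table, key=lambda b: self.query_counts[b], reverse=True)
        for behavior in ordered:
            if matches(behavior, observed):
                self.query_counts[behavior] += 1   # update the ranking
                return self.table[behavior]
        return None
```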
Further, to better link the light with the user's actions, the parameter database of the light-emitting device 2 also records, for each behavior, the follow-up actions that succeed it and how often each occurs, sorted from most to least frequent; when the control terminal 1 performs successive queries, it queries preferentially in this order.
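This follow-up-action bookkeeping could be kept as a simple transition counter, as sketched below under the same caveat: the names are illustrative assumptions, not the patent's own implementation.

```python
# Sketch of the follow-up-action ordering: count how often action B follows
# action A, then try A's most frequent successors first.
from collections import defaultdict, Counter

class SuccessorModel:
    def __init__(self):
        # transitions[a][b] = number of times behavior b followed behavior a
        self.transitions = defaultdict(Counter)

    def record(self, previous: str, current: str) -> None:
        """Update the counts after each newly recognized behavior."""
        self.transitions[previous][current] += 1

    def candidates(self, previous: str) -> list[str]:
        """Successors of `previous`, most frequent first, to be queried first."""
        return [b for b, _ in self.transitions[previous].most_common()]
```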
In actual operation, the behavior acquisition device 3 may take various forms, which are described one by one below.
As shown in fig. 2, the behavior acquisition device 3 may consist of a plurality of light inertial navigation sensing input devices 5, respectively fixed to the head, trunk, upper limbs and lower limbs of the user, each acquiring the posture field data of the corresponding joints of the user in real time, and a 3D reconstruction unit 6 that reconstructs a three-dimensional preset model of the user from posture training data and a three-dimensional field model of the user from the posture field data.
Each light inertial navigation sensing input device 5 integrates a 3-axis acceleration sensor, a 3-axis gyroscope and a 3-axis magnetic sensor. The light inertial navigation sensing input devices 5 are fixed respectively on the upper limbs, lower limbs, trunk and head of a performer. In this embodiment there are 12-16 light inertial navigation sensing input devices 5: 1 is fixed on the performer's head, 1-5 are distributed over the performer's trunk, 4 are fixed on the upper and lower arms of the two upper limbs, and 6 are fixed on the thighs, calves and feet of the two lower limbs. Each light inertial navigation sensing input device 5 collects the posture field data of the performer's joints in real time.
Since the initial positions of the light inertial navigation sensing input devices 5 relative to the body segments are unknown when they are attached to the performer, it is difficult to estimate the positions of the body segments by numerically integrating acceleration alone. Appropriate corrections are therefore needed to determine the spatial relationship between the sensors and the body segments and the dimensional information of the body. The behavior acquisition device 3 is thus also provided with a correction module 7, which corrects the other light inertial navigation sensing input devices 5 using the known spatial relationship between one light inertial navigation sensing input device 5 and its corresponding body segment, together with a human body biomechanical model; the known device can be the light inertial navigation sensing input device 5 on the head. The specific method is as follows: the sensor signals from the light inertial navigation sensing input devices 5 and the three-dimensional human body model are described as random events, and a sensor fusion process containing prediction and correction steps is constructed to aggregate them. In the prediction step, the signals of all sensors are processed by an inertial navigation system (INS) algorithm, and the mechanical motion of each body segment is predicted using the known spatial relationship between the sensors and the corresponding body segments together with the biomechanical model. Over longer periods, integrating inertial sensor data accumulates drift errors caused by factors such as sensor noise, signal offset and attitude errors. To correct estimates such as direction, velocity and displacement, the sensor fusion process continually updates them. The correction step performs this updating based on human biomechanical kinematics, the main joints, and detection of body-segment connection points subject to external position and velocity constraints; the motion estimates are fed back to the INS algorithm and to the body-segment motion computation of the next frame.
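The predict/correct principle can be illustrated, in a much reduced form, by a single-axis complementary filter: the gyroscope is integrated as the prediction step, and the accelerometer's gravity reference pulls the estimate back to bound drift. The sketch below illustrates only that principle, not the patent's full INS and biomechanical fusion.

```python
# Simplified illustration of the predict/correct principle: a one-axis
# complementary filter. Not the patent's full sensor fusion.
import math

def complementary_filter(samples, dt, alpha=0.98):
    """samples: iterable of (gyro_rate_rad_s, accel_x, accel_z) tuples.
    Yields the filtered pitch angle (radians) after each sample."""
    angle = 0.0
    for gyro_rate, ax, az in samples:
        predicted = angle + gyro_rate * dt      # prediction: integrate gyroscope
        measured = math.atan2(ax, az)           # correction: gravity direction
        angle = alpha * predicted + (1 - alpha) * measured  # bounded drift
        yield angle
```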
As shown in fig. 3, the behavior acquisition device 3 may instead be a 3D motion sensing camera 8 that collects expression and voice training data or expression and voice field data of the user. The behavior acquisition device 3 is further provided with an expression processing unit 9, which comprises: a first calibration module 10 for respectively collecting speckle pattern data of each reference plane, the reference planes being virtual vertical planes arranged on the stage at intervals, from near to far along the lens direction of the 3D motion sensing camera 8; a storage module 11 that stores the speckle pattern data of each reference plane; and an interpolation calculation module 12 that obtains the user's training expression from the cross-correlation of the expression training data with each speckle pattern, and the field expression from the expression field data.
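The cross-correlation step might look like the following sketch, which correlates a captured speckle patch against each stored reference-plane pattern and interpolates depth between the two best-matching planes; the function names and the linear interpolation scheme are assumptions, and at least two reference planes are assumed.

```python
# Sketch of the cross-correlation and interpolation step. Function names and
# the linear interpolation scheme are assumptions for illustration.
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def estimate_depth(patch, reference_patterns, reference_depths):
    """Correlate a speckle patch with each reference plane's stored pattern
    and interpolate depth between the two strongest matches (>= 2 planes)."""
    scores = [normalized_cross_correlation(patch, ref) for ref in reference_patterns]
    i = int(np.argmax(scores))
    if i == 0:
        j = 1
    elif i == len(scores) - 1:
        j = i - 1
    else:
        j = i + 1 if scores[i + 1] >= scores[i - 1] else i - 1
    w = scores[j] / (scores[i] + scores[j] + 1e-9)  # weight toward neighbor plane
    return (1 - w) * reference_depths[i] + w * reference_depths[j]
```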
As shown in fig. 4, the behavior acquisition device 3 may also be a motion capture unit 13 that collects motion training data or motion field data of the user, including position data, direction data and speed data of the user's body movement during motion.
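Given body positions detected in consecutive frames, the direction and speed data could be derived as in the sketch below; the detection of the positions themselves (e.g. by pose estimation) is out of scope here, and the function name is hypothetical.

```python
# Sketch: derive direction and speed from body positions in consecutive
# frames. Names are illustrative assumptions.
import math

def motion_from_positions(p_prev, p_curr, dt):
    """p_prev, p_curr: (x, y) positions in camera-image coordinates;
    dt: time between frames in seconds.
    Returns (position, direction in radians, speed in pixels per second)."""
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    return p_curr, math.atan2(dy, dx), math.hypot(dx, dy) / dt
```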
The behavior acquisition device 3 is further provided with an image calculation processing unit 14, which comprises: a second calibration module 15 for obtaining relative position data between the motion capture unit 13 and the projection image through calibration of the motion capture unit 13; an image processing module 16 that analyzes the continuous image data of the user's motion and obtains the position, direction and speed data of the body movement in the motion capture unit 13 during the motion; and a coordinate conversion module 17 that converts the position, direction and speed data in the image coordinate system of the motion capture unit 13 into projection position, projection direction and projection speed data in the image coordinate system of the projection unit.
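One standard way to perform this coordinate conversion is a 3x3 homography from the camera image plane to the projector image plane, estimated during the calibration step; the sketch below assumes that approach, and the matrix values are placeholders only.

```python
# Sketch of the coordinate conversion step using a 3x3 homography from the
# camera image plane to the projector image plane. The matrix values are
# placeholders; a real system would estimate them during calibration.
import numpy as np

H_CAM_TO_PROJ = np.array([
    [1.02, 0.01, -12.0],
    [0.00, 0.98,   5.0],
    [0.00, 0.00,   1.0],
])

def camera_to_projector(x: float, y: float):
    """Convert camera-image coordinates to projector-image coordinates."""
    px, py, pw = H_CAM_TO_PROJ @ np.array([x, y, 1.0])
    return px / pw, py / pw
```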
The implementation principle of the embodiment is as follows: in actual operation, the behavior acquisition device 3 first collects the user's behavior; the control terminal 1 then compares it with the behaviors in the lighting parameter database 4, obtains the lighting parameters to which the light-emitting device 2 should be adjusted, and performs the corresponding control.
The embodiments described are preferred embodiments of the invention, and the scope of protection of the invention is not limited to them: all equivalent changes made according to the structure, shape and principle of the invention fall within its scope of protection.

Claims (9)

1. A somatosensory interactive light control system, comprising a control terminal (1) and a light-emitting device (2), characterized by comprising:
the behavior acquisition device (3) is used for acquiring the behavior of the user in real time;
a lighting parameter database (4) for storing user behaviors and the lighting parameters of the light-emitting device (2) matched with each user behavior;
the control terminal (1) collects user behaviors in real time through the behavior acquisition device (3), queries the lighting parameter database (4) for the lighting parameters of the light-emitting device (2) matched with the user behavior, and adjusts the light-emitting device (2) to the corresponding parameters in real time to realize interaction between people and the lamp.
2. Somatosensory interactive light control system according to claim 1, wherein the behavior acquisition device (3) comprises:
a plurality of light inertial navigation sensing input devices (5), respectively fixed on the head, the trunk, the upper limbs and the lower limbs of a user, each light inertial navigation sensing input device (5) collecting the posture field data of the corresponding joints of the user in real time;
and a 3D reconstruction unit (6) that reconstructs a three-dimensional field model of the user according to the posture field data.
3. A somatosensory interactive light control system according to claim 2, wherein the behavior acquisition device (3) is further provided with a correction module (7) for correcting the other light inertial navigation sensing input devices (5) by using the known spatial relationship between one light inertial navigation sensing input device (5) and its corresponding body segment, together with a human body biomechanical model.
4. Somatosensory interactive light control system according to claim 1, wherein the behavior acquisition device (3) comprises:
a 3D motion sensing camera (8) for collecting expression and voice training data or expression and voice field data of the user.
5. A somatosensory interactive light control system according to claim 4, wherein the behavior acquisition device (3) is further provided with an expression processing unit (9), and the expression processing unit (9) comprises:
the first calibration module (10) is used for respectively collecting speckle pattern data of each reference plane, and the reference planes are virtual vertical planes which are sequentially arranged on the stage at intervals from near to far along the lens direction of the 3D motion sensing camera (8);
a storage module (11) that stores the speckle pattern data for each of the reference planes;
the interpolation calculation module (12) is used for obtaining the training expression of the user according to the cross-correlation operation result of the expression training data and each speckle pattern data; and obtaining the field expression according to the expression field data.
6. Somatosensory interactive light control system according to claim 1, wherein the behavior acquisition device (3) comprises:
the motion capture unit (13) collects motion training data or motion field data of the user, wherein the motion training data or the motion field data comprise position data, direction data and speed data of body movement of the user in the motion process.
7. A somatosensory interactive light control system according to claim 6, wherein the behavior acquisition device (3) is further provided with an image calculation processing unit (14), and the image calculation processing unit (14) comprises:
the second calibration module (15) is used for obtaining relative position data between the motion capture unit (13) and the projection image through calibration of the motion capture unit;
the image processing module (16) is used for calculating and analyzing the continuous image data of the user motion process to obtain position data, direction data and speed data of the body movement of the user in the motion capturing unit (13) during the user motion process;
a coordinate conversion module (17) for converting the position data, the direction data and the speed data in the image coordinate system of the motion capture unit (13) into projection position data, projection direction data and projection speed data in the image coordinate system of the projection unit.
8. A somatosensory interactive light control system according to claim 1, wherein the lighting parameter database (4) further records the number of times the control terminal (1) has queried each behavior and ranks the behaviors from most to least queried, and when querying the lighting parameter database (4), the control terminal (1) searches the behaviors in that order.
9. A somatosensory interactive light control system according to claim 8, wherein the parameter database of the light-emitting device (2) further records, for each behavior, the follow-up actions that succeed it and how often each occurs, sorted from most to least frequent, and when the control terminal (1) performs successive queries, it queries preferentially in this order.
CN201811346674.1A (filed 2018-11-13, priority 2018-11-13) Somatosensory interactive light control system, Pending, published as CN111194122A (en)

Priority Applications (1)

Application Number: CN201811346674.1A
Priority Date: 2018-11-13
Filing Date: 2018-11-13
Title: Somatosensory interactive light control system

Publications (1)

Publication Number: CN111194122A (en)
Publication Date: 2020-05-22

Family

ID=70708926

Family Applications (1)

Application Number: CN201811346674.1A
Title: Somatosensory interactive light control system
Priority Date: 2018-11-13
Filing Date: 2018-11-13

Country Status (1)

CN: CN111194122A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110175840A1 (en) * 2008-09-23 2011-07-21 Koninklijke Philips Electronics N.V. Interactive ambience creating system
CN103578135A (en) * 2013-11-25 2014-02-12 恒德数字舞美科技有限公司 Virtual image and real scene combined stage interaction integrating system and realizing method thereof
CN204350420U (en) * 2015-02-06 2015-05-20 恒德数字舞美科技有限公司 A kind of stage light control system based on micro-inertia sensor
US20170108838A1 (en) * 2015-10-14 2017-04-20 Hand Held Products, Inc. Building lighting and temperature control with an augmented reality system
CN106793420A (en) * 2017-01-22 2017-05-31 北京鸿光丽辉科技有限公司 The method of illuminator and control illuminator
CN107734785A (en) * 2017-11-15 2018-02-23 广东工业大学 A kind of lamp light control system of LED
CN207150925U (en) * 2018-01-02 2018-03-27 江西省中业景观工程安装有限公司 A kind of body-sensing interaction lamp light control system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111726921A (en) * 2020-05-25 2020-09-29 磁场科技(北京)有限公司 Somatosensory interactive light control system
CN111726921B (en) * 2020-05-25 2022-09-23 磁场科技(北京)有限公司 Somatosensory interactive light control system
CN111918453A (en) * 2020-08-18 2020-11-10 深圳市秀骑士科技有限公司 LED light scene control system and control method thereof
CN113110285A (en) * 2021-05-08 2021-07-13 深圳市明泰润投资发展有限公司 Jewelry wall interaction system and method
CN113609958A (en) * 2021-08-02 2021-11-05 金茂智慧科技(广州)有限公司 Light adjusting method and related device
CN114585141A (en) * 2022-01-10 2022-06-03 自贡海天文化股份有限公司 Human interactive LED landscape building light control system
CN116095929A (en) * 2023-03-03 2023-05-09 哈尔滨师范大学 Lighting control system based on intelligent switch application
CN116095929B (en) * 2023-03-03 2024-03-08 哈尔滨师范大学 Lighting control system based on intelligent switch application
CN116455522A (en) * 2023-06-13 2023-07-18 良业科技集团股份有限公司 Method and system for transmitting lamplight interaction control information
CN116455522B (en) * 2023-06-13 2023-08-29 良业科技集团股份有限公司 Method and system for transmitting lamplight interaction control information


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200522)