CN105446623A - Multi-interaction projection method and system - Google Patents


Info

Publication number
CN105446623A
CN105446623A (application CN201510811560.XA)
Authority
CN
China
Prior art keywords
operational motion
video
operating body
image
projected image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510811560.XA
Other languages
Chinese (zh)
Inventor
杨伟樑
高志强
王梓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vision Technology (shenzhen) Co Ltd
Original Assignee
Vision Technology (shenzhen) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vision Technology (shenzhen) Co Ltd filed Critical Vision Technology (shenzhen) Co Ltd
Priority to CN201510811560.XA priority Critical patent/CN105446623A/en
Publication of CN105446623A publication Critical patent/CN105446623A/en
Priority to PCT/CN2016/083257 priority patent/WO2017084286A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a multi-interaction projection method and system. The system comprises a processor, a projector, a first video acquisition device, and a second video acquisition device. The first video acquisition device acquires a first video image, and the second video acquisition device acquires a second video image. The processor identifies a first operational motion performed by a first operating body and the first video position of that motion, obtains the first mapping position in the projected image corresponding to the first video position, identifies a second operational motion performed by a second operating body and the second video position of that motion, and obtains the second mapping position in the projected image corresponding to the second video position. The projected image is adjusted according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion, and the adjusted projected image is sent to the projector. In this way, two operating bodies operate on the projected image in different operating regions, realizing multi-interaction projection.

Description

Multi-interaction projection method and system
Technical field
The present invention relates to the technical field of projection processing, and in particular to a multi-interaction projection method and system.
Background Art
A projector is a device that can project images or video onto a screen, presenting the picture magnified several-fold or even dozens of times while preserving sharpness. This makes viewing convenient and gives viewers a wide field of view, so projectors are deeply welcomed by users.
At present, a novel projection mode has emerged in the projection field: interactive projection. Interactive projection uses computer vision and projection display technology so that a user can interact with a virtual scene in the projection area directly with a foot or hand. The principle of an interactive projection system is as follows: an image capture device captures images of the user, an image analysis system analyzes them, and the projected image is adjusted accordingly, producing an interactive effect between the user and the projection area. However, the present inventors found in long-term research that current interactive projection technology usually installs an image capture device on only one side of the projection area, so a user can perform interactive operations only within the capture range of that single device. This limits the user's range of interaction and makes the interaction monotonous.
Summary of the invention
The technical problem mainly solved by the present invention is to provide a multi-interaction projection method and system that allow two operating bodies to operate on the projected image in different operating regions, thereby realizing multi-interaction projection.
To solve the above technical problem, the technical scheme adopted by the present invention is to provide a multi-interaction projection system comprising a processor, a projector, a first video acquisition device, and a second video acquisition device, the processor being connected to the projector, the first video acquisition device, and the second video acquisition device respectively. The projector receives the projected image sent by the processor and projects it. The first video acquisition device acquires a first video image while a first operating body performs an interactive operation in a first operating region. The second video acquisition device acquires a second video image while a second operating body performs an interactive operation in a second operating region, wherein the position of each pixel in the first and second video images has a mapping relationship with the position of each pixel in the projected image. The processor is configured to: identify, from the first video image, a first operational motion performed by the first operating body and the first video position of that motion within the first video image; obtain the first mapping position in the projected image corresponding to the first video position; identify, from the second video image, a second operational motion performed by the second operating body and the second video position of that motion within the second video image; obtain the second mapping position in the projected image corresponding to the second video position; adjust the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion; and send the adjusted projected image to the projector so that the projector projects it.
Wherein, the step in which the processor identifies, from the first video image, the first operational motion performed by the first operating body comprises: subtracting two consecutive frames of the first video image according to an image difference algorithm to obtain motion data of the first operating body, and identifying the first operational motion corresponding to that motion data. The step in which the processor identifies, from the second video image, the second operational motion performed by the second operating body comprises: subtracting two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operational motion corresponding to that motion data.
Wherein, the step in which the processor adjusts the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion comprises: identifying a first operational instruction from the first operational motion, identifying the first operated picture in the projected image corresponding to the first mapping position, and performing on that picture the operation indicated by the first operational instruction; identifying a second operational instruction from the second operational motion, identifying the second operated picture in the projected image corresponding to the second mapping position, and performing on that picture the operation indicated by the second operational instruction; and adjusting the projected image according to the performed operations.
Wherein, the system further comprises a first voice acquisition device and a second voice acquisition device. The first voice acquisition device acquires a first voice command issued while the first operating body performs an interactive operation in the first operating region; the second voice acquisition device acquires a second voice command issued while the second operating body performs an interactive operation in the second operating region. The step in which the processor adjusts the projected image then comprises: adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion, in combination with the first and second voice commands.
Wherein, the first and second video acquisition devices are camera devices equipped with a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor.
Wherein, the projector is a digital light processing (DLP) micro-projection device with a zoom function.
To solve the above technical problem, another technical scheme adopted by the present invention is to provide a multi-interaction projection method, comprising: after a projector projects a projected image, acquiring a first video image while a first operating body performs an interactive operation in a first operating region, and acquiring a second video image while a second operating body performs an interactive operation in a second operating region, wherein the position of each pixel in the first and second video images has a mapping relationship with the position of each pixel in the projected image; identifying, from the first video image, a first operational motion performed by the first operating body and the first video position of that motion within the first video image; obtaining the first mapping position in the projected image corresponding to the first video position; identifying, from the second video image, a second operational motion performed by the second operating body and the second video position of that motion within the second video image; obtaining the second mapping position in the projected image corresponding to the second video position; and adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion.
Wherein, identifying, from the first video image, the first operational motion performed by the first operating body comprises: subtracting two consecutive frames of the first video image according to an image difference algorithm to obtain motion data of the first operating body, and identifying the first operational motion corresponding to that motion data. Identifying, from the second video image, the second operational motion performed by the second operating body comprises: subtracting two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operational motion corresponding to that motion data.
Wherein, adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion comprises: identifying a first operational instruction from the first operational motion, identifying the first operated picture in the projected image corresponding to the first mapping position, and performing on that picture the operation indicated by the first operational instruction; identifying a second operational instruction from the second operational motion, identifying the second operated picture in the projected image corresponding to the second mapping position, and performing on that picture the operation indicated by the second operational instruction; and adjusting the projected image according to the performed operations.
Wherein, while acquiring the first video image of the first operating body's interactive operation in the first operating region, the method also comprises acquiring a first voice command of the first operating body; while acquiring the second video image of the second operating body's interactive operation in the second operating region, the method also comprises acquiring a second voice command of the second operating body. The step of adjusting the projected image then comprises: adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion, in combination with the first and second voice commands.
The beneficial effects of the invention are as follows. Unlike the prior art, the present invention acquires a first video image of a first operating body and a second video image of a second operating body, identifies the first operational motion and first video position from the first video image, identifies the second operational motion and second video position from the second video image, and adjusts the projected image according to the first mapping position mapped from the first video position, the first operational motion, the second mapping position mapped from the second video position, and the second operational motion. Two operating bodies can thereby operate on the projected image in different operating regions, realizing multi-interaction projection.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of an embodiment of the multi-interaction projection system of the present invention;
Fig. 2 is a flowchart of an embodiment of the multi-interaction projection method of the present invention.
Detailed Description
The present invention is described in detail below in conjunction with the drawings and embodiments.
Referring to Fig. 1, the multi-interaction projection system 20 comprises a processor 21, a projector 22, a first video acquisition device 23, and a second video acquisition device 24; the processor 21 is connected to the projector 22, the first video acquisition device 23, and the second video acquisition device 24 respectively. It should be noted that the connections between the processor 21 and the projector 22, first video acquisition device 23, and second video acquisition device 24 may be wired or wireless, for example WiFi, Bluetooth, or 3G/4G wireless communication. The processor 21 may run an operating system such as Windows, Android, or iOS; running an operating system makes it convenient to extend the functions of the multi-interaction projection system 20.
The projector 22 receives the projected image sent by the processor 21 and projects it. After the projector 22 projects the projected image, the first video acquisition device 23 acquires a first video image while the first operating body performs an interactive operation in the first operating region, and the second video acquisition device 24 acquires a second video image while the second operating body performs an interactive operation in the second operating region. The position of each pixel in the first and second video images has a mapping relationship with the position of each pixel in the projected image. An interactive operation by the first operating body means the first operating body operating on the projected image; likewise, an interactive operation by the second operating body means the second operating body operating on the projected image.
The processor 21 identifies, from the first video image, the first operational motion performed by the first operating body and the first video position of that motion within the first video image, and obtains the first mapping position in the projected image corresponding to the first video position; from the second video image, it identifies the second operational motion performed by the second operating body and the second video position of that motion within the second video image, and obtains the second mapping position in the projected image corresponding to the second video position. The first mapping position is the position in the projected image that corresponds to the first operational motion after the motion is mapped into the projected image; the second mapping position is, analogously, the position in the projected image that corresponds to the second operational motion after mapping.
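The patent states only that a mapping relationship exists between camera pixels and projected-image pixels, without specifying its form. A minimal sketch, assuming the mapping has been calibrated in advance as a planar homography (a common choice for camera-projector setups, but an assumption here), might look like this:

```python
import numpy as np

def map_to_projection(video_xy, H):
    """Map a pixel position in a camera frame to the projected image,
    using a pre-calibrated 3x3 planar homography H. The homography
    form is a hypothetical choice; the patent only states that a
    pixel-to-pixel mapping relationship exists."""
    x, y = video_xy
    p = H @ np.array([x, y, 1.0])
    # Divide by the homogeneous coordinate to get image coordinates.
    return (float(p[0] / p[2]), float(p[1] / p[2]))

# With the identity homography, a point maps to itself.
H = np.eye(3)
print(map_to_projection((120, 80), H))  # -> (120.0, 80.0)
```

In a real system, H would be estimated once during calibration, e.g. by projecting known markers and locating them in the camera image.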
The processor 21 also adjusts the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion, and sends the adjusted projected image to the projector 22, which then projects it. In brief, after an operating body performs an operation, the projected image is adjusted according to that operational motion; the adjustment is caused by the operating body's operation, realizing interaction. Furthermore, in the present invention two operating bodies can operate in two different operating regions, and the projected image is adjusted according to both operational motions, realizing multi-interaction. For example, the projector 22 may project a badminton game interface; two players can stand in different operating regions and play, the game is controlled according to the players' operations, and the badminton game interface projected by the projector 22 changes accordingly.
It should be noted that the first and second operating regions may be located on the front and rear sides of the projection area of the projector 22, the front side being the side facing the projector 22 and the rear side being the side away from it. Of course, in other alternative embodiments the first and second operating regions may both be located on the front side, or both on the rear side, of the projection area. Moreover, the above describes only two operating bodies operating in two different operating regions; following the technical idea of the present invention, those skilled in the art may also arrange three, four, or more operating regions, with multiple operating bodies each operating in a different region, to realize multi-interaction.
Specifically, operational motions can be identified with an image difference algorithm. The processor 21 then identifies the first operational motion as follows: according to the image difference algorithm, it subtracts two consecutive frames of the first video image to obtain motion data of the first operating body, and identifies the first operational motion corresponding to that motion data. The second operational motion is identified in the same way: two consecutive frames of the second video image are subtracted to obtain motion data of the second operating body, and the second operational motion corresponding to that motion data is identified. Of course, in other alternative embodiments, the operational motions may be identified in other ways, for example by tracking the hand motion trajectories of the first and second operating bodies and identifying the first and second operational motions from those trajectories.
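The frame-subtraction step can be sketched as follows. This is a minimal illustration: the threshold value and the use of a centroid as the "motion data" are assumptions, since the patent does not specify how the difference is thresholded or how motion data is turned into a recognized action.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=30):
    """Subtract two consecutive grayscale frames and return a binary
    motion mask plus the centroid of the moving region (one possible
    form of 'motion data'). The threshold is illustrative only."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return mask, None  # no motion between the two frames
    ys, xs = np.nonzero(mask)
    centroid = (float(xs.mean()), float(ys.mean()))
    return mask, centroid

# Two 4x4 frames differing in a single pixel at row 1, column 2.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1, 2] = 200
mask, c = frame_difference(a, b)
print(c)  # -> (2.0, 1.0)
```

A production system would typically smooth the frames first and aggregate centroids over several frames before classifying the motion as a gesture.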
The processor 21 adjusts the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion as follows: it identifies a first operational instruction from the first operational motion, identifies the first operated picture in the projected image corresponding to the first mapping position, and performs on that picture the operation indicated by the first operational instruction; likewise, it identifies a second operational instruction from the second operational motion, identifies the second operated picture corresponding to the second mapping position, and performs on it the operation indicated by the second operational instruction; the projected image is then adjusted according to the performed operations. Correspondences between operational motions and operational instructions can be established in advance, for example: spreading the hand open represents an open instruction, raising the hand represents a close instruction, sliding the hand to the right represents switching rightward, and so on.
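The predefined motion-to-instruction correspondence can be represented as a simple lookup table. The gesture names and instruction labels below are hypothetical placeholders based on the examples in the text, not identifiers from the patent:

```python
# Hypothetical gesture-to-instruction table, modeled on the examples
# in the description (spread hand -> open, raised hand -> close,
# rightward slide -> switch right).
GESTURE_COMMANDS = {
    "hand_spread": "OPEN",
    "hand_raised": "CLOSE",
    "slide_right": "SWITCH_RIGHT",
}

def motion_to_instruction(motion):
    """Return the operational instruction for a recognized motion,
    or None if the motion has no predefined meaning."""
    return GESTURE_COMMANDS.get(motion)

print(motion_to_instruction("hand_spread"))  # -> OPEN
```

Each operating body's recognized motion would be passed through such a table independently, so the two bodies can issue different instructions at the same time.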
To further improve interactivity, voice commands from the operating bodies can also be acquired and used together with the motions to adjust the projected content. In that case, the multi-interaction projection system 20 further comprises a first voice acquisition device 25 and a second voice acquisition device 26. The first voice acquisition device 25 acquires the first voice command issued while the first operating body performs an interactive operation in the first operating region; the second voice acquisition device 26 acquires the second voice command issued while the second operating body performs an interactive operation in the second operating region. The processor 21 then adjusts the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion, in combination with the first and second voice commands. In embodiments of the present invention, the first video acquisition device 23 and second video acquisition device 24 are preferably camera devices equipped with a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor, and the projector 22 is preferably a digital light processing (DLP) micro-projection device with a zoom function.
In embodiments of the present invention, a first video image of the first operating body and a second video image of the second operating body are acquired; the first operational motion and first video position are identified from the first video image, and the second operational motion and second video position from the second video image; the projected image is adjusted according to the first mapping position mapped from the first video position, the first operational motion, the second mapping position mapped from the second video position, and the second operational motion. Two operating bodies thereby operate on the projected image in different operating regions, realizing multi-interaction projection.
The present invention also provides an embodiment of the multi-interaction projection method. Referring to Fig. 2, the method comprises:
Step S301: after the projector projects the projected image, acquire a first video image while the first operating body performs an interactive operation in the first operating region, and acquire a second video image while the second operating body performs an interactive operation in the second operating region, wherein the position of each pixel in the first and second video images has a mapping relationship with the position of each pixel in the projected image;
An interactive operation by the first operating body means an operation the first operating body performs on the projected image in the first operating region; likewise, an interactive operation by the second operating body means an operation the second operating body performs on the projected image in the second operating region. For example, when a projected image of an automobile component is displayed, an operating body may perform a two-handed spreading action on it.
Step S302: identify, from the first video image, the first operational motion performed by the first operating body and the first video position of that motion within the first video image;
Before and after an operating body performs an operation, the video images differ; an image difference algorithm can therefore identify the difference between two adjacent video frames and thus the operating body's motion. Step S302 may then be embodied as: according to the image difference algorithm, subtract two consecutive frames of the first video image to obtain motion data of the first operating body, and identify the first operational motion corresponding to that motion data. Of course, in other alternative embodiments, the first operational motion of the first operating body may be identified in other ways, for example by gesture trajectory recognition.
Step S303: obtain the first mapping position in the projected image corresponding to the first video position;
The first mapping position is the position the first operational motion occupies when mapped into the projected image.
Step S304: identify, from the second video image, the second operational motion performed by the second operating body and the second video position of that motion within the second video image;
The second operational motion can likewise be identified with the image difference algorithm, so step S304 may be embodied as: subtract two consecutive frames of the second video image to obtain motion data of the second operating body, and identify the second operational motion corresponding to that motion data. Of course, in other alternative embodiments, the second operational motion of the second operating body may be identified in other ways, for example by gesture trajectory recognition.
Step S305: obtain the second mapping position in the projected image corresponding to the second video position;
The second mapping position is the position the second operational motion occupies when mapped into the projected image.
Step S306: adjust the projected image according to the first mapping position, the first operational motion, the second mapping position, and the second operational motion.
When the first and second operating bodies operate, the projected image is adjusted according to their operations, realizing multi-interaction. In one implementation of the present invention, step S306 is embodied as: identify a first operational instruction from the first operational motion, identify the first operated picture in the projected image corresponding to the first mapping position, and perform on it the operation indicated by the first operational instruction; identify a second operational instruction from the second operational motion, identify the second operated picture in the projected image corresponding to the second mapping position, and perform on it the operation indicated by the second operational instruction; then adjust the projected image according to the performed operations.
It is worth noting that when the projected content is adjusted according to the operations of the first operating body and the second operating body, the adjustment follows the chronological order in which those operations were performed. For example, if the first operating body performs an interactive operation first and the second operating body performs one afterwards, the projected content is first adjusted according to the interactive operation of the first operating body and then according to the interactive operation of the second operating body.
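The chronological ordering described above amounts to sorting the two bodies' recognized operations by capture time before applying them. A sketch, with all field and action names as illustrative assumptions:

```python
def apply_in_time_order(operations):
    """Return the adjustments in the chronological order the two
    operating bodies performed them. Each operation is a dict with
    the body name, the recognized action, its mapping position and
    a capture timestamp. (Sketch only: a real system would redraw
    the projected image at each step instead of logging.)"""
    ordered = sorted(operations, key=lambda op: op["timestamp"])
    return [(op["body"], op["action"], op["position"]) for op in ordered]

operations = [
    {"body": "second", "action": "drag",  "position": (400, 300), "timestamp": 2.5},
    {"body": "first",  "action": "click", "position": (100, 200), "timestamp": 1.0},
]
log = apply_in_time_order(operations)
# The first operating body's earlier click is applied before the
# second operating body's later drag, matching the described order.
```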
Further, to improve the interactivity of the interactive projection, voice commands of the operating bodies may also be captured, and the projected content adjusted in combination with them. While capturing the first video image of the first operating body performing an interactive operation in the first operating area, a first voice command of the first operating body may also be captured; and while capturing the second video image of the second operating body performing an interactive operation in the second operating area, a second voice command of the second operating body may also be captured. Step S306 is then specifically: adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion, in combination with the first voice command and the second voice command.
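Combining an operational motion with a voice command might look like the following sketch, in which a voice command refines what the same gesture means. The specific gesture and command names are hypothetical, since the patent does not enumerate them:

```python
def fuse_motion_and_voice(motion_action, voice_command):
    """Pick the adjustment to apply to the projected image from the
    pair (recognized operational motion, captured voice command).
    A missing voice command is represented as None. All action and
    command names are illustrative assumptions."""
    adjustments = {
        ("swipe", "next page"): "advance_slide",
        ("swipe", None):        "pan_image",
        ("tap",   "zoom in"):   "enlarge_region",
    }
    return adjustments.get((motion_action, voice_command), "no_op")

# The same swipe pans the image when silent, but advances the slide
# when accompanied by the voice command "next page".
```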
In the embodiments of the present invention, a first video image of the first operating body and a second video image of the second operating body are captured; the first operational motion and the first video position are identified from the first video image, and the second operational motion and the second video position from the second video image; and the projected image is adjusted according to the first mapping position mapped from the first video position, the first operational motion, the second mapping position mapped from the second video position, and the second operational motion. Two operating bodies can thus operate on the projected image in different operating areas, realizing multi-user interaction.
The foregoing describes only embodiments of the present invention and does not thereby limit the scope of the claims. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A multi-interaction projection system, characterized in that it comprises a processor, a projector, a first video capture device and a second video capture device, the processor being connected to the projector, the first video capture device and the second video capture device respectively;
the projector is configured to receive the projected image sent by the processor and to project the projected image;
the first video capture device is configured to capture a first video image of a first operating body performing an interactive operation in a first operating area;
the second video capture device is configured to capture a second video image of a second operating body performing an interactive operation in a second operating area, wherein the position of each pixel in the first video image and the second video image has a mapping relation with the position of each pixel of the projected image;
the processor is configured to:
identify, according to the first video image, the first operational motion performed by the first operating body and the first video position at which the first operational motion is located in the first video image;
obtain the first mapping position in the projected image that maps to the first video position;
identify, according to the second video image, the second operational motion performed by the second operating body and the second video position at which the second operational motion is located in the second video image;
obtain the second mapping position in the projected image that maps to the second video position;
adjust the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion, and send the adjusted projected image to the projector so that the projector projects the adjusted projected image.
2. The system according to claim 1, characterized in that:
the step of the processor identifying, according to the first video image, the first operational motion performed by the first operating body comprises:
subtracting two consecutive frames of the first video image according to an image difference algorithm to obtain motion data of the first operating body, and identifying the first operational motion corresponding to the motion data of the first operating body;
the step of the processor identifying, according to the second video image, the second operational motion performed by the second operating body comprises:
subtracting two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operational motion corresponding to the motion data of the second operating body.
3. The system according to claim 1, characterized in that:
the step of the processor adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion comprises:
identifying a first operation instruction according to the first operational motion, identifying in the projected image the first operated picture corresponding to the first mapping position, and performing on the first operated picture the operation indicated by the first operation instruction,
and,
identifying a second operation instruction according to the second operational motion, identifying in the projected image the second operated picture corresponding to the second mapping position, performing on the second operated picture the operation indicated by the second operation instruction, and adjusting the projected image according to the operations performed.
4. The system according to claim 1, characterized in that the system further comprises a first voice capture device and a second voice capture device;
the first voice capture device is configured to capture a first voice command issued when the first operating body performs an interactive operation in the first operating area;
the second voice capture device is configured to capture a second voice command issued when the second operating body performs an interactive operation in the second operating area;
the step of the processor adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion comprises: adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion, in combination with the first voice command and the second voice command.
5. The system according to claim 1, characterized in that:
the first video capture device and the second video capture device are camera devices equipped with a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) image sensor.
6. The system according to claim 1, characterized in that:
the projector is a digital light processing (DLP) micro-projection device with a zoom function.
7. A multi-interaction projection method, characterized in that it comprises:
after a projector projects a projected image, capturing a first video image of a first operating body performing an interactive operation in a first operating area, and capturing a second video image of a second operating body performing an interactive operation in a second operating area, wherein the position of each pixel in the first video image and the second video image has a mapping relation with the position of each pixel of the projected image;
identifying, according to the first video image, the first operational motion performed by the first operating body and the first video position at which the first operational motion is located in the first video image;
obtaining the first mapping position in the projected image that maps to the first video position;
identifying, according to the second video image, the second operational motion performed by the second operating body and the second video position at which the second operational motion is located in the second video image;
obtaining the second mapping position in the projected image that maps to the second video position;
adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion.
8. The method according to claim 7, characterized in that:
the step of identifying, according to the first video image, the first operational motion performed by the first operating body comprises:
subtracting two consecutive frames of the first video image according to an image difference algorithm to obtain motion data of the first operating body, and identifying the first operational motion corresponding to the motion data of the first operating body;
the step of identifying, according to the second video image, the second operational motion performed by the second operating body comprises:
subtracting two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operational motion corresponding to the motion data of the second operating body.
9. The method according to claim 7, characterized in that:
the step of adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion comprises:
identifying a first operation instruction according to the first operational motion, identifying in the projected image the first operated picture corresponding to the first mapping position, and performing on the first operated picture the operation indicated by the first operation instruction,
and,
identifying a second operation instruction according to the second operational motion, identifying in the projected image the second operated picture corresponding to the second mapping position, performing on the second operated picture the operation indicated by the second operation instruction, and adjusting the projected image according to the operations performed.
10. The method according to claim 7, characterized in that:
while capturing the first video image of the first operating body performing an interactive operation in the first operating area, the method further comprises capturing a first voice command of the first operating body;
while capturing the second video image of the second operating body performing an interactive operation in the second operating area, the method further comprises capturing a second voice command of the second operating body;
the step of adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion comprises:
adjusting the projected image according to the first mapping position, the first operational motion, the second mapping position and the second operational motion, in combination with the first voice command and the second voice command.
CN201510811560.XA 2015-11-20 2015-11-20 Multi-interaction projection method and system Pending CN105446623A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510811560.XA CN105446623A (en) 2015-11-20 2015-11-20 Multi-interaction projection method and system
PCT/CN2016/083257 WO2017084286A1 (en) 2015-11-20 2016-05-25 Multi-interactive projection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510811560.XA CN105446623A (en) 2015-11-20 2015-11-20 Multi-interaction projection method and system

Publications (1)

Publication Number Publication Date
CN105446623A true CN105446623A (en) 2016-03-30

Family

ID=55556890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510811560.XA Pending CN105446623A (en) 2015-11-20 2015-11-20 Multi-interaction projection method and system

Country Status (2)

Country Link
CN (1) CN105446623A (en)
WO (1) WO2017084286A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699240A (en) * 2015-02-09 2015-06-10 联想(北京)有限公司 Control method and electronic equipment
CN106095098A (en) * 2016-06-07 2016-11-09 深圳奥比中光科技有限公司 Body feeling interaction device and body feeling interaction method
CN106293346A (en) * 2016-08-11 2017-01-04 深圳市金立通信设备有限公司 The changing method of a kind of virtual reality what comes into a driver's and terminal
WO2017084286A1 (en) * 2015-11-20 2017-05-26 广景视睿科技(深圳)有限公司 Multi-interactive projection system and method
WO2017197779A1 (en) * 2016-05-18 2017-11-23 广景视睿科技(深圳)有限公司 Method and system for implementing interactive projection

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113247007A (en) * 2021-06-22 2021-08-13 肇庆小鹏新能源投资有限公司 Vehicle control method and vehicle

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102221886A (en) * 2010-06-11 2011-10-19 微软公司 Interacting with user interface through metaphoric body
CN103218058A (en) * 2012-01-18 2013-07-24 北京德信互动网络技术有限公司 Human-machine interaction system and method based on projection technology
CN103455141A (en) * 2013-08-15 2013-12-18 无锡触角科技有限公司 Interactive projection system and correction method of depth sensor and projector of interactive projection system
CN203930308U (en) * 2014-05-15 2014-11-05 上海味寻信息科技有限公司 A kind of novel interactive projector
CN104217619A (en) * 2014-09-19 2014-12-17 江苏卡罗卡国际动漫城有限公司 Multi-user dance teaching interactive projection device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20090221368A1 (en) * 2007-11-28 2009-09-03 Ailive Inc., Method and system for creating a shared game space for a networked game
CN101995943B (en) * 2009-08-26 2011-12-14 介面光电股份有限公司 three-dimensional image interactive system
CN103176733A (en) * 2011-12-20 2013-06-26 西安天动数字科技有限公司 Electronic interactive aquarium system
CN105446623A (en) * 2015-11-20 2016-03-30 广景视睿科技(深圳)有限公司 Multi-interaction projection method and system


Cited By (6)

Publication number Priority date Publication date Assignee Title
CN104699240A (en) * 2015-02-09 2015-06-10 联想(北京)有限公司 Control method and electronic equipment
CN104699240B (en) * 2015-02-09 2018-01-23 联想(北京)有限公司 A kind of control method and electronic equipment
WO2017084286A1 (en) * 2015-11-20 2017-05-26 广景视睿科技(深圳)有限公司 Multi-interactive projection system and method
WO2017197779A1 (en) * 2016-05-18 2017-11-23 广景视睿科技(深圳)有限公司 Method and system for implementing interactive projection
CN106095098A (en) * 2016-06-07 2016-11-09 深圳奥比中光科技有限公司 Body feeling interaction device and body feeling interaction method
CN106293346A (en) * 2016-08-11 2017-01-04 深圳市金立通信设备有限公司 The changing method of a kind of virtual reality what comes into a driver's and terminal

Also Published As

Publication number Publication date
WO2017084286A1 (en) 2017-05-26

Similar Documents

Publication Publication Date Title
CN105446623A (en) Multi-interaction projection method and system
CN204465706U (en) Terminal installation
CN102221887B (en) Interactive projection system and method
US8241125B2 (en) Apparatus and method of interaction with a data processor
CN104777991B (en) A kind of remote interaction optical projection system based on mobile phone
CN105898460A (en) Method and device for adjusting panorama video play visual angle of intelligent TV
WO2014082921A3 (en) Digital image capture device having a panorama mode
CN104117201A (en) Projection type billiard system gesture/billiard rod control system and implement method of projection type billiard system gesture/billiard rod control system
CN106537895A (en) Information-processing device, information processing method, and program
CN103543827A (en) Immersive outdoor activity interactive platform implement method based on single camera
CN103533276A (en) Method for quickly splicing multiple projections on plane
CN110764342A (en) Intelligent projection display device and adjustment method of projection picture thereof
CN202495992U (en) Studio and live interactive performance system
CN102188819A (en) Device and method for controlling video game
CN102902356A (en) Gesture control system and control method thereof
CN107079098A (en) Image playing method and device based on Pan/Tilt/Zoom camera
CN107368104B (en) Random point positioning method based on mobile phone APP and household intelligent pan-tilt camera
CN106851236A (en) Projector equipment, system and method
CN206214708U (en) A set of portable auxiliary training system with dual camera, wireless video transmission, time delay broadcasting and recording function
CN110446092B (en) Virtual auditorium generation method, system, device and medium for sports game
CN102455825B (en) Multimedia brief report indication system and method
CN103096107B (en) Three-dimensional display system and control method thereof
CN105631883B (en) A kind of method and apparatus of target area in determining image
CN209659477U (en) Projected image monitoring system
CN103024353A (en) Playing method and device for multi-screen compressed image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160330

RJ01 Rejection of invention patent application after publication