CN116524155A - MR fish tank interaction system and method supporting manual creation - Google Patents

MR fish tank interaction system and method supporting manual creation

Info

Publication number
CN116524155A
CN116524155A
Authority
CN
China
Prior art keywords: user, fish tank, mixed reality, dimensional, virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310384064.5A
Other languages
Chinese (zh)
Inventor
杨承磊
任铭传
盖伟
吕高荣
王馨宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202310384064.5A priority Critical patent/CN116524155A/en
Publication of CN116524155A publication Critical patent/CN116524155A/en
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A 40/80 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A 40/81 Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an MR fish tank interaction system and method supporting manual creation, comprising: capturing color and contour information selected by a user in real time and mapping it to corresponding two-dimensional and three-dimensional virtual images; stereoscopically displaying the virtual image designed by the user onto the front glass of the fish tank or into the interior space of the fish tank using a time-division stereoscopic rendering method based on binocular parallax, generating a mixed reality experience environment; obtaining a physical figure matching the virtual image designed by the user; receiving sensor data from the physical figure after modification by the user, and interactively controlling the virtual image in the mixed reality fish tank based on the sensor data; and controlling the virtual image associated with the user in the mixed reality fish tank based on the user's real actions or speech, so that the virtual image performs actions matching those actions or speech, and different users interact across different mixed reality fish tanks through their avatars.

Description

MR fish tank interaction system and method supporting manual creation
Technical Field
The disclosure belongs to the technical field of virtual reality, and particularly relates to an MR fish tank interaction system and method supporting manual creation.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Digital technology has gradually become part of people's daily lives and is widely applied in fields such as children's education, opening up new possibilities for content creation. With the development of VR (Virtual Reality) technology, research has emerged that supports users in designing virtual content and creating virtual scenes. However, most virtual content production requires the user to wear an expensive and heavy display device such as an HMD (Head-Mounted Display), which easily causes fatigue and discomfort.
MR (Mixed Reality) technology that does not depend on a head-mounted display can retain the advantages of a virtual environment while sparing children the burden of wearing a device. The fish tank, a common ornamental device, has a simple structure and low cost, and its interior offers a broad three-dimensional space; combined with MR technology, it can be built into a natural mixed reality interaction environment, providing a suitable platform for avatar creation. However, using digital technologies such as VR and MR requires a certain technical background, and the created content and designed images exist only in the virtual environment, which is very challenging for children.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides an MR fish tank interaction system and method supporting manual creation. The scheme allows a user to design a virtual image by touch and voice, and uses a binocular-parallax-based stereoscopic out-of-screen display technology to provide a mixed reality environment carried by a real fish tank, increasing the realism of user interaction. The scheme also supports materializing the virtual image, allowing the user to turn the designed image into a tangible, touchable physical object through handicraft, which exercises the coordinated use of children's hands, eyes and brain and promotes the cultivation and development of creativity. Moreover, the materialized image is given interactivity: the scheme supports the user in modifying the manufactured object by adding a sensor module, turning it from an ordinary physical object into an interaction tool. Finally, the scheme allows the user to endow the avatar with action and speech characteristics, travel between different MR fish tanks in avatar form, and communicate through the avatar's body movements or speech.
According to a first aspect of the embodiments of the present disclosure, there is provided an MR fish tank interaction system supporting manual creation, comprising:
a virtual image design module configured to capture color and contour information selected by a user in real time and map it to corresponding two-dimensional and three-dimensional virtual images;
a stereoscopic out-of-screen display module configured to stereoscopically display the virtual image designed by the user onto the front glass of the fish tank or into the interior space of the fish tank using a time-division stereoscopic rendering method based on binocular parallax, generating a mixed reality experience environment;
a virtual image materialization module configured to obtain a physical figure matching the virtual image designed by the user;
a physical figure interaction module configured to receive sensor data from the physical figure after modification by the user and to interactively control the virtual image in the mixed reality fish tank based on the sensor data, the modification of the physical figure being specifically: adding a gyroscope and an acceleration sensor to the physical figure the user helped make;
and a virtual avatar interaction module configured to control the virtual image associated with the user in the mixed reality fish tank based on the user's real actions or speech, so that the virtual image performs actions matching those actions or speech, and different users interact across different mixed reality fish tanks through their avatars.
Further, the mixed reality fish tank comprises a fish tank body, a touch frame on the front glass of the fish tank, and a projector; the user's touch input is captured through the infrared touch frame, and the projector displays the virtual picture on a liquid-crystal light-control film on the rear glass of the fish tank; the mixed reality fish tank, together with stereoscopic glasses worn by the user, realizes a mixed reality environment based on the real fish tank; and the physical figure is obtained by hand-crafting with user participation or by 3D printing.
Further, the system also comprises a natural interaction module configured to receive the user's touch or voice signals based on infrared touch and intelligent speech technologies, and to control the virtual image in the fish tank based on those signals and pre-associated control commands.
Further, the natural interaction module comprises a touch frame and a microphone array arranged on the front glass of the mixed reality fish tank.
Further, the virtual avatar interaction module comprises a motion capture device which, combined with the microphone array of the natural interaction module, acquires the user's actions or speech in real time and controls the user's virtual image based on preset control commands associated with those actions or speech; the user's virtual image can be displayed in different mixed reality fish tanks.
Furthermore, the system is pre-built with a library of two-dimensional virtual image contours and a correspondingly arranged library of three-dimensional virtual images; using the touch frame on the front glass of the mixed reality fish tank, the user can select different virtual image contours and configure their colors as desired, thereby designing the virtual image.
Further, the interactive control of the virtual image in the mixed reality fish tank based on the sensor data is specifically: acquiring acceleration sensor data and gyroscope data in real time; when the acquired acceleration value exceeds a preset threshold, considering that the user has picked up a physical figure, thereby triggering the display of the virtual image corresponding to that physical figure in the mixed reality fish tank; and mapping the acceleration values from the acceleration sensor and the rotation angle values obtained in real time from the gyroscope onto the avatar in the mixed reality fish tank, so that the avatar moves synchronously with the physical figure.
According to a second aspect of the embodiments of the present disclosure, there is provided an MR fish tank interaction method supporting manual creation, comprising:
capturing color and contour information selected by a user in real time and mapping it to corresponding two-dimensional and three-dimensional virtual images;
stereoscopically displaying the virtual image designed by the user onto the front glass of the fish tank or into the interior space of the fish tank using a time-division stereoscopic rendering method based on binocular parallax, generating a mixed reality experience environment;
obtaining a physical figure matching the virtual image designed by the user;
receiving sensor data from the physical figure after modification by the user, and interactively controlling the virtual image in the mixed reality fish tank based on the sensor data, the modification of the physical figure being specifically: adding a gyroscope and an acceleration sensor to the physical figure the user helped make;
and controlling the virtual image associated with the user in the mixed reality fish tank based on the user's real actions or speech, so that the virtual image performs actions matching those actions or speech, and different users interact across different mixed reality fish tanks through their avatars.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device comprising a memory, a processor, and a computer program stored and runnable on the memory, wherein the processor, when executing the program, implements the above MR fish tank interaction method supporting manual creation.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above MR fish tank interaction method supporting manual creation.
Compared with the prior art, the beneficial effects of the present disclosure are:
(1) The present disclosure provides an MR fish tank interaction system and method supporting manual creation. The scheme allows a user to design a virtual image by touch and voice, a natural interaction style that is easy to learn and use, and it uses a binocular-parallax-based stereoscopic out-of-screen display technology to provide a mixed reality environment carried by a real fish tank, increasing the realism of user interaction. The scheme also supports materializing the virtual image, allowing the user to turn the designed image into a tangible, touchable physical object through handicraft, which exercises the coordinated use of children's hands, eyes and brain and promotes the cultivation and development of creativity.
(2) In the disclosed scheme, the virtual image created by the user can be materialized and given interactivity: the scheme supports the user in modifying the manufactured object by adding a sensor module, turning it from an ordinary physical object into an interaction tool. The scheme further allows the user to endow the avatar with action and speech characteristics, travel between different MR fish tanks in avatar form, and communicate through the avatar's body movements or speech, providing the user with a stronger and more striking sense of realism.
(3) The scheme lets users personally create physical toys and models, embodying and materializing the virtual image; this satisfies users' curiosity, cultivates practical ability and a spirit of innovation, and keeps the process fun. Modifying the user-made physical figure also improves the interaction experience in a more natural and flexible way.
(4) The scheme is simple and fun to operate, suits settings such as kindergartens, science museums, nursing homes and psychological rehabilitation, and can serve purposes such as course learning, science popularization education and rehabilitation.
Additional aspects of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain the disclosure without unduly limiting it.
FIG. 1 is a flow chart of the MR fish tank interaction method supporting manual creation according to an embodiment of the present disclosure;
FIG. 2 is a block diagram of the hardware composition of the mixed reality environment based on the real fish tank according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the stereoscopic out-of-screen vision principle according to embodiments of the present disclosure;
FIG. 4 is a schematic view of the avatar design prototypes according to an embodiment of the present disclosure;
FIG. 5 is a schematic illustration of the palette and filling interface according to embodiments of the present disclosure;
FIG. 6 (a) is a schematic diagram of the region division effect of a two-dimensional virtual contour according to an embodiment of the present disclosure;
FIG. 6 (b) is a schematic diagram of the region division method for a three-dimensional virtual model according to an embodiment of the present disclosure;
FIGS. 7 (a) to 7 (f) are schematic views of the process of making and modifying the physical toy according to embodiments of the present disclosure;
FIG. 8 (a) is an exemplary diagram of a physical toy according to an embodiment of the present disclosure;
FIG. 8 (b) is an exemplary diagram of a PLA-material physical 3D model according to embodiments of the present disclosure;
FIG. 8 (c) is an exemplary diagram of an edible-chocolate physical 3D model according to embodiments of the present disclosure;
FIG. 9 is a schematic diagram of the sensor data communication protocol parsing according to embodiments of the present disclosure;
FIG. 10 is a flowchart of interacting with the virtual image through the physical figure according to an embodiment of the present disclosure;
FIG. 11 is a schematic view of the sensor coordinate system orientation according to embodiments of the present disclosure.
Detailed Description
The disclosure is further described below with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments in accordance with the present disclosure. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
Embodiment one:
the aim of the embodiment is to provide an MR fish tank interaction system supporting manual creation.
The concept behind the scheme of this embodiment arises as follows. Tangible, handicraft-based content creation lets children obtain a physical image (such as a toy or a three-dimensional model) while exercising the coordinated use of hands, eyes and brain, and offers another form that content creation can take. Designing virtual content with digital technology and producing the physical figure by handicraft together stimulate children's creative potential, which matters for the development of creativity and for learning. Tangible interaction, which digitizes physical objects and gives people the ability to manipulate digital virtual information, has already been applied in many virtual reality studies. Compared with a purely visual representation, the tactile and proprioceptive feel of a tangible object can strengthen the user's perception and understanding of three-dimensional shape, providing a more realistic and immersive experience. Tangible interaction can connect virtual and real content, deftly combining these two outwardly very different modes of creation and establishing a natural, playful link between virtual and physical content, thereby developing children's creativity and imagination to the fullest.
Based on the above conception, this embodiment provides an MR fish tank interaction system supporting manual creation, comprising:
a virtual image design module configured to capture color and contour information selected by a user in real time and map it to corresponding two-dimensional and three-dimensional virtual images;
a stereoscopic out-of-screen display module configured to stereoscopically display the virtual image designed by the user onto the front glass of the fish tank or into the interior space of the fish tank using a time-division stereoscopic rendering method based on binocular parallax, generating a mixed reality experience environment;
a virtual image materialization module configured to obtain a physical figure matching the virtual image designed by the user, the physical figure being obtained by hand-crafting with user participation or by 3D printing;
a physical figure interaction module configured to receive sensor data from the physical figure after modification by the user and to interactively control the virtual image in the mixed reality fish tank based on the sensor data, the modification of the physical figure being specifically: adding a gyroscope and an acceleration sensor to the physical figure the user helped make;
and a virtual avatar interaction module configured to control the virtual image associated with the user in the mixed reality fish tank based on the user's real actions or speech, so that the virtual image performs actions matching those actions or speech, and different users interact across different mixed reality fish tanks through their avatars.
Specifically, for ease of understanding, the complete workflow of the system is detailed below.
Step 1: support creation of two-dimensional and three-dimensional virtual images. Taking the color design of the avatar as an example, several two-dimensional avatar contours are provided; the user mixes colors through touch operations such as tapping and fills the contours with the corresponding colors, generating the two-dimensional and three-dimensional avatar.
In this embodiment, an undersea world is taken as the example scene, and the marine creatures fish, dolphin, starfish and turtle are chosen as the prototypes for virtual image design. As shown in FIG. 4, this embodiment provides a two-dimensional design prototype for each creature, comprising a two-dimensional virtual contour and a three-dimensional virtual model.
This embodiment adopts touch as the main interaction mode. The user touches the front glass of the fish tank, around which an infrared touch frame is mounted to detect and locate the touch position. The infrared touch frame has a USB interface and is recognized as a mouse device once connected to the host; when the user taps the surface of the fish tank, the host obtains and responds to the coordinates of the tap, realizing touch interaction.
In the avatar creation process, this embodiment takes color design as the running example; mixing and filling colors cultivates the user's autonomy and creativity. The user mixes colors by preference and fills them into the two-dimensional virtual contour to obtain a two-dimensional virtual creature; the color data of the two-dimensional image is then mapped onto the three-dimensional model through a mapping matrix, generating the three-dimensional avatar.
The specific steps of color mixing and color filling by a user through touch are as follows:
step 101, a color matching stage. The scheme described in this embodiment performs modulation of colors based on an RGB color model. As shown in fig. 5, the palette contains four color panels, respectively showing the colors of R, G, B of three single colors and the final color that is presented after they are mixed. An arc-shaped sliding bar is arranged around the monochromatic color plate, and a user can respectively adjust the numerical value of R, G, B by dragging a circular sliding block of the sliding bar. Each time an adjustment is performed, the user can view the modulated color effects from both the monochrome color plate and the hybrid color plate in real time. After the color modulation is completed, the user needs to click on the mixed color panel, thereby locking the modulated color.
Step 102, color filling stage. In order to increase the interest of image creation, the scheme of the embodiment divides the two-dimensional virtual outline into a plurality of areas, and allows children to fill colors in the areas. For example: the two-dimensional contour of the turtle is divided into six areas (see fig. 6 (a)). After the specific color is modulated, a certain area in the outline is directly touched, the corresponding color is filled in the area, and the design of the two-dimensional virtual living things is completed until all the areas are filled with the color.
To meet the consistency of interaction, the three-dimensional virtual model of each living being needs to be divided into the same number of regions. Different from the two-dimensional virtual contour, the three-dimensional model is divided by only cutting the original materials on the surface of the model in different areas, then giving new materials one by one, and the whole model is not required to be cut. For example: as shown in fig. 6 (b), the back material of the three-dimensional turtle model is cut.
The key to generating the three-dimensional virtual creature is the color mapping between the two-dimensional and three-dimensional images, carried out in the following steps (a sketch follows the list):
Step 103, establish and store a mapping matrix C between the regions of the two-dimensional image and those of the three-dimensional image;
Step 104, when the user selects a two-dimensional creature contour, call up the corresponding three-dimensional model and render it transparent;
Step 105, when the user fills a region of the two-dimensional contour, read and record the color attribute color of that region;
Step 106, read the mapping matrix C, find the corresponding region of the three-dimensional image, and assign color to the color attribute of that region's material, updating the color data in real time;
Step 107, when the user taps the button labeled as completing the color filling, save the two-dimensional image as a JPG file, display the three-dimensional image with the completed color mapping, and save it as an FBX file.
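The per-region bookkeeping of steps 103 to 106 can be pictured with a short sketch. The following Python fragment is illustrative only and is not the patent's implementation; the region indices, the mapping table C and the Material stand-in are assumptions.

```python
# Illustrative sketch of steps 103-106 (not the patent's code): the region
# indices, the mapping table C and the Material stand-in are assumptions.

class Material:
    """Stand-in for a 3D model's material slot with a color attribute."""
    def __init__(self):
        self.color = (1.0, 1.0, 1.0)   # RGB in [0, 1], default white

# step 103: mapping matrix C, here a table from 2D contour region -> 3D material region
C = {0: 2, 1: 0, 2: 1, 3: 5, 4: 3, 5: 4}

# one material per divided surface region of the three-dimensional turtle model
model_materials = [Material() for _ in range(6)]

def fill_region(region_2d: int, color: tuple) -> None:
    """Steps 105-106: record the color filled into a 2D region and push it
    to the corresponding region of the 3D model in real time."""
    region_3d = C[region_2d]                  # look up the mapping matrix
    model_materials[region_3d].color = color  # assign color to the material

fill_region(3, (0.2, 0.6, 0.3))   # the user fills region 3 with a mixed green
print(model_materials[5].color)   # the mapped 3D region now carries that color
```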
Step 2: provide various interactive content between the user and the three-dimensional virtual image, mainly including feeding the avatar by touch and talking directly with the avatar by voice.
The specific steps of feeding the avatar and holding a dialogue with it are as follows:
Step 201, provide a menu for the user to choose from, containing two buttons labeled with the interactive content names, feeding and dialogue;
Step 202, after the user taps a button, display the corresponding interactive content scene;
Step 203, if the user selects feeding, call up the three-dimensional avatar and play an animation of it eating the feed, realizing the feeding interaction;
Step 204, if the user selects dialogue and speaks a keyword such as "hello" facing the fish tank, play the answer content associated with the three-dimensional image, realizing the dialogue interaction.
The dialogue interaction is implemented as follows (a sketch follows the list):
(1) Build a voice database containing keywords such as "hello".
(2) Build a mapping array A from keywords to answers, where each keyword corresponds to a segment of answer audio.
(3) After the user selects dialogue interaction, listen through a microphone and detect the occurrence of keywords in real time.
(4) When a keyword is detected, look it up in the mapping array A, and call up and play the corresponding answer audio.
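A minimal sketch of steps (1) to (4) follows, assuming the speech recognizer has already turned the utterance into text; the file paths and the play() stub are hypothetical placeholders, not part of the patent.

```python
# Minimal sketch of the keyword-to-answer lookup; recognition is abstracted
# to a plain string, and the audio paths and play() are hypothetical.

answer_audio = {                       # mapping array A: keyword -> answer audio
    "hello": "answers/hello.wav",
    "goodbye": "answers/goodbye.wav",
}

def play(clip: str) -> None:
    print(f"playing {clip}")           # stand-in for the actual audio player

def on_utterance(text: str) -> None:
    """Step (4): scan a recognized utterance for keywords and answer."""
    for keyword, clip in answer_audio.items():
        if keyword in text:
            play(clip)                 # the avatar speaks its answer
            break

on_utterance("hello little dolphin")   # -> playing answers/hello.wav
```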
Step 3: build a mixed reality environment based on the real fish tank. Using the binocular-parallax-based stereoscopic out-of-screen display technology, the virtual picture that the projector casts onto the rear glass of the fish tank is made to appear at the front glass or inside the tank, producing a mixed reality visual experience and increasing the realism of user interaction.
This embodiment uses the fish tank as the platform on which the user designs and interacts with the avatar. The virtual picture is projected onto the rear glass, at a distance from the front glass that the user touches, which would greatly reduce the realism of touch interaction. This embodiment therefore uses the binocular-parallax-based stereoscopic out-of-screen display technology to pop the projected virtual content out to the front glass or into the interior of the tank; fused with the scene inside the real fish tank, this produces a natural mixed reality environment and gives the user a what-you-see-is-what-you-get touch experience and strong stereoscopic vision.
As shown in FIG. 2, the mixed reality environment based on the real fish tank comprises a fish tank, an infrared touch frame, a DLP projector, and stereoscopic glasses. The infrared touch frame is mounted on the front glass for the user's touch input; a liquid-crystal light-control film is affixed to the rear glass to display the virtual picture cast by the projector; and the user wears the stereoscopic glasses to view and experience the mixed reality visual effect.
Building a mixed reality environment with the fish tank as carrier requires the virtual picture projected on the rear glass to be displayed stereoscopically out of the screen at a constant out-of-screen distance. This relies mainly on the binocular-parallax-based stereoscopic display technique, which uses the stereoscopic vision of the human eyes to pop a virtual object out to the proper position while keeping the out-of-screen distance constant, making natural user interaction possible. The principle of stereoscopic out-of-screen vision is shown in FIG. 3. If the horizontal position of a virtual object in the left-eye image is L and its horizontal position in the right-eye image is R, the horizontal parallax D with which the left and right eyes view the object is:
D = R - L
When R < L, D is negative, i.e., the virtual object has negative parallax in the left- and right-eye images.
Assuming that E is the interpupillary distance of the human eyes, C is the distance between the eyes and the projection screen, and H is the distance by which the virtual object point appears in front of the screen, similar triangles give (with D negative for an out-of-screen point):
H = -D·C / (E - D)
The user creates the avatar at a fixed position in front of the fish tank, so the distance C between the eyes and the projection screen is fixed; in this embodiment C = 1.2 m. The human interpupillary distance is approximately 60 to 65 mm, and this embodiment takes E = 0.06 m. The size of H is therefore related only to D; that is, the out-of-screen distance of a virtual object depends only on its parallax in the left- and right-eye images.
The parallax D of a virtual object in the left- and right-eye images is related to the object's distance from the virtual camera and to its own size: the closer the object is to the virtual camera and the larger it is, the larger the parallax, and hence the larger the out-of-screen distance.
Based on this principle, the scheme pops each virtual object out to its proper position through a preprocessing step that adjusts parallax, building a mixed reality environment based on the real fish tank and bringing the user a brand-new perceptual and interactive experience. The virtual objects that need stereoscopic out-of-screen display fall into two classes. One class is two-dimensional plane elements (such as menus, buttons, the palette and the two-dimensional contours), which must appear at the front glass of the fish tank; their out-of-screen distance is the perpendicular distance between the front and rear glass, H = 0.4 m in this embodiment. The other class is three-dimensional models (such as the authored three-dimensional avatar), which must appear inside the fish tank, taken as H = 0.2 m in this embodiment. A numerical sketch follows.
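The following Python fragment only rearranges the relationship H = -D·C / (E - D) to compute the parallax a virtual object must be given; the constants are the ones stated in this embodiment, and the function name is illustrative.

```python
# Sketch of the parallax preprocessing, using this embodiment's constants;
# only the relationship H = -D*C / (E - D) is used, rearranged for D.

E = 0.06   # interpupillary distance in metres
C = 1.2    # eye-to-screen viewing distance in metres

def parallax_for_out_of_screen(h: float) -> float:
    """Return the (negative) horizontal parallax D that makes a virtual
    point appear h metres in front of the rear-glass projection screen."""
    return -h * E / (C - h)

d_ui = parallax_for_out_of_screen(0.4)       # 2D UI at the front glass: D ≈ -0.030 m
d_avatar = parallax_for_out_of_screen(0.2)   # 3D avatar inside the tank: D ≈ -0.012 m
print(d_ui, d_avatar)
```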
Step 4: support materializing the virtual image, allowing the user to turn the designed image into a tangible, touchable physical object through cloth craft, three-dimensional printing and other means.
This embodiment provides a service center offering handicraft-based physical creation services. The virtual image the user designs in the mixed reality environment is stored on the computer as a plane picture and a three-dimensional model; the service center staff provide these to the user and guide the user in completing the physical figure with a suitable handicraft method or with three-dimensional printing equipment. During creation, the user makes or prints from the two-dimensional or three-dimensional information of the virtual image, materializing the avatar into a tangible, touchable physical object.
Taking the creation of a physical toy as an example, this embodiment selects non-woven fabric as the toy material in view of children's character and cognitive traits, and uses simple tools such as glue, cotton and scissors to turn the virtual image into a touchable, tangible toy through manual cloth craft, cultivating the user's creativity while keeping the process fun.
As shown in FIGS. 7 (a)-(e), taking the dolphin figure as an example, making the physical toy comprises the following steps:
Step 401, prepare non-woven fabric of the corresponding colors and the making tools according to the color information in the two-dimensional virtual dolphin JPG file;
Step 402, cut the non-woven fabric into the required shapes with scissors according to the contour information in the two-dimensional virtual dolphin JPG file;
Step 403, glue together the edges of the cloth pieces representing the dolphin's front and back body, and decorate the doll's appearance (for example, attach eyes and a belly);
Step 404, leave a gap when gluing the front and back body cloth and attach a hook-and-loop fastener there, making later modification convenient;
Step 405, stuff cotton into the dolphin doll's body so that it is soft to the touch.
Handicrafts for children come in many kinds. Besides physical toys (see FIG. 8 (a)), this embodiment also provides printing and production services for physical 2D color drawings, PLA-material physical 3D models (see FIG. 8 (b)) and edible-chocolate physical 3D models (see FIG. 8 (c)), so that children can create freely according to their own preferences and give full play to their imagination and creativity. Unlike making the physical toy, these three production modes require reading the FBX file that stores the virtual image's three-dimensional information, and the equipment involved, such as color inkjet printers and 3D food printers, is complex to operate and must be used under the guidance of teachers, parents or service center staff.
Step 5: support the user in modifying the finished physical figure. By adding a gyroscope and an acceleration sensor, the figure changes from an ordinary physical object into an interaction tool with which the user can trigger the appearance of the virtual image in the mixed reality scene and keep the virtual and physical figures rotating in sync.
As shown in FIG. 7 (f), in this embodiment a gyroscope and an acceleration sensor are added to the physical figure; through real-time data communication between the sensor and the computer, the physical figure gains the ability to manipulate the virtual image, turning it from an ordinary physical object into an interaction tool.
To enable communication between the sensor and the computer, this embodiment first provisions the sensor onto the network so that it joins the computer's WIFI and sends data in real time. Meanwhile, a server is set up on the computer side to receive the data sent by the sensor using UDP, acquiring the data in real time and completing the data communication.
The parsing of the sensor data communication protocol is shown in FIG. 9. A data packet contains 17 fields, which are split apart. The first and last fields each carry two items of data information; every other field carries exactly one. A receiver sketch follows.
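The sketch below shows a computer-side UDP receiver in the spirit described above; the port number, the field delimiter and the exact placement of the doubled-up data items are assumptions, since the text only fixes the 17-field layout.

```python
# Hedged sketch of the computer-side UDP server; port, delimiter and field
# layout are assumptions (the patent only specifies 17 fields per packet,
# with the first and last fields each carrying two data items).

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 8888))    # sensors send datagrams over the host's WIFI

def parse_packet(raw: bytes) -> list:
    """Divide one datagram into its 17 fields."""
    fields = raw.decode("ascii").split(",")
    if len(fields) != 17:
        raise ValueError("malformed packet")
    return fields

while True:
    datagram, sender = sock.recvfrom(1024)
    try:
        fields = parse_packet(datagram)
    except (ValueError, UnicodeDecodeError):
        continue                 # step 501: filter out malformed data
    print(sender, fields[0], fields[16])
```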
This embodiment supports two kinds of interaction through the physical figure: picking up the physical toy triggers the appearance of the corresponding virtual image in the mixed reality fish tank, and rotating the physical toy makes the virtual image rotate synchronously with it.
As shown in FIG. 10, based on the data communication between the sensor and the computer, the specific steps of using the physical figure to interact with the virtual image are as follows (a sketch follows the list):
Step 501, filter the captured sensor data in real time according to the sensor data protocol;
Step 502, extract three groups of data in real time: the sensor number, the acceleration along the X/Y/Z axes, and the rotation angle around the X/Y/Z axes;
Step 503, set an acceleration threshold; when the sensor's acceleration is detected to exceed the threshold, the user is considered to have picked up a physical figure, triggering the appearance of the corresponding virtual image in the MR fish tank;
Step 504, process the rotation angle values captured in real time and map them onto the virtual image, so that the virtual and physical figures rotate in sync.
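The following Python fragment sketches steps 501 to 504 under stated assumptions: sensor frames arrive as dictionaries carrying per-axis acceleration and rotation, and the threshold value and the Avatar stand-in are illustrative rather than taken from the patent.

```python
# Sketch of steps 501-504; threshold value and Avatar class are assumptions.

import math

PICKUP_THRESHOLD = 1.5           # assumed acceleration threshold, in g

class Avatar:
    def __init__(self):
        self.visible = False
        self.rotation = (0.0, 0.0, 0.0)

avatars = {1: Avatar()}          # sensor number -> avatar in the MR fish tank

def on_sensor_frame(frame: dict) -> None:
    ax, ay, az = frame["accel"]            # step 502: acceleration along X/Y/Z
    avatar = avatars[frame["sensor_id"]]
    if math.sqrt(ax * ax + ay * ay + az * az) > PICKUP_THRESHOLD:
        avatar.visible = True              # step 503: figure picked up -> avatar appears
    avatar.rotation = frame["rotation"]    # step 504: synchronous rotation (axis
                                           # conversion is handled separately, see below)

on_sensor_frame({"sensor_id": 1, "accel": (0.3, 1.6, 0.4), "rotation": (10.0, 0.0, 30.0)})
```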
The angle data of the virtual-physical rotation is mapped as follows (a sketch follows the list):
(1) Establish the sensor's three-dimensional coordinate system. With the sensor placed as in FIG. 11, up is the X axis, left is the Y axis, and the direction out of the sensor module is the Z axis. The rotation direction follows the right-hand rule: point the right thumb along an axis, and the curl of the four fingers gives the direction of rotation about that axis.
(2) After seating the sensor in the physical figure, define the figure's coordinate-system orientation according to the sensor's.
(3) Construct an algorithm M that converts angles between the physical figure's and the virtual image's coordinate systems.
(4) The server acquires the angle data Angle = (Ax, Ay, Az) in real time, passes it to the algorithm M, and obtains the angle data Angle' in the avatar's coordinate system.
(5) Assign the value of Angle' to the avatar.
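An illustrative sketch of one possible algorithm M follows. The real axis correspondence depends on how the sensor is seated in the physical figure; the permutation and signs below are assumptions for a mounting like FIG. 11 (sensor X up, Y left, Z out of the module), not the patent's actual conversion.

```python
# Illustrative axis-conversion algorithm M; the permutation and signs are
# assumptions for one particular sensor seating, not taken from the patent.

def M(angle):
    """Convert Angle = (Ax, Ay, Az) from the sensor's coordinate system
    to Angle' in the avatar's coordinate system."""
    ax, ay, az = angle
    # assumed seating: sensor X (up) -> avatar Y, sensor Y (left) -> -avatar X,
    # sensor Z (outward) -> avatar Z
    return (-ay, ax, az)

angle_prime = M((10.0, -5.0, 30.0))   # steps (4)-(5): assign Angle' to the avatar
print(angle_prime)                    # (5.0, 10.0, 30.0)
```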
Step 6: support user interaction in avatar form. Through Kinect interaction and network communication technology, the user's real actions, speech and other characteristics are mapped in real time onto the virtual image designed in the mixed reality environment, so that multiple users can travel between different MR fish tanks as avatars and communicate with one another through the avatars' body movements or speech.
This embodiment describes avatar interaction between two users as the example. Number the users U1 and U2, their MR fish tanks F1 and F2, and the corresponding hosts PC1 and PC2.
U1 travels from F1 to F2 in avatar form and communicates with U2 through the avatar's body movements or speech; the specific implementation steps are as follows (a sketch of the hand-off follows the list):
Step 601, establish network-based data communication between the two hosts;
Step 602, the two users each design a virtual image in front of their own fish tank to serve as an avatar;
Step 603, after the avatar on F1 is designed, transmit its color and contour information from PC1 to PC2 through the network communication protocol, and play an animation of the three-dimensional image swimming out of the fish tank;
Step 604, upon receiving the data, PC2 calls up the corresponding three-dimensional model from the model library according to the contour information, maps the color information onto the model using the mapping matrix, and simultaneously plays an animation of the model swimming into the fish tank, realizing the effect of the avatar travelling from F1 to F2;
Step 605, capture U1's skeletal joint information in real time with the Kinect and judge U1's actions from the relative positions of the joints; acquire U1's voice in real time through PC1's microphone array, and transmit the action and voice information to PC2 through the network communication protocol;
Step 606, upon receiving the action and voice information, PC2 plays U1's voice and triggers the three-dimensional model to play the animation corresponding to U1's action, achieving the effect of U1 communicating with U2 through the avatar's body movements and speech.
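A minimal sketch of the step-603 hand-off follows, assuming a simple JSON-over-TCP message; the message schema, port and host address are illustrative only, since the patent does not fix a wire format.

```python
# Minimal sketch of the PC1 -> PC2 avatar transfer (step 603); the schema,
# port and address are assumptions for illustration.

import json
import socket

def send_avatar(pc2_host: str, contour_id: int, colors: list) -> None:
    """PC1 side: ship the avatar's contour index and per-region colors,
    from which PC2 rebuilds the three-dimensional model (step 604)."""
    msg = json.dumps({"type": "avatar_transfer",
                      "contour": contour_id,   # selects the model in PC2's library
                      "colors": colors}).encode("utf-8")
    with socket.create_connection((pc2_host, 9000)) as conn:  # assumed port
        conn.sendall(msg)

# example: the dolphin contour with six filled regions travels to PC2
# send_avatar("192.168.1.12", 1, [[0.2, 0.6, 0.9]] * 6)
```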
Embodiment two:
The purpose of this embodiment is to provide an MR fish tank interaction method supporting manual creation.
An MR fish tank interaction method supporting manual creation comprises:
capturing color and contour information selected by a user in real time and mapping it to corresponding two-dimensional and three-dimensional virtual images;
stereoscopically displaying the virtual image designed by the user onto the front glass of the fish tank or into the interior space of the fish tank using a time-division stereoscopic rendering method based on binocular parallax, generating a mixed reality experience environment;
obtaining a physical figure matching the virtual image designed by the user, the physical figure being obtained by hand-crafting with user participation or by 3D printing;
receiving sensor data from the physical figure after modification by the user, and interactively controlling the virtual image in the mixed reality fish tank based on the sensor data, the modification of the physical figure being specifically: adding a gyroscope and an acceleration sensor to the physical figure the user helped make;
and controlling the virtual image associated with the user in the mixed reality fish tank based on the user's real actions or speech, so that the virtual image performs actions matching those actions or speech, and different users interact across different mixed reality fish tanks through their avatars.
The specific technical details of the method in this embodiment have been described in Embodiment one and are not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with these embodiments can be implemented as electronic hardware or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present disclosure.
The MR fish tank interaction system and method supporting manual creation provided above are realizable and have broad application prospects.
The foregoing is only a description of preferred embodiments of the present disclosure and is not intended to limit it; those skilled in the art may make various modifications and changes to the present disclosure. Any modification, equivalent replacement or improvement made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (10)

1. An MR fish tank interaction system supporting manual creation, comprising:
a virtual image design module configured to capture color and contour information selected by a user in real time and map it to corresponding two-dimensional and three-dimensional virtual images;
a stereoscopic out-of-screen display module configured to stereoscopically display the virtual image designed by the user onto the front glass of the fish tank or into the interior space of the fish tank using a time-division stereoscopic rendering method based on binocular parallax, generating a mixed reality experience environment;
a virtual image materialization module configured to obtain a physical figure matching the virtual image designed by the user;
a physical figure interaction module configured to receive sensor data from the physical figure after modification by the user and to interactively control the virtual image in the mixed reality fish tank based on the sensor data, the modification of the physical figure being specifically: adding a gyroscope and an acceleration sensor to the physical figure the user helped make;
and a virtual avatar interaction module configured to control the virtual image associated with the user in the mixed reality fish tank based on the user's real actions or speech, so that the virtual image performs actions matching those actions or speech, and different users interact across different mixed reality fish tanks through their avatars.
2. The MR fish tank interaction system supporting manual creation according to claim 1, wherein the mixed reality fish tank comprises a fish tank body, a touch frame on the front glass of the fish tank, and a projector, the user's touch input being captured through the infrared touch frame and the projector displaying a virtual picture on a liquid-crystal light-control film on the rear glass of the fish tank; the mixed reality fish tank, together with stereoscopic glasses worn by the user, realizes a mixed reality environment based on the real fish tank; and the physical figure is obtained by hand-crafting with user participation or by 3D printing.
3. The MR fish tank interaction system supporting manual creation according to claim 1, further comprising a natural interaction module configured to receive the user's touch or voice signals based on infrared touch and intelligent speech technologies, and to control the virtual image in the fish tank based on those signals and pre-associated control commands.
4. The MR fish tank interaction system supporting manual creation according to claim 1, wherein the natural interaction module comprises a touch frame and a microphone array arranged on the front glass of the mixed reality fish tank.
5. The MR fish tank interaction system supporting manual creation according to claim 1, wherein the virtual avatar interaction module comprises a motion capture device which, combined with the microphone array of the natural interaction module, acquires the user's actions or speech in real time and controls the user's virtual image based on preset control commands associated with those actions or speech; and the user's virtual image can be displayed in different mixed reality fish tanks.
6. The MR fish tank interaction system supporting manual creation according to claim 1, wherein the system is pre-built with a library of two-dimensional virtual image contours and a correspondingly arranged library of three-dimensional virtual images, and the user can select different virtual image contours and configure their colors as desired using the touch frame on the front glass of the mixed reality fish tank, thereby designing the virtual image.
7. The MR fish tank interaction system supporting manual creation according to claim 1, wherein the interactive control of the virtual image in the mixed reality fish tank based on the sensor data is specifically: acquiring acceleration sensor data and gyroscope data in real time, and when the acquired acceleration value exceeds a preset threshold, considering that the user has picked up a physical figure, thereby triggering the display of the virtual image corresponding to that physical figure in the mixed reality fish tank; and mapping the acceleration values from the acceleration sensor and the rotation angle values obtained in real time from the gyroscope onto the avatar in the mixed reality fish tank, so that the avatar moves synchronously with the physical figure.
8. An MR fish tank interaction method supporting manual creation, comprising:
capturing color and contour information selected by a user in real time and mapping it to corresponding two-dimensional and three-dimensional virtual images;
stereoscopically displaying the virtual image designed by the user onto the front glass of the fish tank or into the interior space of the fish tank using a time-division stereoscopic rendering method based on binocular parallax, generating a mixed reality experience environment;
obtaining a physical figure matching the virtual image designed by the user, the physical figure being obtained by hand-crafting with user participation or by 3D printing;
receiving sensor data from the physical figure after modification by the user, and interactively controlling the virtual image in the mixed reality fish tank based on the sensor data, the modification of the physical figure being specifically: adding a gyroscope and an acceleration sensor to the physical figure the user helped make;
and controlling the virtual image associated with the user in the mixed reality fish tank based on the user's real actions or speech, so that the virtual image performs actions matching those actions or speech, and different users interact across different mixed reality fish tanks through their avatars.
9. An electronic device comprising a memory, a processor, and a computer program stored and executable on the memory, wherein the processor, when executing the program, implements the MR fish tank interaction method supporting manual creation as claimed in any one of claims 1-7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the MR fish tank interaction method supporting manual creation as claimed in any one of claims 1-7.
CN202310384064.5A 2023-04-06 2023-04-06 MR fish tank interaction system and method supporting manual creation Pending CN116524155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310384064.5A CN116524155A (en) 2023-04-06 2023-04-06 MR fish tank interaction system and method supporting manual creation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310384064.5A CN116524155A (en) 2023-04-06 2023-04-06 MR fish tank interaction system and method supporting manual creation

Publications (1)

Publication Number Publication Date
CN116524155A true CN116524155A (en) 2023-08-01

Family

ID=87407420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310384064.5A Pending CN116524155A (en) 2023-04-06 2023-04-06 MR fish tank interaction system and method supporting manual creation

Country Status (1)

Country Link
CN (1) CN116524155A (en)

Similar Documents

Publication Publication Date Title
US20240005808A1 (en) Individual viewing in a shared space
Craig Understanding augmented reality: Concepts and applications
Craig et al. Developing virtual reality applications: Foundations of effective design
CN110688005A (en) Mixed reality teaching environment, teacher and teaching aid interaction system and interaction method
WO1993007561A1 (en) Apparatus and method for projection upon a three-dimensional object
CN111324334B (en) Design method for developing virtual reality experience system based on narrative oil painting works
CN108389249A (en) A kind of spaces the VR/AR classroom of multiple compatibility and its construction method
CN107656615A (en) The world is presented in a large amount of digital remotes simultaneously
CN106601043A (en) Multimedia interaction education device and multimedia interaction education method based on augmented reality
KR20160103897A (en) System for augmented reality image display and method for augmented reality image display
CN111862711A (en) Entertainment and leisure learning device based on 5G internet of things virtual reality
CN109828666B (en) Mixed reality interaction system and method based on tangible user interface
Tachi et al. Haptic media construction and utilization of human-harmonized “tangible” information environment
CN111292409A (en) Building design method based on VR technology
CN109032339A (en) A kind of method and system that real-time intelligent body-sensing is synchronous
CN106066692A (en) A kind of interacting toys based on AR technology and construction method
CN113941138A (en) AR interaction control system, device and application
Liu et al. Research on scene fusion and interaction method based on virtual reality technology
CN116524155A (en) MR fish tank interaction system and method supporting manual creation
CN110969237A (en) Man-machine virtual interaction construction method, equipment and medium under view angle of amphoteric relationship
CN111951617A (en) Virtual classroom for special children teaching
JP3939444B2 (en) Video display device
KR101334865B1 (en) The method of helping painting for kids to develope their creativity
Ip et al. Smart ambience games for children with learning difficulties
Figueiredo et al. Storyboards in VR Narratives Planning: How to Create and Evaluate Them

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination