CN109903391A - Method and system for realizing scene interaction - Google Patents

Method and system for realizing scene interaction

Info

Publication number
CN109903391A
CN109903391A
Authority
CN
China
Prior art keywords
scene
prop
teaching prop
teaching
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711302296.2A
Other languages
Chinese (zh)
Inventor
宋卿
葛凯麟
王欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Joy Wisdom Technology (Beijing) Co Ltd
Original Assignee
Joy Wisdom Technology (Beijing) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Joy Wisdom Technology (Beijing) Co Ltd
Priority to CN201711302296.2A
Publication of CN109903391A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for realizing scene interaction, comprising: identifying one or more teaching props in a detection area, the teaching prop types including a scene class, a role class and an object class; establishing and displaying a corresponding virtual scene based on the different classes of teaching props identified; and, after detecting a position change of the one or more teaching props within the detection area, adjusting the virtual scene accordingly, within the corresponding virtual scene, according to the types of the one or more teaching props. Correspondingly, an embodiment of the invention also provides an apparatus. The invention solves the problem in the prior art that fixed scene interaction leads to a poor user experience and high maintenance costs; it shortens the development cycle, reduces development costs, and improves the user experience.

Description

Method and system for realizing scene interaction
Technical field
The invention belongs to the field of information technology, and in particular relates to a method and system for realizing scene interaction.
Background art
In current teaching, physical visual aids such as books, toys and real objects are commonly used. With the continuous advance of educational informatization, electronic devices such as televisions (electronic displays), electronic whiteboards, projectors and interactive all-in-one machines have entered classrooms in large numbers, and more and more course content is presented in digital form. Classroom interaction today is no longer limited to teacher-student interaction; a large number of interactive games and interactive question-and-answer activities are also entering classrooms, providing richer teaching content for students.
However, in the prior art, human-computer interaction in the classroom is still at an early stage. Existing technology can only let students experience fixed scenes, or interact with students through fixed question-and-answer modes. For students, the scenes are monotonous and the experience is poor; for developers, scene development is difficult, and updating a scene may require a large-scale upgrade of teaching software and hardware, so maintenance costs are high and development cycles are long.
Summary of the invention
The present invention provides a method and system for realizing scene interaction, which solves the problem in the prior art that fixed scene interaction leads to a poor user experience and high maintenance costs; it shortens the development cycle, reduces development costs, and improves the user experience.
To achieve the above goals, the present invention provides a method for realizing scene interaction, comprising:
identifying one or more teaching props in a detection area, the teaching prop types including a scene class, a role class and an object class;
establishing and displaying a corresponding virtual scene based on the different classes of teaching props identified;
after detecting a position change of the one or more teaching props within the detection area, adjusting the virtual scene accordingly, within the corresponding virtual scene, according to the types of the one or more teaching props.
Optionally, the teaching prop types further include a story class, in which case establishing and displaying a corresponding virtual scene based on the different classes of teaching props identified comprises:
identifying story information of the story-class teaching prop and, based on the story information, having the different classes of teaching props play animation and/or sound in the virtual scene according to the story plot.
Optionally, when the identified teaching prop type is the role class or the object class and there is one teaching prop, establishing and displaying a corresponding virtual scene based on the different classes of teaching props identified comprises:
establishing a corresponding virtual role or virtual object;
or, when the teaching prop type is the scene class and there is one teaching prop, establishing and displaying a corresponding virtual scene based on the different classes of teaching props identified comprises:
establishing a scene corresponding to the scene-class teaching prop.
Optionally, when there are multiple teaching props and the multiple identified teaching prop types include at least the scene class together with the role class and/or the object class, establishing and displaying a corresponding virtual scene based on the different classes of teaching props identified comprises:
establishing a scene corresponding to the scene-class teaching prop, loading in the scene a virtual role corresponding to the role-class teaching prop, and/or loading in the scene a virtual object corresponding to the object-class teaching prop.
Optionally, after establishing and displaying the corresponding virtual scene based on the different classes of teaching props identified, the method further comprises:
having the virtual role and/or virtual object interact in the scene according to a random story plot or a predefined story plot.
Optionally, the teaching prop types further include a story class, and the identified teaching prop types include at least the scene class, the story class and the role and/or object class; establishing and displaying the corresponding virtual scene based on the different classes of teaching props identified then comprises:
having the virtual role and/or the virtual object interact, in the scene corresponding to the scene class, according to the story plot corresponding to the story class.
Optionally, when the identified teaching prop types include at least the role class or the object class, adjusting the virtual scene according to the types of the one or more teaching props comprises:
moving the virtual role or the virtual object according to the identified motion track;
when it moves to a different location in the virtual scene, applying a special effect to the virtual role or virtual object, and/or applying a special effect to the region of the scene where the virtual role or virtual object is located.
Optionally, when the multiple identified teaching prop types include at least the role class or the object class, adjusting the virtual scene according to the types of the one or more teaching props comprises:
moving the virtual roles or the virtual objects according to the identified motion tracks;
when the distance between adjacent virtual roles or virtual objects is less than a preset threshold, triggering the adjacent virtual roles or virtual objects to carry out a scene interaction.
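The preset-threshold trigger described above can be sketched as a simple Euclidean-distance check; the 2-D pixel-coordinate convention and the default threshold value are assumptions for illustration, not details from the patent:

```python
def should_interact(pos_a, pos_b, threshold=50.0):
    """Return True when two virtual roles/objects are close enough
    (Euclidean distance below the preset threshold) that a scene
    interaction should be triggered. The 50-pixel default is an
    assumed value."""
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    return (dx * dx + dy * dy) ** 0.5 < threshold
```

In a full implementation this check would run every frame over each pair of tracked props, firing the interaction once when the predicate first becomes true.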
Optionally, identifying one or more teaching props in the detection area comprises:
capturing images of the one or more teaching props in the detection area and recognizing the information of the teaching props using image recognition technology; and/or
sensing the identification code of the teaching prop in the detection area and recognizing the information of the teaching prop according to the identification code.
Optionally, the method further comprises:
monitoring one or more functional areas and, when a trigger signal from one or more of the functional areas is received, realizing the function corresponding to the one or more functional areas.
An embodiment of the present invention also provides a system for realizing scene interaction, the system comprising a processor and a memory for storing a computer program capable of running on the processor, wherein the processor is configured to execute the above scene interaction method when running the computer program.
An embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the above scene interaction method.
The method and system of the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, different types of teaching props are identified, a virtual scene is established and displayed on the screen based on the identified teaching props, and the virtual scene interface is adjusted accordingly based on the motion tracks of the teaching props. Moreover, the embodiments can not only identify a single teaching prop and establish the corresponding virtual scene, but can also identify combinations of roles or objects and establish interactions between different virtual roles/virtual objects in different scenes. Furthermore, for the movement of a single teaching prop in the detection area, a special effect can be generated for the corresponding virtual role/virtual object, or for its local region, at different locations of the scene; for the combined movement of multiple teaching props, a scene interaction between the teaching props is triggered automatically according to the distance between them. This shortens the development cycle, reduces development costs, improves the user experience, and brings commercial success.
Brief description of the drawings
Fig. 1 is a flowchart of the method for realizing scene interaction in an embodiment of the present invention;
Fig. 2a is a schematic diagram of a teaching aid border and teaching content area in an embodiment of the present invention;
Fig. 2b is a schematic diagram of a teaching aid border with a direction marker and teaching content area in an embodiment of the present invention;
Fig. 2c is another schematic diagram of a teaching aid border and teaching content area in an embodiment of the present invention;
Fig. 3 is a schematic diagram of a detection area in an embodiment of the present invention;
Fig. 4 is a flowchart of the detection area recognition method in an embodiment of the present invention;
Fig. 5a is the original image to be processed in an embodiment of the present invention;
Fig. 5b is the effect after binarization in an embodiment of the present invention;
Fig. 5c is the effect after coarse contour screening in an embodiment of the present invention;
Fig. 5d is the effect after isolated points are removed in an embodiment of the present invention;
Fig. 5e is the effect after line fitting in an embodiment of the present invention;
Fig. 5f is the effect after correction in an embodiment of the present invention;
Fig. 6a is a schematic diagram of teaching cards detected in the detection area in an embodiment of the present invention;
Fig. 6b is a schematic diagram of a virtual scene in an embodiment of the present invention;
Fig. 6c is another schematic diagram of teaching cards detected in the detection area in an embodiment of the present invention;
Fig. 6d is another schematic diagram of a virtual scene in an embodiment of the present invention;
Fig. 7a is a schematic diagram of a tangram detected in the detection area in an embodiment of the present invention;
Fig. 7b is a schematic diagram of a virtual object being presented in an embodiment of the present invention;
Fig. 7c is a schematic diagram of a teaching card and a tangram detected simultaneously in an embodiment of the present invention;
Fig. 8 is a flowchart of an implementation of S103 in an embodiment of the present invention;
Fig. 9 is a schematic diagram of functional areas in an embodiment of the present invention;
Fig. 10 is an example of a functional area in an embodiment of the present invention;
Fig. 11 is a structural diagram of the apparatus for realizing scene interaction in an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it. In addition, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not conflict.
To achieve the above objectives, as shown in Fig. 1, the present invention provides a method for realizing scene interaction, the method comprising:
S101, identifying one or more teaching props in a detection area, the teaching prop types including a scene class, a role class and an object class;
It should be noted that the executing subject of the embodiments of the present invention may be a terminal, such as an ARM/FPGA device with a typical von Neumann architecture, or a mainstream mobile terminal such as a mobile phone or a PDA; it may also be a server with a typical architecture, or a cloud composed of multiple distributed servers. The embodiments of the present invention place no restriction on the executing subject.
A teaching prop may be a physical object carrying a radio-frequency identification code, such as a smart toy with a built-in radio-frequency chip or sensing device; it may be a physical object carrying a two-dimensional identification code, such as a desk printed with a QR code or a card printed with a QR code; it may be a teaching card for an individual pinyin syllable, tone, Chinese character or number (the card material may be paper, plastic sheet, wood, rubber, sheet metal, etc.); or it may be a small toy or small physical object, such as an apple, a pen holder, building blocks or a tangram. The embodiments of the present invention can recognize such small toys/objects of different colors and different shapes (e.g. polygon, circle, ellipse, sector, etc.), and can not only accurately locate at which specific position in space the physical prop lies, but also provide indicative information such as its rotation angle and whether it needs to be straightened, facilitating interactive teaching with students. A teaching card may also be a conventional teaching aid printed with pinyin, tones, Chinese characters or numbers, such as a book, counting stick, ruler, set square, pointer or compasses. Taking the teaching card as an example, in the embodiments of the present invention a teaching card has a border; the border may be closed (or semi-closed) and of a certain width, with an obvious contrast between the border color and the background color of the teaching content area. For example, the border may be black and the content-area background white, or the border white and the content-area background black (as shown in Fig. 2a). The border is polygonal, e.g. a triangle, rectangle, rhombus or trapezoid; it may also be an ellipse or a circle.
For convenience of explanation, the embodiments of the present invention take a quadrilateral teaching card as an example. As shown in Fig. 2a, the quadrilateral may be rectangular or roughly square, e.g. a square, rectangle or rounded rectangle, and there is a certain spacing between the border and the teaching content area. The border gives the quadrilateral teaching card the following technical effect: an image recognition algorithm can quickly locate and identify the border, and thereby quickly recognize the teaching content in the content area inside it. Compared with existing image recognition techniques, the border-first scheme of the embodiments of the present invention ensures a shorter recognition time for the teaching card and a higher recognition accuracy. In addition, so that the teaching content on the prop can be captured from any angle, and because some content looks ambiguous in different orientations (e.g. 6 and 9, u and n), the teaching card further includes a direction marker located in the content area or placed on the border. As shown in Fig. 2b, the direction marker may be one thicker border side (i.e. one thick side while the other three sides are narrow), or a dot, two rectangular corner marks, a horizontal line, etc. If the border is a circle, the direction marker may be added on one segment of the circular border, e.g. by thickening a segment directly above the circular border, or by adding one or more dots, or by adding some irregular figures. The direction marker has the following technical effect: the orientation of the teaching content can be located quickly, so the content information can be located and recognized quickly, improving recognition accuracy and speed.
Fig. 2c is an example of a teaching card in an embodiment of the present invention: a teaching card with a closed rounded-rectangle border. The border has a certain width (it can be divided into an inner edge and an outer edge), and there is a certain interval between the border and the teaching content area. Moreover, the border has a thick bottom side and three thin sides, the bottom side being the direction marker of the card. The embodiments of the present invention can quickly locate and identify the border, identify the rotation angle of the card according to the direction marker, and thereby quickly recognize the teaching content information in the content area inside the border.
The detection area in the embodiments of the present invention may be a planar region of regular shape, such as a rectangle, circle, triangle or sector; it may also be a planar region of irregular shape, or a three-dimensional spatial region (such as an entire classroom); the embodiments of the present invention place no restriction on this. For convenience of explanation, the embodiments use a quadrilateral frame as the detection area in the detailed explanation below.
Specifically, identifying one or more teaching props in the detection area may proceed as follows:
As shown in Fig. 3, the detection area is a quadrilateral region delimited by dashed lines, and the camera captures this quadrilateral region. The dashes are elongated: in the image captured by the camera, each dash is at least 5 pixels long and at least 3 pixels wide, with a length-to-width ratio greater than 1.5:1 and less than 10:1. The spacing between dashes is sufficiently large that it is at least 3 pixels in the captured image. During recognition, teaching props placed inside this dashed frame are identified.
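These geometric constraints translate directly into a screening predicate for candidate dash blobs; the following is an illustrative sketch (not code from the patent), with the thresholds taken from the text above:

```python
def is_valid_dash(length_px, width_px):
    """Screen a candidate blob against the dash constraints stated in
    the text: length >= 5 px, width >= 3 px, and a length-to-width
    ratio strictly between 1.5:1 and 10:1."""
    if length_px < 5 or width_px < 3:
        return False
    ratio = length_px / width_px
    return 1.5 < ratio < 10.0
```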
The region detection and recognition method is shown in Fig. 4, and proceeds as follows:
S1011, image binarization: the image is converted into a black-and-white binary image using an adaptive thresholding algorithm, so that the dashes appear as independent black regions in the binary image; the original image is shown in Fig. 5a, and the binarized image in Fig. 5b.
S1012, extracting black-region contours: a contour tracing algorithm is applied to the black regions in the binary image to obtain the contour lines of all black regions, which are added to a contour set;
S1013, taking out a contour: a contour line is taken out of the contour set (and deleted from it) for analysis;
S1014, coarse contour screening: parameters such as the contour length, the enclosed area, the center of gravity and the perimeter-to-area ratio are computed, and a preliminary screening with specific thresholds removes candidates whose shape is obviously inconsistent with a dash; the image after coarse screening is shown in Fig. 5c;
S1015, dash tangent analysis: an ellipse is fitted to the contour, and the major axis of the ellipse is taken as the tangent direction;
S1016, adding to the dash set: contours that pass the screening are put into the dash set for analysis by subsequent algorithms;
S1017, point-set analysis: further analysis is performed on the screened dashes to determine the extent of the play area;
S1018, dash nearest-neighbor analysis: the dash set is searched for dashes that are adjacent in position and similar in tangent direction.
S1019, removing isolated points: dashes with fewer than 5 neighbors are deleted (because a true dash is surrounded by other dashes of the same direction); the image after isolated points are removed is shown in Fig. 5d;
S1020, Hough clustering: a Hough voting analysis is performed on the remaining dashes to obtain a series of clusters;
S1021, cluster screening: clusters containing few dashes are removed first; then, among the remaining clusters, the two horizontal-direction clusters (angle with the horizontal line less than 45 degrees) containing the most dashes are selected, followed by the two vertical-direction clusters (angle with the vertical line less than 45 degrees) containing the most dashes. If the selection fails (not enough valid clusters), this image frame is not processed further;
S1022, line fitting: straight lines are fitted to the selected horizontal and vertical clusters respectively to determine the boundary equations of the play area; the effect is shown in Fig. 5e.
S1023, computing correction parameters: the intersections at the four corners of the play area are computed from the fitted line equations, and a regression-analysis algorithm then computes the correction parameters of the play area; this set of parameters can be used to rectify subsequent image frames. The final effect is shown in Fig. 5f.
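The corner computation in S1023 amounts to intersecting the fitted horizontal and vertical boundary lines pairwise. A minimal sketch, assuming each fitted line is stored in the general form a·x + b·y = c (the representation is an assumption for illustration):

```python
def line_intersection(l1, l2, eps=1e-12):
    """Intersect two lines given as (a, b, c) with a*x + b*y = c,
    via Cramer's rule; returns None for (near-)parallel lines.
    Used here to find one play-area corner from one horizontal and
    one vertical fitted edge."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < eps:
        return None  # parallel: no unique intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

Applying this to the four pairs of adjacent edges yields the four corner points from which the perspective-correction parameters can then be estimated.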
Optionally, when the teaching prop is a smart toy equipped with radio-frequency identification, the detection area may also be a wireless sensing area: when the smart toy appears in the detection area, it is sensed automatically and the virtual scene/role/object corresponding to the smart toy is displayed.
In addition, when the teaching prop is a teaching card with a border, the method for identifying the teaching prop is specifically:
S1101, image binarization. The image captured by the camera is converted into a black-and-white binary image using an adaptive thresholding algorithm, to highlight the border of the teaching prop. The judgment rule of the adaptive thresholding algorithm is:
b(v) = 0 (black) if v < (1/|N(v)|) · Σ_{v′ ∈ N(v)} v′ − C, and b(v) = 255 (white) otherwise,
where v is the gray value of the pixel, N(v) is the set of pixels in the neighborhood of v, C is a preset threshold, and v′ denotes a pixel inside the neighborhood N(v).
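A minimal pure-Python sketch of this thresholding rule (the window size and the constant C are assumed values; a practical implementation would use an integral image or a library routine for speed):

```python
def adaptive_threshold(gray, win=5, c=10):
    """Binarize a row-major grayscale image: a pixel becomes black (0)
    when its value is below the local neighbourhood mean minus C, as
    in the judgment rule above; otherwise it becomes white (255)."""
    h, w = len(gray), len(gray[0])
    r = win // 2
    out = [[255] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # neighbourhood N(v), clipped at the image boundary
            block = [gray[yy][xx]
                     for yy in range(max(0, y - r), min(h, y + r + 1))
                     for xx in range(max(0, x - r), min(w, x + r + 1))]
            if gray[y][x] < sum(block) / len(block) - c:
                out[y][x] = 0
    return out
```

Subtracting C makes the rule robust to slowly varying illumination: only pixels clearly darker than their surroundings (such as the card border) become black.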
S1102, binary contour extraction is performed on the result: the connected regions in the binary image are scanned to obtain data such as the contour lines, area and perimeter of each region.
S1103, contour screening: a fast geometric morphological analysis is performed on the contours extracted in the previous step, and only contours resembling quadrilaterals are retained (the usage scenarios of the embodiments of the present invention include but are not limited to quadrilaterals; the quadrilateral is used only as an example), reducing the processing time of subsequent steps. Specifically, the contour line is first smoothed once using a local-average method; the tangent vector at each point of the contour is then computed using a neighbor difference method; finally, all tangent-vector coordinates are analyzed with hierarchical clustering, and if exactly 4 significant clusters are formed, the contour is considered similar to a quadrilateral.
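The quadrilateral test of S1103 can be sketched as follows, with a fixed-bin angle histogram standing in for the hierarchical clustering of tangent vectors (the bin count and significance fraction are assumed values, not parameters from the patent):

```python
import math

def looks_like_quadrilateral(contour, bins=8, min_frac=0.15):
    """Sketch of the shape test above: tangent vectors along a closed
    contour (central differences between neighbours) should form
    exactly 4 significant direction clusters for a quadrilateral."""
    n = len(contour)
    hist = [0] * bins
    for i in range(n):
        x0, y0 = contour[i - 1]
        x1, y1 = contour[(i + 1) % n]
        angle = math.atan2(y1 - y0, x1 - x0)  # tangent direction
        hist[int((angle + math.pi) / (2 * math.pi) * bins) % bins] += 1
    significant = sum(1 for cnt in hist if cnt >= min_frac * n)
    return significant == 4
```

Because tangent vectors are directed, a traversed quadrilateral yields four clusters (one per side) while a triangle yields three and a smooth blob yields none that dominate, which is exactly the discrimination the step needs.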
S1104, splitting into four sides. Based on the cluster analysis of the previous step, the contour point coordinates corresponding to the 4 significant clusters are extracted into 4 sets, corresponding to the fitting data of the 4 sides of the quadrilateral.
S1105, least-squares fitting of the data from S1104. A straight line is fitted to each of the four sides' data sets to obtain the equations of the four sides. The present invention performs the line fitting with the least-squares algorithm, whose optimization target is to minimize the sum of squared residuals Σᵢ (a·xᵢ + b − yᵢ)² over the line parameters a and b.
Once the fitting of the four sides is complete, the specific position of the teaching card's border in the image can be determined.
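The per-side fit of S1105 is ordinary least squares; a closed-form sketch for one side's point set, assuming the side is parameterized as y = a·x + b:

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to one side's pixel
    coordinates (the per-side fitting step); returns (a, b)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

A near-vertical side would be fitted as x = a·y + b instead to keep the denominator well-conditioned; a robust implementation picks the parameterization per side.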
S1106, rectifying the image in the neighboring region. Because shooting angles vary, the card can be deformed in the image. The border obtained in the previous step can be used to correct the deformation of the card; at this point the card content may still be in one of four orientations: 0 degrees, 90 degrees, 180 degrees or 270 degrees.
S1107, detecting the direction marker. The embodiments of the present invention use a machine learning method to detect and recognize the direction marker on the card: thousands of card images in different orientations are collected and labeled (for example, a 5-class labeling: 0 degrees, 90 degrees, 180 degrees, 270 degrees, and no marker), and a deep neural network is then trained to obtain a direction-marker classifier, which performs direction-marker detection and discrimination on the result of the previous step. The classifier trained by this scheme can reach a recognition accuracy of 99.6% or more for direction markers.
S1108a, rotating the image by the marker. A card whose direction marker has been detected can be straightened according to the marker, i.e. rotated to the horizontal position.
S1108b, generating images in all orientations. For a card whose direction marker is not detected, the embodiments of the present invention directly generate images in the 4 orientations for the subsequent content-recognition algorithm to analyze.
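Once the classifier of S1107 has reported a rotation of 0/90/180/270 degrees, the straightening of S1108a reduces to quarter-turn rotations; a minimal sketch for a row-major image (the data layout is an assumption for illustration):

```python
def rotate90_cw(img, times=1):
    """Rotate a row-major 2-D image clockwise by `times` quarter
    turns; e.g. a card found rotated 90 degrees counter-clockwise is
    straightened with times=1."""
    for _ in range(times % 4):
        # reverse the rows, then transpose: one clockwise quarter turn
        img = [list(row) for row in zip(*img[::-1])]
    return img
```

The same routine generates the four candidate orientations needed by S1108b when no marker is found.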
S1109, recognizing the card content. The present invention recognizes card content using a machine learning method: a sample library first stores thousands of images of the defined cards; histogram-of-oriented-gradients (HOG) features are extracted from these samples, and a multi-class SVM classifier is trained. If the samples in the library are very numerous (more than 1,000), a deep neural network can also be trained directly. The image obtained in the previous step is discriminated by the classifier (if no direction marker was detected earlier, the images in all 4 orientations are discriminated, and it suffices for one of them to be valid); after discrimination, the result is compared once more (verified) against the standard sample in the library, and only after the verification succeeds is the result considered successfully detected.
S1100, outputting the recognition result. If the previous step successfully recognizes a card, the category, position and direction information of the card are output.
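The verification at the end of S1109 — comparing the classifier's pick once more against the stored standard sample — can be sketched as a distance check on feature vectors; the metric, threshold and data layout here are illustrative assumptions:

```python
def verify_prediction(label, features, templates, max_dist=0.5):
    """After the classifier outputs `label`, compare the feature
    vector once more against that label's stored template, and accept
    the result only when the Euclidean distance is small enough."""
    template = templates.get(label)
    if template is None:
        return False
    dist = sum((f - t) ** 2 for f, t in zip(features, template)) ** 0.5
    return dist <= max_dist
```

This second pass rejects confident-but-wrong classifier outputs, which is what lets only verified detections reach the output step.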
S102, establishing and displaying a corresponding virtual scene based on the different classes of teaching props identified;
According to the type and number of teaching props, the embodiments of the present invention can be further divided into recognition of a single teaching prop and combined recognition of multiple teaching props.
Recognition of a single teaching prop is as follows:
When the identified teaching prop type is the role class or the object class, the corresponding virtual role or virtual object is established and displayed on the screen. For example, if the teaching prop is a kitten or puppy teaching card, a moving graphic or animation of the kitten or puppy is displayed on the screen; optionally, the kitten's/puppy's cry may also be output, and/or a corresponding piece of background music played through the loudspeaker.
When the identified teaching prop type is the scene class, a scene corresponding to the scene-class teaching prop is established. For example, if the identified teaching prop is a scene teaching card with "river", a river background is displayed on the screen; similarly, when the identified teaching prop is a scene teaching card with "playground", a playground background is correspondingly displayed on the screen.
The scheme for combined recognition of multiple teaching props is as follows:
When the identified teaching prop types include multiple role classes, multiple object classes, or a combination of the role class and the object class, the virtual roles and/or virtual objects interact in the scene according to a random story plot or a predefined story plot, or different sounds are output according to the different virtual roles/virtual objects. For example, when the virtual roles are a kitten and a puppy, the kitten and puppy can chase and romp according to a predefined story plot, or carry out a dialogue, or the kitten's and/or puppy's cries or background music can be played and animated images of the kitten and/or puppy displayed. A predefined story plot can be a trigger-response event: for example, when a kitten and a puppy are identified, a scene of the kitten and puppy chasing and romping is automatically triggered and displayed. Alternatively, when a cat and a mouse are identified, a scene of the cat chasing the mouse is automatically triggered; or, when a monkey and a bunch of bananas are identified, a scene of the monkey eating the bananas is automatically triggered. The predefined story plot can be a piece of conditional code, or a signal-and-slot response mechanism; the embodiments of the present invention will not expand on this further.
Optionally, the embodiment of the present invention can respond randomly with different storylines. In the example above, after the kitten and puppy are recognized for the first time, they chase and roughhouse; after they are recognized a second time, they instead carry out a dialogue, with the dialogue content and scene unfolding at random. The random-response mechanism can be implemented by randomly selecting one of the pre-stored storylines and loading it onto each virtual character/object: for example, the kitten runs along a preset trajectory according to the randomly selected storyline, the puppy follows that trajectory in pursuit according to the same storyline, and tense music can be output through the loudspeaker at the same time. Fig. 6a and Fig. 6b show one example of the embodiment of the present invention: Fig. 6a shows the scene-class "playground" teaching card and the character-class "cat" and "mouse" teaching cards appearing in the detection area at the same time, and Fig. 6b shows the content displayed on the screen accordingly. As shown in Fig. 6a and Fig. 6b, the virtual characters "cat" and "mouse" and the virtual scene "playground" each correspond to a teaching card in the detection area. Optionally, when the cat and mouse appear in the playground, an animation of the cat chasing the mouse is triggered automatically/randomly.
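The random-response mechanism might be sketched as follows, assuming storylines are pre-stored as per-role action tables; the action and music names are invented purely for illustration:

```python
import random

# Pre-stored storylines; each assigns an action to every role involved,
# plus an accompanying piece of music (all names are illustrative).
STORYLINES = [
    {"cat": "run_preset_path", "dog": "chase_along_path", "music": "tense"},
    {"cat": "dialogue_script", "dog": "dialogue_script", "music": "calm"},
]

def pick_storyline(rng=None):
    """Randomly select one pre-stored storyline; the caller then loads the
    per-role actions onto each virtual character/object."""
    rng = rng or random.Random()
    return rng.choice(STORYLINES)
```

Each new recognition of the same prop combination calls `pick_storyline` again, so successive recognitions can unfold differently, as in the kitten-and-puppy example above.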
Optionally, the teaching prop types further include a story class. A story-class teaching prop can be a teaching card printed with story text, such as "playing", "roughhousing", or "playing games". When the story information of the story-class teaching prop is recognized, the different classes of teaching props interact in the virtual scene according to the storyline, based on that story information. Continuing the example above: if the story teaching card "playing" is recognized in the detection area after the kitten and puppy have been recognized, the virtual kitten and puppy are triggered to play; if the story teaching card "roughhousing" is recognized, the virtual kitten and puppy roughhouse. That is, in the earlier example the storyline is triggered automatically or occurs at random, whereas in this example the storyline is fixed into the form of a teaching card for students to combine and use freely.
As shown in Fig. 7 a and Fig. 7 b, the embodiment of the present invention can be spliced into specific figure using seven-piece puzzle, such as seven-piece puzzle is spelled A windmill style is picked out, then after identifying, can show the windmill on the screen.In addition, can also know in addition to may recognize that teaching card Not Chu teaching card and the seven-piece puzzle combination.It is specific as shown in Figure 7 c.
S103: after a position change of the one or more teaching props in the detection area is detected, the virtual scene is adjusted accordingly, in the corresponding virtual scene, according to the types of the one or more teaching props.
As with S102, S103 can be divided according to the number and types of teaching props:
Movement of a single teaching prop:
When the number of teaching props is one and the teaching prop is of the character/object class, the corresponding virtual character/virtual object is moved according to the recognized motion trajectory;
when it moves to a different location in the virtual scene, special effects are applied to the virtual character or virtual object, and/or to the region of the scene where the virtual character or virtual object is located. For example, when the recognized teaching prop is a kitten, the image of a virtual kitten is displayed on the screen; when the kitten teaching card moves in the detection area, for example from left to right, the virtual kitten walks from the left side of the screen to the right side, staying synchronized with the motion trajectory of the teaching card in the detection area. At this point the kitten can perform various effects, such as meowing a couple of times, rolling around, or having a burst of color displayed in its surrounding region. As another example, when the kitten teaching card is detected moving from bottom to top, the cartoon kitten image gradually shrinks as the virtual character moves upward, finally shrinking to a small dot; visually, this resembles the cartoon kitten walking farther and farther away.
Movement of a single teaching prop in a single scene:
Fig. 6 a is the teaching stage property in detection zone, and Fig. 6 b is the virtual scene shown on the screen after identifying, such as Fig. 6 a And shown in Fig. 6 b, the teaching stage property identified is kitten and recreation ground, then shows a recreation ground on the screen, and kitten is shown in trip In happy field, and when in detection zone kitten teaching card when moving, correspondingly, virtual kitten role on the screen also exists Mobile, and when kitten is moved at the bench of recreation ground, kitten can jump onto the bench, alternatively, when to be moved to sliding slide attached for kitten When close, meeting automatic trigger plays the animation that kitten is played on sliding slide.That is, under specific scene areas, it can be automatic Trigger the special efficacy feedback of virtual role.
Movement of multiple teaching props:
the virtual characters or virtual objects are moved according to the recognized motion trajectories;
when the distance between adjacent virtual characters or virtual objects falls below a preset threshold, a scene interaction between the adjacent virtual characters or virtual objects is triggered. Fig. 6c shows the scene-class "river" teaching card and the character-class "pony", "squirrel", and "old ox" teaching cards detected in the detection area at the same time. Fig. 6d shows the rendered effect of the corresponding virtual scene on the screen. As shown in Fig. 6c and Fig. 6d, the "pony" card is adjacent to the "squirrel" card, and the "squirrel" card is adjacent to the "old ox" card; if the "squirrel" card is then moved so that its actual distance from the "old ox" card falls below the preset threshold, the screen correspondingly shows the squirrel sitting on the ox's back. Optionally, the pony, squirrel, and old ox can also start a dialogue among themselves, whose content can follow the dialogue of a classic storyline, such as the story of the pony crossing the river.
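The distance-threshold trigger might look like the following minimal sketch, with character positions and the threshold in arbitrary screen units (names here are illustrative):

```python
import math

def interactions(positions, threshold):
    """Return the pairs of characters whose on-screen distance falls below
    the preset threshold, i.e. the pairs whose scene interaction should be
    triggered. `positions` maps character name -> (x, y)."""
    names = sorted(positions)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (ax, ay), (bx, by) = positions[a], positions[b]
            if math.hypot(ax - bx, ay - by) < threshold:
                pairs.append((a, b))
    return pairs
```

Running this each frame over the current character positions yields the squirrel/ox pairing once the squirrel card is moved close enough to the ox card.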
It should be noted that, when the embodiment of the present invention collects images with a camera and performs S103 through further image analysis, the implementation can be as shown in Fig. 8, specifically as follows:
S1031, image difference analysis: the image is evenly cut into multiple small blocks, a gradient-orientation histogram is computed within each small block, and the histograms of corresponding regions in the previous and current frames are compared differentially; blocks whose variation exceeds a set threshold are marked as changed regions, and only the changed regions need to be analyzed subsequently. This step excludes regions that do not need processing, reducing the time overhead of the subsequent algorithms and improving the response speed of the whole pipeline.
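S1031 could be sketched as follows with NumPy, assuming grayscale frames; the tile size, bin count, and threshold are illustrative parameters, not values from the embodiment:

```python
import numpy as np

def changed_tiles(prev, curr, tile=8, bins=9, thresh=0.25):
    """Cut both frames into tile x tile blocks, build a normalized
    gradient-orientation histogram per block, and return the (row, col)
    offsets of blocks whose histogram difference exceeds the threshold."""
    def tile_hists(img):
        gy, gx = np.gradient(img.astype(float))   # per-pixel gradients
        ang = np.arctan2(gy, gx)                  # gradient orientation
        h, w = img.shape
        hists = {}
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                block = ang[y:y + tile, x:x + tile]
                hist, _ = np.histogram(block, bins=bins, range=(-np.pi, np.pi))
                hists[(y, x)] = hist / max(hist.sum(), 1)   # normalize
        return hists
    hp, hc = tile_hists(prev), tile_hists(curr)
    return [k for k in hp if np.abs(hp[k] - hc[k]).sum() > thresh]
```

Only the returned blocks are passed on to the target-detection step S1032, which is what saves time on the unchanged parts of the frame.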
S1032, target detection within the changed regions: the changed regions are scanned using machine-learning algorithms and valid detection results are output. Specifically, when the teaching prop is a teaching card with a frame, the algorithm described in S1101-S1100 can be used for fast detection and recognition; contour-shape analysis and contour-extraction algorithms can be used to detect and recognize building-block objects; and deep neural networks can be used to detect and recognize other generic objects (such as a desk, an apple, etc.). Deep-neural-network recognition algorithms belong to the prior art and are not repeated here.
S1033, frame-to-frame data association: because some objects detected in the current frame may be identical to objects in the previous frame, an association analysis is required. The embodiment of the present invention comprehensively considers factors such as the similarity and distance of objects across frames, and matches the two sets using a greedy strategy. (Greedy strategy example: the previous frame contains two triangles and the current frame detects three, so the user has most likely placed one new triangle and left the other two unmoved. By computing the relative positions, color similarity, etc. of these triangles, two triangles in the later frame are found to be close in position, and essentially identical in angle and color, to the two in the previous frame; these two are therefore considered the same as before, and the third is considered newly placed.) After matching, objects in the current frame with no match are considered newly appeared, while unmatched objects from the previous frame require a subsequent tracking algorithm to attempt to establish a relationship. For example, if no "cat" card was detected in the previous frame but one is detected in the current frame, that "cat" is considered newly appeared. Conversely, if a "dog" card was detected in the previous frame but not in the current frame, the user may have taken the card away, or may be moving it so that motion blur prevented detection. To distinguish these two cases, a target-tracking algorithm analyzes the card's whereabouts: if a similar blurred target can be found nearby, the user is considered to be moving the card; otherwise, it has likely been taken away.
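The greedy association of S1033 can be sketched on object centers alone; this is a simplification, since the embodiment also weighs color and shape similarity in the match cost:

```python
import math

def greedy_match(prev_objs, curr_objs, max_dist=50.0):
    """Greedy frame-to-frame association on object centers: repeatedly link
    the closest unmatched (previous, current) pair until no remaining pair
    is nearer than max_dist. Returns (matches, new, lost), where unmatched
    current objects are 'new' and unmatched previous objects are 'lost'
    (to be handed to the tracker: removed vs. motion-blurred)."""
    candidates = sorted(
        (math.hypot(p[0] - c[0], p[1] - c[1]), i, j)
        for i, p in enumerate(prev_objs)
        for j, c in enumerate(curr_objs)
    )
    matches, used_p, used_c = {}, set(), set()
    for dist, i, j in candidates:
        if dist > max_dist:
            break                       # all remaining pairs are too far apart
        if i not in used_p and j not in used_c:
            matches[j] = i              # current object j continues previous i
            used_p.add(i)
            used_c.add(j)
    new = [j for j in range(len(curr_objs)) if j not in used_c]
    lost = [i for i in range(len(prev_objs)) if i not in used_p]
    return matches, new, lost
```

In the triangle example above, the two nearly stationary triangles match their previous-frame counterparts and the third lands in `new`; a vanished "dog" card would land in `lost` for the tracker in S1034 to resolve.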
S1034, target tracking: an existing target-tracking algorithm is used to analyze the teaching cards moving in the detection area, and the tracking and positioning results are output.
S1035, data fusion: the detection results are integrated, and the display results in the examples above are output.
It should be noted that, in the embodiment of the present invention, the image to be detected further includes functional areas. There can be one or more functional areas; they can be distributed around the detection area (as shown in Fig. 9), or distributed within the detection area, or exist independently of it. The purpose of a functional area is to fix one or more functions in place: when a touch/click/slide or similar user operation on the functional area is detected, the corresponding function is triggered. That is, the one or more functional areas are monitored, and when a trigger signal from one or more functional areas is received, the function corresponding to those functional areas is realized.

Fig. 10 is a typical schematic diagram of functional areas. As shown in Fig. 10, several button-pattern marks appear on the right side of the detection area (the letters A and B, a microphone mark, and a palm mark); this region is the functional area. A and B are buttons that can be assigned different functions, such as volume up/down, brightness up/down, or click/undo. Touching the microphone mark can enable functions such as speech recognition, while touching the palm mark can enable switching, clicking, or other operations. When the user touches/clicks/slides on the functional area, or uses certain physical objects to occlude or touch it, different operations corresponding to the different marks can be triggered. The four image marks can be defined within a single functional area, or as a different number of functional areas: for example, one mark per functional area (four marks, four functional areas), or two marks combined into one functional area with each remaining mark forming its own, and so on. The functional areas can be recognized either with a touch pad or with image recognition. With a touch pad, an electronic structure with touch capability is placed in the corresponding functional area and produces a response signal when the user touches it. With image recognition, an image-recognition device monitors the functional areas in real time: if a functional area in the current frame is occluded, or its image changes substantially compared with the previous frame, beyond a certain threshold, the functional area is considered triggered, a corresponding response signal is generated, and the corresponding function is realized. In addition, to prevent false triggers, when a change in a functional area is detected in the current frame, the changes in the next few frames can be checked continuously; only if the functional area does not return to its original state over several consecutive subsequent frames is the user considered to have triggered the area deliberately, and the relevant function operation is then executed.
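The multi-frame false-trigger check might be sketched as a consecutive-frame debounce; the confirmation frame count is an assumed parameter, not a value from the embodiment:

```python
def confirm_trigger(change_flags, confirm_frames=3):
    """A functional area counts as deliberately triggered only if it stays
    changed (e.g. occluded) for confirm_frames consecutive frames; a brief
    flicker that reverts sooner is treated as a false trigger.
    `change_flags` is the per-frame sequence of changed/not-changed flags."""
    run = 0
    for changed in change_flags:
        run = run + 1 if changed else 0   # reset on any frame that reverts
        if run >= confirm_frames:
            return True
    return False
```

A hand resting on the mark for three frames fires the function; a card briefly passing over the mark does not.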
An embodiment of the present invention further provides a device, which includes a processor and a memory for storing a computer program runnable on the processor, wherein the processor, when running the computer program, executes the above method for realizing scene interaction.
An embodiment of the present invention further provides a storage medium storing computer instructions which, when executed by a processor, implement the above method for realizing scene interaction.
Fig. 11 is a schematic structural diagram of a device provided by an embodiment of the present invention. The device 1100 may include one or more central processing units (CPUs) 1110 (for example, one or more processors), a memory 1120, and a storage medium 1130 storing one or more application programs 1132 or data 1134 (for example, one or more mass storage devices). The memory 1120 and the storage medium 1130 may provide transient or persistent storage. The program stored in the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations on the device. Further, the central processing unit 1110 may be configured to communicate with the storage medium 1130 and execute, on the device 1100, the series of instruction operations in the storage medium 1130. The device 1100 may also include one or more power supplies 1140, one or more wired or wireless network interfaces 1150, and one or more input/output interfaces 1160. The steps performed in the method embodiments above can be based on the device structure shown in Fig. 11.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the modules and method steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and modules described above can refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
The parts of this specification are described in a progressive manner; identical or similar parts of the embodiments can be referred to one another, and each embodiment focuses on its differences from the others. In particular, the device and system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
Finally, it should be understood that the foregoing is merely a preferred embodiment of the technical solution and is not intended to limit the protection scope of the present application. Obviously, those skilled in the art can make various modifications and variations to the application without departing from its scope. If these modifications and variations fall within the scope of the claims of the present application and their technical equivalents, then any modification, equivalent replacement, improvement, and the like shall be included within the protection scope of the present application.

Claims (12)

1. A method for realizing scene interaction, characterized in that the method comprises:
recognizing one or more teaching props in a detection area, the teaching prop types including a scene class, a character class, and an object class;
establishing and displaying a corresponding virtual scene based on the different classes of recognized teaching props;
after detecting a position change of the one or more teaching props in the detection area, adjusting the virtual scene accordingly, in the corresponding virtual scene, according to the types of the one or more teaching props.
2. The method according to claim 1, characterized in that the teaching prop types further include a story class, and establishing and displaying the corresponding virtual scene based on the different classes of recognized teaching props comprises:
recognizing story information of the story-class teaching prop, and, based on the story information, playing animation and/or sound for the different classes of teaching props in the virtual scene according to the storyline.
3. The method according to claim 1, characterized in that, when the recognized teaching prop type is the character class or the object class and the number of teaching props is one, establishing and displaying the corresponding virtual scene based on the different classes of recognized teaching props comprises:
establishing a corresponding virtual character or virtual object;
or, when the teaching prop type is the scene class and the number of teaching props is one, establishing and displaying the corresponding virtual scene based on the different classes of recognized teaching props comprises:
establishing a scene corresponding to the scene-class teaching prop.
4. The method according to claim 1, characterized in that, when there are multiple teaching props and the multiple recognized teaching prop types include at least the scene class together with the character class and/or the object class, establishing and displaying the corresponding virtual scene based on the different classes of recognized teaching props comprises:
establishing a scene corresponding to the scene-class teaching prop, and loading, in the scene, a virtual character corresponding to the character-class teaching prop, and/or loading, in the scene, a virtual object corresponding to the object-class teaching prop.
5. The method according to claim 4, characterized in that, after establishing and displaying the corresponding virtual scene based on the different classes of recognized teaching props, the method further comprises:
causing the virtual character and/or virtual object to interact in the scene according to a random storyline or a predefined storyline, or outputting a sound corresponding to the virtual character and/or virtual object.
6. The method according to claim 1, characterized in that the teaching prop types further include a story class, and the recognized teaching prop types include at least the scene class, the story class, and the character class and/or the object class; establishing and displaying the corresponding virtual scene based on the different classes of recognized teaching props then comprises:
causing the virtual character and/or the virtual object to interact, in the scene corresponding to the scene class, according to the storyline corresponding to the story class.
7. The method according to claim 1, characterized in that, when the recognized teaching prop types include at least the character class or the object class, adjusting the virtual scene accordingly according to the types of the one or more teaching props comprises:
moving the virtual character or the virtual object according to the recognized motion trajectory;
when it moves to a different location in the virtual scene, applying special effects to the virtual character or virtual object, and/or applying special effects to the region of the scene where the virtual character or virtual object is located.
8. The method according to claim 1, characterized in that, when the multiple recognized teaching prop types include at least the character class or the object class, adjusting the virtual scene accordingly according to the types of the one or more teaching props comprises:
moving the virtual characters or the virtual objects according to the recognized motion trajectories;
when the distance between adjacent virtual characters or virtual objects falls below a preset threshold, triggering a scene interaction between the adjacent virtual characters or virtual objects.
9. The method according to claim 1, characterized in that recognizing the one or more teaching props in the detection area comprises:
collecting images of the one or more teaching props in the detection area, and recognizing the information of the teaching props using image-recognition technology; and/or
sensing an identification code of the teaching prop in the detection area, and recognizing the information of the teaching prop from the identification code.
10. The method according to claim 1, characterized in that the method further comprises:
monitoring one or more functional areas, and, when a trigger signal from the one or more functional areas is received, realizing the function corresponding to the one or more functional areas.
11. A system for realizing scene interaction, characterized in that the system comprises: a processor and a memory for storing a computer program runnable on the processor, wherein the processor, when running the computer program, executes the method for scene interaction according to any one of claims 1 to 10.
12. A computer-readable storage medium storing computer-executable instructions, characterized in that the computer-executable instructions are used to execute the method for scene interaction according to any one of claims 1 to 10.
CN201711302296.2A 2017-12-10 2017-12-10 A kind of method and system for realizing scene interactivity Pending CN109903391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711302296.2A CN109903391A (en) 2017-12-10 2017-12-10 A kind of method and system for realizing scene interactivity

Publications (1)

Publication Number Publication Date
CN109903391A true CN109903391A (en) 2019-06-18

Family

ID=66941445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711302296.2A Pending CN109903391A (en) 2017-12-10 2017-12-10 A kind of method and system for realizing scene interactivity

Country Status (1)

Country Link
CN (1) CN109903391A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115580716A (en) * 2022-12-09 2023-01-06 普赞加信息科技南京有限公司 Projection picture output method, system and equipment based on physical module

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366610A (en) * 2013-07-03 2013-10-23 熊剑明 Augmented-reality-based three-dimensional interactive learning system and method
CN104102412A (en) * 2014-07-24 2014-10-15 央数文化(上海)股份有限公司 Augmented reality technology-based handheld reading equipment and reading method thereof
CN106530392A (en) * 2016-10-20 2017-03-22 中国农业大学 Method and system for interactive display of cultivation culture virtual scene
CN106571072A (en) * 2015-10-26 2017-04-19 苏州梦想人软件科技有限公司 Method for realizing children education card based on AR
US20170155631A1 (en) * 2015-12-01 2017-06-01 Integem, Inc. Methods and systems for personalized, interactive and intelligent searches
CN106843532A (en) * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 The implementation method and device of a kind of virtual reality scenario
CN107122051A (en) * 2017-04-26 2017-09-01 北京大生在线科技有限公司 Build the method and system of three-dimensional teaching environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DAI, Xiaolin et al.: "Research on Intelligent Virtual Characters in an Automatic Animation Generation System", Computer and Information Technology (《电脑与信息技术》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115580716A (en) * 2022-12-09 2023-01-06 普赞加信息科技南京有限公司 Projection picture output method, system and equipment based on physical module
CN115580716B (en) * 2022-12-09 2023-09-05 普赞加信息科技南京有限公司 Projection picture output method, system and equipment based on object module

Similar Documents

Publication Publication Date Title
CN108121986B (en) Object detection method and device, computer device and computer readable storage medium
CN109902541A (en) A kind of method and system of image recognition
US10803365B2 (en) System and method for relocalization and scene recognition
KR101832693B1 (en) Intuitive computing methods and systems
JP6011938B2 (en) Sensor-based mobile search, related methods and systems
CN108958610A (en) Special efficacy generation method, device and electronic equipment based on face
CN110163115A (en) A kind of method for processing video frequency, device and computer readable storage medium
CN110478892A (en) A kind of method and system of three-dimension interaction
CN107578023A (en) Man-machine interaction gesture identification method, apparatus and system
KR20120127655A (en) Intuitive computing methods and systems
CN109064387A (en) Image special effect generation method, device and electronic equipment
CN109214471A (en) Evaluate the method and system of the written word in copybook of practising handwriting
CN109190516A (en) A kind of static gesture identification method based on volar edge contour vectorization
CN103605986A (en) Human motion recognition method based on local features
CN103020614B (en) Based on the human motion identification method that space-time interest points detects
CN112734803B (en) Single target tracking method, device, equipment and storage medium based on character description
CN109598198A (en) The method, apparatus of gesture moving direction, medium, program and equipment for identification
CN102930270A (en) Method and system for identifying hands based on complexion detection and background elimination
CN103336967A (en) Hand motion trail detection method and apparatus
Le et al. DeepSafeDrive: A grammar-aware driver parsing approach to Driver Behavioral Situational Awareness (DB-SAW)
CN114904270A (en) Virtual content generation method and device, electronic equipment and storage medium
CN111985184A (en) Auxiliary writing font copying method, system and device based on AI vision
CN109754644A (en) A kind of teaching method and system based on augmented reality
CN108628455A (en) A kind of virtual husky picture method for drafting based on touch-screen gesture identification
CN109903391A (en) A kind of method and system for realizing scene interactivity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190618