CN111460227A - Method for making video containing limb movement, video product and using method - Google Patents

Method for making video containing limb movement, video product and using method

Info

Publication number
CN111460227A
Authority
CN
China
Prior art keywords
video
picture
expected
words
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010285371.4A
Other languages
Chinese (zh)
Inventor
赵琰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010285371.4A
Publication of CN111460227A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/7834 Retrieval characterised by using metadata automatically derived from the content, using audio features
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a method for producing a video containing body actions, a video product and a method of use. An electronic device synthesizes three different videos from a picture whose image conveys the meaning of a target word, the spoken audio of the target word, a body-action video that includes the pronunciation mouth shape, and a Chinese voice explanation. Because the three videos contain body actions, children spontaneously imitate the actions and repeat the pronunciation, so learning English words is no longer monotonous and the approach suits children's lively and active nature. Using the three expected videos alternately is scientific and reasonable: on the basis of step-by-step progress, the flash-card presentation at different speeds and the large volume of word exposure activate the right brain and bring children quickly into a learning state, while earlier material is reviewed in time, greatly improving both learning interest and learning effect.

Description

Method for making video containing limb movement, video product and using method
Technical Field
The invention relates to the technical field of internet education tools, and in particular to a method for producing a video containing body actions, a video product and a method of use.
Background
As parents pay more and more attention to their children's foreign-language learning, many children begin learning English at an early age. In the usual mode, teachers teach children in a classroom: looking at pictures and saying the English words, singing English songs, practising English greetings and so on. Although the children say and sing the English, they do not understand the meanings and are not genuinely interested, and they cannot concentrate for long, so they often fidget while reciting English.
Disclosure of Invention
In order to solve the above problems, the invention provides a method for producing a video containing body actions, a video product and a method of use. An electronic device synthesizes three different videos from a picture whose image conveys the meaning of a target word, the spoken audio of the target word, a body-action video that includes the pronunciation mouth shape, and a Chinese voice explanation. Because the three videos contain body actions, children spontaneously imitate the actions and repeat the pronunciation, so learning English words is no longer monotonous and the approach suits children's lively and active nature. Using the three expected videos alternately is scientific and reasonable: on the basis of step-by-step progress, the flash-card presentation at different speeds and the large volume of word exposure activate the right brain and bring children quickly into a learning state, while earlier knowledge is reviewed in time, greatly improving learning interest and learning effect.
The specific technical solution provided by the invention is as follows:
A method for producing a video containing limb actions comprises the following steps:
Step 1: acquiring a target word selected by a user;
Step 2: acquiring an expected picture that expresses the meaning of the target word;
Preferably, before the expected picture is acquired, the method further comprises a step 2-1:
acquiring, from an electronic device and/or a cloud database, pictures that match the meaning of the target word and taking them as initially selected pictures;
acquiring a target image selected and marked by the user in the initially selected pictures, and setting a minimum threshold A for the proportion of the target image in the whole image of a picture and a minimum threshold B for the similarity between an image in a picture and the target image.
Preferably, the method further comprises a step 2-2:
extracting a plurality of feature points from an image in a picture and comparing them with the feature points of the target image; when the similarity is greater than or equal to B, the image is a similar image; calculating the proportion of the similar image in the whole image of the picture; when the proportion of the similar image is greater than or equal to A, the picture is a candidate picture corresponding to the target word; selecting, from the candidate pictures, the candidate picture with the best feature-point similarity and similar-image proportion as the expected picture; and marking picture distinguishing information on the expected picture corresponding to each target word.
Step 3: respectively acquiring limb action videos in which a teacher expresses the meanings of the expected pictures;
Preferably, the method further comprises a step 3-1:
shooting limb action videos, including the pronunciation mouth shape, performed by the teacher for the expected pictures, cutting the limb action videos into individual segments, and marking video distinguishing information;
Step 4: acquiring the voice audio of the target word;
Preferably, the method further comprises a step 4-1:
selecting a voice stream from a foreign-language audio or foreign-language video, separating out the voice audio of each target word, and marking each with voice audio distinguishing information;
Step 5: generating an expected video that sequentially pops up, on the same screen, each expected picture together with the corresponding limb action video and voice audio.
Preferably, the method further comprises a step 5-1 of generating the first video:
opening PPT production software, arranging fifty expected pictures in sequence in blank frames according to the picture distinguishing information, matching the corresponding limb action videos containing the teacher's slow pronunciation mouth shape according to the video distinguishing information, matching the corresponding voice audio according to the voice audio distinguishing information together with the corresponding one-time Chinese voice explanation, and generating an expected video that sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape, the corresponding voice audio and the one-time Chinese voice explanation;
Preferably, the method further comprises a step 5-2 of generating the second video:
opening PPT production software, arranging fifty expected pictures in sequence in blank frames according to the picture distinguishing information, matching the corresponding limb action videos containing the teacher's slow pronunciation mouth shape according to the video distinguishing information, matching the corresponding voice audio according to the voice audio distinguishing information, and generating an expected video that sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio;
Preferably, the method further comprises a step 5-3 of generating the third video:
opening PPT production software, arranging fifty expected pictures in sequence in blank frames according to the picture distinguishing information, matching the corresponding limb action videos according to the video distinguishing information, matching the corresponding voice audio according to the voice audio distinguishing information, setting the switching speed to one expected picture per second, and generating an expected video that sequentially pops up, on the same screen, each expected picture, its limb action video and the corresponding voice audio.
It should be noted that body actions including the pronunciation mouth shape suit children's lively and active nature: while watching the pictures, students spontaneously imitate the actions and read along, so the actions become closely linked with the pictures, the pronunciation and the meanings, which greatly raises learning interest, strengthens comprehension and memory, and effectively improves the learning effect.
It should be noted that each target word is matched with one picture expressing its meaning: only pictures in which both the proportion of the target image and the similarity between the image and the target image meet the set thresholds may be selected as candidate pictures, and the best candidate picture is then chosen as the expected picture corresponding to the target word. An expected picture selected in this way has a prominent target image and no complex background, is unlikely to cause ambiguity, matches children's powers of discrimination, lets children understand at a glance when watching the video, and at the same time strengthens memory.
It should be noted that the voice audio of the target word, separated from the voice stream of a foreign-language audio or video, is matched with the corresponding expected picture; this ensures accurate pronunciation and effectively avoids the hard-to-correct confusion that inaccurate pronunciation in childhood would cause for later learning.
It should be noted that the picture distinguishing information includes positioning and matching information such as the word meaning, serial number and time of the expected picture; the video distinguishing information includes positioning and matching information such as the name, meaning-explanation name, serial number and time of the limb action video; and the voice audio distinguishing information includes positioning and matching information such as the name, serial number and time of the voice audio.
Studying the three expected videos alternately is scientific and reasonable: on the basis of step-by-step progress, the flash-card presentation at different speeds and the large volume of word exposure activate the right brain and bring children quickly into a learning state, while earlier knowledge is reviewed in time, greatly improving learning interest and learning effect.
The invention also provides a video product containing limb actions obtained by the above production method, comprising: a first video, which comprises fifty expected pictures and sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape, the corresponding voice audio and a one-time Chinese voice explanation; while each expected picture in the video is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times;
a second video, which comprises fifty expected pictures and sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio; while each expected picture is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times;
and a third video, which comprises fifty expected pictures, switches at a speed of one expected picture per second, and sequentially pops up, on the same screen, each expected picture, its limb action video and the corresponding voice audio.
The invention also provides a method of using the above video product, wherein a first group of words, a second group of words and a third group of words are provided, each group comprising fifty different words, and each group of words forms a first video, a second video and a third video respectively. The method comprises the following steps:
In the first week, viewing the videos formed by the first group of words:
watching the first video once on Monday;
watching the second video once on Tuesday;
watching the third video three times a day from Wednesday to Sunday, playing it three times in a loop each time, with an interval of not less than 1 hour between viewings;
In the second week:
watching the first video formed by the second group of words once on Monday;
watching the second video formed by the second group of words once on Tuesday;
watching the third video formed by the second group of words three times a day from Wednesday to Sunday, playing it three times in a loop each time, with an interval of not less than 1 hour between viewings; then watching the third video formed by the first group of words a further three times;
In the third week:
watching the first video formed by the third group of words once on Monday;
watching the second video formed by the third group of words once on Tuesday;
watching the third video formed by the third group of words three times a day from Wednesday to Sunday, playing it three times in a loop each time, with an interval of not less than 1 hour between viewings; then playing the third video formed by the first group of words three more times and the third video formed by the second group of words three more times.
It should be noted that this method of using videos containing limb actions suits children's lively and active nature: while watching the pictures, students spontaneously imitate the limb actions and read along, which greatly raises learning interest. Arranging the first, second and third videos alternately during learning is scientific and reasonable, and lets children review earlier knowledge in time while progressing step by step, greatly improving learning interest and learning effect.
The beneficial effects of the invention are as follows:
The invention provides a method for producing a video containing body actions, a video product and a method of use. An electronic device synthesizes three different videos from a picture whose image conveys the meaning of a target word, the spoken audio of the target word, body actions that include the pronunciation mouth shape, and a Chinese voice explanation. Because the three videos contain body actions, children spontaneously imitate the actions and repeat the pronunciation, so learning English words is no longer monotonous and the approach suits children's lively and active nature. Using the three expected videos alternately is scientific and reasonable: on the basis of step-by-step progress, the flash-card presentation at different speeds and the large volume of word exposure activate the right brain and bring children quickly into a learning state, while earlier knowledge is reviewed in time, greatly improving learning interest and learning effect.
Drawings
FIG. 1 is a flow chart of video production according to an embodiment of the present invention;
FIG. 2 is a flow chart of selecting an expected picture according to an embodiment of the present invention;
FIG. 3 is a flow chart of limb action video production according to an embodiment of the present invention;
FIG. 4 is a flow chart of voice audio production according to an embodiment of the present invention;
FIG. 5 is a flow chart of generating a first video according to an embodiment of the present invention;
FIG. 6 is a flow chart of generating a second video according to an embodiment of the present invention;
FIG. 7 is a flow chart of generating a third video according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a video product according to an embodiment of the present invention;
FIG. 9 is a flow chart of a method for using a video product according to an embodiment of the present invention.
Wherein: 1 - terminal device; 10 - PPT template providing module; 11 - video acquisition module; 12 - setting module; 13 - display screen; 14 - first communication module;
2 - server; 20 - processor; 21 - memory; 22 - second communication module;
200 - graphics processing module; 201 - audio processing module; 202 - video production module.
Detailed Description
As used in the specification and in the claims, certain terms are used to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names; this specification and the claims do not distinguish between components that differ in name but not in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and should therefore be interpreted to mean "including, but not limited to". The description that follows presents preferred embodiments of the present application, but it is given to illustrate the general principles of the application rather than to limit its scope. The protection scope of the present application shall be subject to the definitions of the appended claims.
As shown in FIGS. 1 to 7, the first embodiment of the present invention:
The invention provides a method for producing a video containing limb actions, which is applied to an electronic device 1 and comprises the following steps:
Step 1: fifty target words selected by the user, each of which can express a meaning, are transmitted to the server 2 through the first communication module 14 and stored in the memory 21; the processor 20 then retrieves the fifty target words from the memory 21.
Step 2: the graphics processing module 200 searches for pictures containing an image that matches the meaning of each target word, selects an expected picture from the picture set, and marks the expected picture with picture distinguishing information comprising its word meaning, serial number and time;
Preferably, before the expected picture is acquired, the method further comprises a step 2-1:
the graphics processing module 200 identifies, from the electronic device and/or the cloud database, pictures containing images that match each target word, obtaining the pictures respectively matched with the target words;
the graphics processing module 200 classifies each set of pictures as the initially selected pictures corresponding to that target word.
A target image selected and marked by the user in the initially selected pictures is acquired; a minimum threshold A for the proportion of the target image in the whole image of a picture and a minimum threshold B for the similarity between an image in a picture and the target image are set through the setting module 12; the target image, the threshold A and the threshold B are transmitted to the server 2 through the first communication module 14 and stored in the memory 21, from which the graphics processing module 200 obtains them.
Preferably, the method further comprises a step 2-2:
the graphics processing module 200 extracts a plurality of feature points from an image in a picture and compares them with the feature points of the target image; when the similarity is greater than or equal to B, the image is a similar image; the graphics processing module 200 then calculates the proportion of the similar image in the whole image of the picture, and when that proportion is greater than or equal to A, the picture is a candidate picture corresponding to the target word; the graphics processing module 200 selects the candidate picture with the best feature-point similarity and similar-image proportion as the expected picture corresponding to the target word, and every expected picture is marked with picture distinguishing information that includes its literal meaning.
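As an illustration only, the picture selection of step 2-2 can be sketched in a few lines of Python. The ORB feature matching, the match-ratio similarity score, the bounding-box proportion, the example threshold values and all names below are assumptions made for this sketch; the patent does not prescribe a particular feature extractor or scoring rule.

```python
# Illustrative sketch of step 2-2: score each candidate picture against the
# user-marked target image and keep the best one meeting thresholds A and B.
import cv2
import numpy as np

def similarity_and_proportion(picture_path: str, target_path: str):
    pic = cv2.imread(picture_path, cv2.IMREAD_GRAYSCALE)
    target = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=500)
    kp_p, des_p = orb.detectAndCompute(pic, None)
    kp_t, des_t = orb.detectAndCompute(target, None)
    if des_p is None or des_t is None:
        return 0.0, 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(des_t, des_p) if m.distance < 50]  # assumed cutoff
    similarity = len(good) / max(len(kp_t), 1)        # fraction of target features matched
    if not good:
        return similarity, 0.0
    pts = np.float32([kp_p[m.trainIdx].pt for m in good])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    proportion = ((x1 - x0) * (y1 - y0)) / float(pic.shape[0] * pic.shape[1])
    return similarity, proportion

def pick_expected_picture(picture_paths, target_path, A=0.3, B=0.6):
    """Keep pictures meeting both thresholds and return the best one, or None."""
    candidates = []
    for path in picture_paths:
        sim, prop = similarity_and_proportion(path, target_path)
        if sim >= B and prop >= A:                    # thresholds from step 2-1
            candidates.append((sim + prop, path))     # simple combined score
    return max(candidates)[1] if candidates else None
```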
Step 3: the video production module 202 obtains, through the video acquisition module 11, the limb action videos including the pronunciation mouth shape performed by the teacher for the expected pictures, cuts them into an individual limb action video and meaning explanation for each expected picture, and marks video distinguishing information comprising serial number and time;
Preferably, the method further comprises a step 3-1:
after the video acquisition module 11 captures the limb action videos including the pronunciation mouth shape performed by the teacher for the expected pictures and sends them to the processor 20, the video production module 202 cuts out fifty limb action videos and marks each with video distinguishing information comprising the name, meaning-explanation name, serial number and time of the limb action video;
Step 4: the audio processing module 201 separates the voice audio corresponding to each target word from a foreign-language audio or foreign-language video, obtaining the voice audio of the target word, and marks it with voice audio distinguishing information comprising the name, serial number and time of the voice audio;
Preferably, the method further comprises a step 4-1:
the audio processing module 201 processes the voice stream in the foreign-language audio or foreign-language video and separates out the voice audio of each target word, marking each with voice audio distinguishing information comprising its name, serial number and time.
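As an illustration only, separating the per-word voice audio from a longer foreign-language recording can be sketched with the pydub library, assuming the start and end time of each word in the source audio is already known; the word list, timestamps and file names below are hypothetical.

```python
# Illustrative sketch of step 4-1: cut one voice-audio clip per target word
# and record its distinguishing information (name, serial number, duration).
from pydub import AudioSegment

# Hypothetical word list: (serial number, word, start_ms, end_ms)
WORDS = [
    (1, "apple", 1200, 1900),
    (2, "banana", 4300, 5100),
]

def separate_word_audio(source_path: str, out_dir: str):
    stream = AudioSegment.from_file(source_path)
    records = []
    for serial, word, start_ms, end_ms in WORDS:
        clip = stream[start_ms:end_ms]                 # slice the voice stream
        out_path = f"{out_dir}/{serial:03d}_{word}.wav"
        clip.export(out_path, format="wav")
        records.append({                               # voice audio distinguishing information
            "name": word,
            "serial": serial,
            "duration_ms": len(clip),
            "path": out_path,
        })
    return records
```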
Step 5: a PPT file is opened, and the video production module 202 arranges the fifty expected pictures in sequence in blank frames according to the picture distinguishing information, matches the corresponding limb action videos side by side in the same frame, and inserts the corresponding voice audio, generating the expected video.
As shown in FIG. 5, an embodiment of generating the first video:
a PPT file providing fifty blank frames is opened through the PPT template providing module 10; the video production module 202 arranges the fifty expected pictures in sequence in the blank frames according to the picture distinguishing information, matches the corresponding limb action videos containing the teacher's slow pronunciation mouth shape according to the video distinguishing information, and matches the corresponding voice audio, together with the corresponding one-time Chinese voice explanation, according to the voice audio distinguishing information, generating an expected video that sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape, the corresponding voice audio and the one-time Chinese voice explanation. While each expected picture in the expected video is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times; the total duration is 21 minutes.
The generated video is stored in the memory 21 and is transmitted to the display 13 of the terminal device 1 through the second communication module 22 for playing.
As shown in FIG. 6, an embodiment of generating the second video:
a PPT file providing fifty blank frames is opened through the PPT template providing module 10; the video production module 202 arranges the fifty expected pictures in sequence in the blank frames according to the picture distinguishing information, matches the corresponding limb action videos containing the teacher's slow pronunciation mouth shape according to the video distinguishing information, and matches the corresponding voice audio according to the voice audio distinguishing information, generating an expected video that sequentially pops up each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio. While each expected picture in the video is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times; the total playing duration of the second video is 9 minutes.
The generated PPT video is stored in the memory 21 and is sent to the display 13 of the terminal device 1 through the second communication module 22 for playing.
As shown in FIG. 7, an embodiment of generating the third video:
a PPT file providing fifty blank frames is opened through the PPT template providing module 10; the video production module 202 arranges the fifty expected pictures in sequence in the blank frames according to the picture distinguishing information, matches the corresponding limb action videos according to the video distinguishing information, matches the corresponding voice audio according to the voice audio distinguishing information, and sets the switching speed to one expected picture per second, generating an expected video that sequentially pops up, on the same screen, each expected picture, its limb action video and the corresponding voice audio; the third video plays for 1 minute.
The generated PPT video is stored in the memory 21 and is sent to the display 13 of the terminal device 1 through the second communication module 22 for playing.
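As an illustration only, the flash-card composition of the third video (one expected picture per second, each with its voice audio) could be sketched with the moviepy library. The one-second pacing follows the embodiment above; the library choice, file layout and function names are assumptions, and the embodiment itself uses PPT software rather than this code.

```python
# Illustrative sketch: compose a flash-card video showing each expected
# picture for one second together with its voice audio, in serial order.
from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

def build_third_video(picture_paths, audio_paths, out_path="third_video.mp4"):
    clips = []
    for pic, aud in zip(picture_paths, audio_paths):   # assumed matched by serial order
        clip = ImageClip(pic).set_duration(1)          # one expected picture per second
        clip = clip.set_audio(AudioFileClip(aud).set_duration(1))
        clips.append(clip)
    video = concatenate_videoclips(clips, method="compose")
    video.write_videofile(out_path, fps=24)            # total length scales with picture count

# Hypothetical usage with fifty pictures and fifty word audios:
# build_third_video([f"pic_{i:02d}.png" for i in range(1, 51)],
#                   [f"word_{i:02d}.wav" for i in range(1, 51)])
```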
It should be noted that body actions including the pronunciation mouth shape suit children's lively and active nature: while watching the pictures, students spontaneously imitate the actions and read along, so the actions become closely linked with the pictures, the pronunciation and the meanings, which greatly raises learning interest, strengthens comprehension and memory, and effectively improves the learning effect.
It should be noted that each target word is matched with one picture: only pictures in which both the proportion of the target image and the similarity between the image and the target image meet the set thresholds may be selected as candidate pictures, and the best candidate picture is then chosen as the expected picture corresponding to the target word. An expected picture selected in this way has a prominent target image and no complex background, is unlikely to cause ambiguity, matches children's powers of discrimination, lets children understand at a glance when watching the video, and at the same time strengthens memory.
It should be noted that the voice audio of the target word, separated from the voice stream of a foreign-language audio or video, is matched with the corresponding expected picture; this ensures accurate pronunciation and effectively avoids the hard-to-correct confusion that inaccurate pronunciation in childhood would cause for later learning.
It should be noted that the picture distinguishing information includes positioning and matching information such as the word meaning, serial number and time of the expected picture; the video distinguishing information includes positioning and matching information such as the name, meaning-explanation name, serial number and time of the limb action video; and the voice audio distinguishing information includes positioning and matching information such as the name, serial number and time of the voice audio.
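As an illustration only, the three kinds of distinguishing information can be viewed as small metadata records keyed by serial number, which is what allows the production module to pair each expected picture with its limb action video and voice audio; the field and type names below are assumptions based on the fields listed above.

```python
# Illustrative sketch: distinguishing information as records, and matching
# picture / limb action video / voice audio by serial number.
from dataclasses import dataclass

@dataclass
class PictureInfo:
    serial: int
    word_meaning: str
    time_s: float
    path: str

@dataclass
class ActionVideoInfo:
    serial: int
    name: str
    meaning_explanation: str
    time_s: float
    path: str

@dataclass
class VoiceAudioInfo:
    serial: int
    name: str
    time_s: float
    path: str

def match_assets(pictures, videos, audios):
    """Group the three kinds of assets by serial number, ready for composition."""
    videos_by_serial = {v.serial: v for v in videos}
    audios_by_serial = {a.serial: a for a in audios}
    return [(p, videos_by_serial[p.serial], audios_by_serial[p.serial])
            for p in sorted(pictures, key=lambda p: p.serial)]
```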
It should be noted that the first, second and third videos are generated to match the step-by-step way in which children need to learn English words, so that children's lively and active nature is respected while they progress gradually, greatly raising learning interest and effectively improving the learning effect.
As shown in FIG. 8, the present invention also provides another embodiment:
the invention also provides a video product containing limb actions obtained by the above production method, comprising: a first video, which comprises fifty expected pictures and sequentially pops up, on the same screen, each expected picture, the limb action video, the corresponding voice audio and a one-time Chinese voice explanation; while each expected picture is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times; the total duration of the video is 21 minutes.
A second video, which comprises fifty expected pictures and sequentially pops up each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio; while each expected picture in the video is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times; the total duration of the video is 9 minutes.
And a third video, which comprises fifty expected pictures, switches at a speed of one expected picture per second, and sequentially pops up, on the same screen, each expected picture, its limb action video and the corresponding voice audio; the total duration of the video is 1 minute.
As shown in FIG. 9, the present invention also provides another embodiment:
the invention also provides a method of using the video product, comprising the following steps:
a first group, a second group and a third group of words are provided, each comprising fifty different words, and each group forms a first video, a second video and a third video respectively through the video production module 202; the first video lasts 21 minutes, the second video 9 minutes and the third video 1 minute.
The using steps comprise:
In the first week:
on Monday, watching the first video formed by the first group of words once, taking 21 minutes;
on Tuesday, watching the second video formed by the first group of words once, taking 9 minutes;
from Wednesday to Sunday, watching the third video formed by the first group of words three times a day, playing it three times in a loop each time with an interval of not less than 1 hour between viewings, 45 minutes in total;
In the second week:
on Monday, watching the first video formed by the second group of words once, taking 21 minutes;
on Tuesday, watching the second video formed by the second group of words once, taking 9 minutes;
from Wednesday to Sunday, watching the third video formed by the second group of words three times a day, playing it three times in a loop each time with an interval of not less than 1 hour between viewings, 45 minutes; then watching the third video formed by the first group of words a further three times, taking 3 minutes; 48 minutes in total;
In the third week:
on Monday, watching the first video formed by the third group of words once, taking 21 minutes;
on Tuesday, watching the second video formed by the third group of words once, taking 9 minutes;
from Wednesday to Sunday, watching the third video formed by the third group of words three times a day, playing it three times in a loop each time with an interval of not less than 1 hour between viewings, 45 minutes; then watching the third video formed by the first group of words a further three times, taking 3 minutes, and the third video formed by the second group of words a further three times, taking 3 minutes; 51 minutes in total.
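As a worked check only, the weekly totals quoted above follow from the stated single-play durations (21, 9 and 1 minutes) and viewing counts; the short sketch below simply reproduces that arithmetic.

```python
# Worked arithmetic for the third-video portion of each week:
# 3 viewings a day x 3 loops x 1 minute x 5 days (Wednesday to Sunday) = 45.
THIRD = 1                                  # minutes per single play of the third video
viewings_per_day, loops, days = 3, 3, 5

week1_third = viewings_per_day * loops * THIRD * days        # 45 minutes
week2_third = week1_third + 3 * THIRD                        # 45 + 3 = 48 minutes
week3_third = week1_third + 3 * THIRD + 3 * THIRD            # 45 + 6 = 51 minutes

print(week1_third, week2_third, week3_third)                 # 45 48 51
```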
It should be noted that 120 children aged 4 to 6 with no English foundation studied the 150 words for 21 days: 96% of the children learned all one hundred and fifty words and could say the English words while doing the body actions as they watched the pictures; 4% of the children learned more than 137 words, could say the English words while doing the body actions as they watched the pictures, and pronounced them correctly.
It should be noted that this method of using videos containing limb actions arranges the first, second and third videos alternately during learning, in line with the step-by-step way children learn English words. This is scientific and reasonable: it respects children's lively and active nature while they progress gradually, lets students spontaneously imitate the limb actions and read along while watching the pictures, and reviews earlier knowledge in time, greatly improving learning interest and learning effect.
The foregoing describes several preferred embodiments of the present application. As noted above, however, the application is not limited to the forms disclosed herein and is not to be construed as excluding other embodiments; it may be used in various other combinations, modifications and environments, and may be changed within the scope of the inventive concept described herein in accordance with the above teachings or with the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the application shall fall within the protection scope of the appended claims.

Claims (10)

1. A method for producing a video containing limb actions, characterized by comprising the following steps:
Step 1: acquiring a target word selected by a user;
Step 2: acquiring an expected picture expressing the meaning of the target word;
Step 3: respectively acquiring limb action videos in which a teacher expresses the meanings of the expected pictures;
Step 4: acquiring the voice audio of the target word;
Step 5: generating an expected video that sequentially pops up, on the same screen, each expected picture together with the corresponding limb action video and voice audio.
2. The method for producing a video containing limb actions according to claim 1, characterized by further comprising, before the expected picture is acquired, a step 2-1:
acquiring, from an electronic device and/or a cloud database, pictures that match the meaning of the target word and taking them as initially selected pictures;
acquiring a target image selected and marked by the user in the initially selected pictures, and setting a minimum threshold A for the proportion of the target image in the whole image of a picture and a minimum threshold B for the similarity between an image in a picture and the target image.
3. The method for producing a video containing limb actions according to claim 2, characterized by further comprising a step 2-2:
extracting a plurality of feature points from an image in a picture and comparing them with the feature points of the target image; when the similarity is greater than or equal to B, the image is a similar image; calculating the proportion of the similar image in the whole image of the picture; when the proportion of the similar image is greater than or equal to A, the picture is a candidate picture corresponding to the target word; selecting, from the candidate pictures, the candidate picture with the best feature-point similarity and similar-image proportion as the expected picture; and marking picture distinguishing information on the expected picture corresponding to each target word.
4. The method for producing a video containing limb actions according to claim 3, characterized by further comprising a step 3-1:
shooting limb action videos, including the pronunciation mouth shape, performed by the teacher for the expected pictures, cutting the limb action videos into individual segments, and marking video distinguishing information.
5. The method for producing a video containing limb actions according to claim 4, characterized by further comprising a step 4-1:
selecting a voice stream from a foreign-language audio or foreign-language video, separating out the voice audio of the target word, and marking each with voice audio distinguishing information.
6. The method for producing a video containing limb actions according to claim 5, characterized by further comprising a step 5-1 of generating a first video:
opening PPT production software, arranging fifty expected pictures in sequence in blank frames according to the picture distinguishing information, matching the corresponding limb action videos containing the teacher's slow pronunciation mouth shape according to the video distinguishing information, matching the corresponding voice audio according to the voice audio distinguishing information together with the corresponding one-time Chinese voice explanation, and generating an expected video that sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape, the corresponding voice audio and the one-time Chinese voice explanation.
7. The method for producing a video containing limb actions according to claim 5, characterized by further comprising a step 5-2 of generating a second video:
opening PPT production software, arranging fifty expected pictures in sequence in blank frames according to the picture distinguishing information, matching the corresponding limb action videos containing the teacher's slow pronunciation mouth shape according to the video distinguishing information, matching the corresponding voice audio according to the voice audio distinguishing information, and generating an expected video that sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio.
8. The method for producing a video containing limb actions according to claim 5, characterized by further comprising a step 5-3 of generating a third video:
opening PPT production software, arranging fifty expected pictures in sequence in blank frames according to the picture distinguishing information, matching the corresponding limb action videos according to the video distinguishing information, matching the corresponding voice audio according to the voice audio distinguishing information, setting the switching speed to one expected picture per second, and generating an expected video that sequentially pops up, on the same screen, each expected picture, its limb action video and the corresponding voice audio.
9. A video product containing limb actions obtained using the production method according to any one of claims 1 to 8, comprising: a first video, which comprises fifty expected pictures and sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape, the corresponding voice audio and a one-time Chinese voice explanation, wherein, while each expected picture in the video is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times;
a second video, which comprises fifty expected pictures and sequentially pops up, on the same screen, each expected picture, the limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio, wherein, while each expected picture in the video is displayed, the corresponding limb action video containing the teacher's slow pronunciation mouth shape and the corresponding voice audio may be played 1 to 3 times;
and a third video, which comprises fifty expected pictures, switches at a speed of one expected picture per second, and sequentially pops up, on the same screen, each expected picture, its limb action video and the corresponding voice audio.
10. A method of using the video product according to claim 9, characterized in that a first group of words, a second group of words and a third group of words are provided, each group comprising fifty different words, and each group of words forms a first video, a second video and a third video respectively, the using steps comprising:
In the first week, viewing the videos formed by the first group of words:
watching the first video once on Monday;
watching the second video once on Tuesday;
watching the third video three times a day from Wednesday to Sunday, playing it three times in a loop each time, with an interval of not less than 1 hour between viewings;
In the second week:
watching the first video formed by the second group of words once on Monday;
watching the second video formed by the second group of words once on Tuesday;
watching the third video formed by the second group of words three times a day from Wednesday to Sunday, playing it three times in a loop each time, with an interval of not less than 1 hour between viewings; then watching the third video formed by the first group of words a further three times;
In the third week:
watching the first video formed by the third group of words once on Monday;
watching the second video formed by the third group of words once on Tuesday;
watching the third video formed by the third group of words three times a day from Wednesday to Sunday, playing it three times in a loop each time, with an interval of not less than 1 hour between viewings; then playing the third video formed by the first group of words three more times and the third video formed by the second group of words three more times.
CN202010285371.4A 2020-04-13 2020-04-13 Method for making video containing limb movement, video product and using method Pending CN111460227A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010285371.4A CN111460227A (en) 2020-04-13 2020-04-13 Method for making video containing limb movement, video product and using method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010285371.4A CN111460227A (en) 2020-04-13 2020-04-13 Method for making video containing limb movement, video product and using method

Publications (1)

Publication Number Publication Date
CN111460227A true CN111460227A (en) 2020-07-28

Family

ID=71681752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010285371.4A Pending CN111460227A (en) 2020-04-13 2020-04-13 Method for making video containing limb movement, video product and using method

Country Status (1)

Country Link
CN (1) CN111460227A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080267503A1 (en) * 2007-04-26 2008-10-30 Fuji Xerox Co., Ltd. Increasing Retrieval Performance of Images by Providing Relevance Feedback on Word Images Contained in the Images
CN101958060A (en) * 2009-08-28 2011-01-26 陈美含 English spelling instant technical tool
US20110053123A1 (en) * 2009-08-31 2011-03-03 Christopher John Lonsdale Method for teaching language pronunciation and spelling
CN103080991A (en) * 2010-10-07 2013-05-01 阿姆司教育株式会社 Music-based language-learning method, and learning device using same
CN102169642A (en) * 2011-04-06 2011-08-31 李一波 Interactive virtual teacher system having intelligent error correction function
CN106961559A (en) * 2017-03-20 2017-07-18 维沃移动通信有限公司 The preparation method and electronic equipment of a kind of video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538987A (en) * 2021-08-12 2021-10-22 郑州趣听说教育科技有限公司 Immersive English learning method and system
CN113538987B (en) * 2021-08-12 2023-02-24 郑州趣听说教育科技有限公司 Immersive English learning method and system

Similar Documents

Publication Publication Date Title
Afidah et al. INVESTIGATING STUDENTS' PERSPECTIVES ON THE USE OF TIKTOK AS AN INSTRUCTIONAL MEDIA IN DISTANCE LEARNING DURING PANDEMIC ERA
Yildirim et al. Exploring the Value of Animated Stories with Young English Language Learners.
CN112887790A (en) Method for fast interacting and playing video
CN111460220A (en) Method for making word flash card video and video product
CN111460227A (en) Method for making video containing limb movement, video product and using method
Su et al. Using subtitled animated cartoons and textbook-based CDs to test elementary students’ English listening and reading comprehension in a cram school
Şenel The semiotic approach and language teaching and learning
Григоренко et al. Using video in the process of teaching a foreign language at a university
US11941998B2 (en) Method for memorizing foreign words
Imama et al. Designing Stop Motion Video Using Learning Style Approach to Teach Vocabulary at 4th Grade SD Muhammadiyah Purwodiningratan II in the Academic Year 2015/2016
Nazar et al. The Effectiveness of the Use of Cartoons in Teaching English to the Children of Grade 5: An Experimental Study
Kulmagambetova et al. Video as a means of generating lexical skills in an English lesson
Minalla Enhancing Young EFL Learners' Vocabulary Learning Through Contextualizing Animated Videos
Rybalka et al. TV series as a modern didactic tool for foreign language teaching
Nakplad et al. Using an Animation Movie to Develop Ability of Stress in English of Primary School Students
Wagner Design in the educational film
Zamzami et al. Designing Teaching Activity
Miftakhova et al. Principles of working with audio-visual materials in course of Russian as a Foreign language at the beginning stage of teaching
Kraiova et al. Use of authentic video materials in teaching listening
Guo et al. Fostering the Learning of English Idioms by Setting Children within a Virtual Environment
Qizi Utilizing cartoons to teach lexicon to efl learners
Chobthamdee et al. Creating Instructional Media through Video Learning on Social Media Platforms for Graduate Diploma in Teaching Profession Students
Huong et al. An Investigation into English–Majored Sophomores’ Perceptions of English Movies to Improve Vocabulary at HUIT
Nurzhanovna Peculiarities of using animated films as a way to develop speaking skills at foreign language classes (at primary school)
Uchkunovich LANGUAGE TEACHING TECHNIQUES

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination