CN110248235B - Software teaching method, device, terminal equipment and medium - Google Patents

Info

Publication number
CN110248235B
CN110248235B (grant of application CN201910560226.XA)
Authority
CN
China
Prior art keywords
image
target
target video
information
video
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201910560226.XA
Other languages
Chinese (zh)
Other versions
CN110248235A (en)
Inventor
刘均 (Liu Jun)
李向煜 (Li Xiangyu)
Current Assignee (the listed assignee may be inaccurate)
Golo Iov Data Technology Co., Ltd.
Original Assignee
Golo Iov Data Technology Co., Ltd.
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Golo Iov Data Technology Co., Ltd.
Priority claimed from application CN201910560226.XA
Publication of CN110248235A (application publication)
Application granted
Publication of CN110248235B (grant publication)
Legal status: Active
Anticipated expiration

Classifications

    All four classifications fall under H (Electricity), H04 (Electric communication technique), H04N (Pictorial communication, e.g. television), H04N 21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]):
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4438: Window management, e.g. event handling following interaction with the user interface
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/8547: Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of video playing and provides a software teaching method, an apparatus, a terminal device, and a medium. The software teaching method splits the interface displaying a target video into a display area and an operation area, loads a corresponding target simulation operation page in the operation area, then acquires operation information through that page and generates a simulation operation image. While the target video plays, the user can thus simulate the operations shown in the video within the operation area, which provides an interaction scheme with a wider application range and a better interaction effect for video playback.

Description

Software teaching method, device, terminal equipment and medium
Technical Field
The application belongs to the technical field of video playing, and particularly relates to a software teaching method, a software teaching device, terminal equipment and a computer readable storage medium.
Background
With the increasing popularity of mobile terminals and the continuous development of the internet industry, the number of applications running on terminals keeps growing. When using a terminal, a user can directly acquire resources from the internet through an application, such as text resources, audio resources, or video resources. Because internet resources are easy to acquire, many resource-oriented applications add functions for interacting with users, for example inserting product links or questionnaires during video playback.
However, existing interaction methods only display fixed related content on the resource's display page, for example adding elements such as link animations to a video resource, or adding content such as questionnaires for the user to click. For video resources that teach operation skills, interaction cannot be achieved through text or other such elements, so the application range of existing interaction methods is narrow.
Disclosure of Invention
In view of this, embodiments of the present application provide a software teaching method, an apparatus, a terminal device, and a computer-readable storage medium, so as to solve the problem that existing interaction methods have a narrow application range.
A first aspect of an embodiment of the present application provides a software teaching method, including:
if a preset instruction for performing simulation operation on the content of a target video is detected, performing screen splitting operation on an interface displaying the target video to obtain a display area for playing the target video and an operation area for performing simulation operation;
acquiring image frame information of the target video in the display area, and loading a corresponding target simulation operation page in the operation area based on the image frame information;
and acquiring operation information through the target simulation operation page, and generating a simulation operation image according to the operation information.
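The three claimed steps can be summarized as a minimal Python skeleton. The helper names (`on_preset_instruction`, `load_simulation_page`, `generate_simulation_image`) and the string/dict stand-ins for windows, frames, and images are illustrative assumptions, not part of the patent text.

```python
from dataclasses import dataclass


@dataclass
class Areas:
    display: str    # area that keeps playing the target video
    operation: str  # area that hosts the simulation page


def on_preset_instruction(interface: str) -> Areas:
    """Step 1: split the interface showing the target video into two areas."""
    return Areas(display=f"{interface}/display", operation=f"{interface}/operation")


def load_simulation_page(frame_info: dict, areas: Areas) -> str:
    """Step 2: choose the target simulation page from the image frame information."""
    return f"page-for-{frame_info['device']}"


def generate_simulation_image(page: str, operations: list) -> dict:
    """Step 3: record the user's operations on the page as a simulation image."""
    return {"page": page, "operations": list(operations)}
```

The later paragraphs refine step 2 with two concrete ways of resolving `frame_info` to a page: feature-point matching and timestamp lookup.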
Further, the acquiring image frame information of the target video in the display area and loading a corresponding target simulation operation page in the operation area based on the image frame information includes:
acquiring an image frame of the target video, and extracting a region to be identified from the image frame according to a preset image extraction strategy;
determining a plurality of feature points from the area to be identified;
determining a target image file of the equipment to be displayed from a preset database based on the plurality of feature points;
and loading the target image file in the operation area to obtain the target simulation operation page.
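A minimal sketch of this flow, assuming a concrete "preset image extraction strategy" (a fixed crop rectangle) and a preset set of feature-point coordinates; the text specifies neither, so both are illustrative. Frames are plain nested lists of RGB tuples.

```python
# Assumed extraction strategy: a fixed crop rectangle (top, left, bottom, right).
CROP = (1, 1, 3, 3)

# Assumed preset coordinate parameters: sampling at fixed positions keeps the
# feature points stable for the same region to be identified.
PRESET_COORDS = [(0, 0), (1, 1)]


def extract_region(frame):
    """Extract the region to be identified from an image frame."""
    top, left, bottom, right = CROP
    return [row[left:right] for row in frame[top:bottom]]


def feature_points(region):
    """Determine the feature points (pixels) at the preset coordinates."""
    return [region[r][c] for (r, c) in PRESET_COORDS]
```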
Further, the feature points carry sorting labels;
the determining a target image file of the device to be displayed from a preset database based on the plurality of feature points comprises:
acquiring a three-primary-color RGB value of each feature point;
sorting the RGB values of each feature point according to the sorting labels to obtain an RGB value set;
determining a target image file of the equipment to be displayed from a preset database according to the RGB value set; and the information in the preset database is used for describing the corresponding relation among the RGB value set, the information of the equipment to be displayed and the target image file.
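The RGB-set lookup can be sketched as follows. Each feature point carries a sorting label, the RGB values are ordered by label into a tuple, and that tuple keys the preset database mapping to (device information, target image file). The database contents, device name, and file name are invented for illustration.

```python
def rgb_key(points):
    """points: list of (sort_label, (r, g, b)); returns the ordered RGB value set."""
    return tuple(rgb for _, rgb in sorted(points, key=lambda p: p[0]))


# Assumed preset database: RGB value set -> (device to be displayed, image file).
PRESET_DB = {
    ((255, 0, 0), (0, 255, 0)): ("oscilloscope", "oscilloscope_panel.png"),
}


def lookup_image_file(points):
    """Determine the target image file from the preset database by RGB value set."""
    return PRESET_DB[rgb_key(points)]
```

Because the feature points, and hence the RGB value set, are fixed for a given region, the dictionary key is deterministic, which is exactly the uniqueness property the text relies on.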
Further, the acquiring image frame information of the target video in the display area and loading a corresponding target simulation operation page in the operation area based on the image frame information includes:
acquiring an image frame time stamp of the target video;
determining a target image file of the equipment to be displayed from a preset database according to the image frame timestamp; the information in the preset database is used for describing the corresponding relation among the image frame timestamp, the information of the equipment to be displayed and the target image file;
and loading the target image file in the operation area to obtain the target simulation operation page.
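A sketch of the timestamp variant, with an invented preset database of (time period, device information, target image file) rows; the row contents are assumptions.

```python
# Assumed preset database: each row records the time period (in seconds) during
# which a device to be displayed appears, plus its information and image file.
PRESET_DB = [
    (0.0, 30.0, "multimeter", "multimeter.png"),
    (30.0, 90.0, "diagnostic tablet", "tablet.png"),
]


def image_file_for_timestamp(ts):
    """Determine the target image file whose time period contains the timestamp."""
    for start, end, device, image_file in PRESET_DB:
        if start <= ts < end:
            return device, image_file
    return None  # no device to be displayed at this timestamp
```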
Further, before the step of obtaining the image frame time stamp of the target video, the method further includes:
acquiring information of all devices to be displayed in a target video and a time period of each device to be displayed appearing in the target video;
storing the information, the time period and the target image file of each device to be displayed in a preset database in an associated manner; and the target image file is an image file of the equipment to be displayed.
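The pre-configuration step can be sketched as building that database up front; the tuple layout `(info, (start, end), image_file)` is an assumption, not prescribed by the text.

```python
def build_preset_db(devices):
    """Associate each device's information, appearance period, and image file.

    devices: iterable of (info, (start_s, end_s), image_file) tuples.
    """
    return {info: {"period": period, "image_file": f} for info, period, f in devices}
```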
Further, after the step of obtaining the operation information through the target simulation operation page and generating the simulation operation image according to the operation information, the method further includes:
comparing the simulated operation image with a reference image displayed in the display area;
if the simulated operation image matches the reference image, determining that the operation represented by the simulated operation image is a correct operation;
and if the simulated operation image does not match the reference image, determining that the operation represented by the simulated operation image is an erroneous operation.
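The text does not define how "matching" is computed; one plausible sketch compares equally sized images pixel by pixel against an assumed similarity threshold. Images are represented as flat pixel lists.

```python
MATCH_THRESHOLD = 0.95  # assumed; not specified in the text


def is_correct_operation(sim, ref):
    """True if the simulated operation image matches the reference image."""
    if len(sim) != len(ref):
        return False
    same = sum(1 for s, r in zip(sim, ref) if s == r)
    return same / len(ref) >= MATCH_THRESHOLD
```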
Further, after the step of determining that the operation represented by the simulated operation image is an erroneous operation if the simulated operation image does not match the reference image, the method further includes:
recording the number of times the operation represented by the simulated operation image is determined to be erroneous;
when the number of times reaches a preset threshold, determining a difference characteristic region between the simulated operation image and the reference image;
marking the difference characteristic region in the simulated operation image to obtain a guide image;
and displaying the guide image.
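A sketch of this guidance flow, with an assumed threshold of three errors and images again represented as flat pixel lists; the marking here simply lists the differing pixel indices.

```python
PRESET_THRESHOLD = 3  # assumed number of erroneous attempts before guidance


class GuidanceTracker:
    """Counts erroneous operations and, at the threshold, produces a guide image."""

    def __init__(self):
        self.errors = 0

    def record_error(self, sim, ref):
        self.errors += 1
        if self.errors >= PRESET_THRESHOLD:
            # Difference characteristic region: positions where the images differ.
            diff = [i for i, (s, r) in enumerate(zip(sim, ref)) if s != r]
            return {"image": sim, "marked": diff}  # guide image for display
        return None
```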
A second aspect of an embodiment of the present application provides a software teaching apparatus, including:
the screen splitting unit is used for performing screen splitting operation on an interface displaying the target video if a preset instruction for performing simulation operation on the content of the target video is detected, so that a display area for playing the target video and an operation area for performing simulation operation are obtained;
the first loading unit is used for acquiring image frame information of the target video in the display area and loading a corresponding target simulation operation page in the operation area based on the image frame information;
and the first image generation unit is used for acquiring operation information through the target simulation operation page and generating a simulation operation image according to the operation information.
Further, the first loading unit includes: the device comprises an area extracting unit, a first determining unit, a second determining unit and a second loading unit. Specifically, the method comprises the following steps:
the region extraction unit is used for acquiring the image frame of the target video and extracting a region to be identified from the image frame according to a preset image extraction strategy;
the first determining unit is used for determining a plurality of characteristic points from the area to be identified;
the second determining unit is used for determining a target image file of the equipment to be displayed from a preset database based on the plurality of feature points;
and the second loading unit is used for loading the target image file in the operation area to obtain the target simulation operation page.
Further, the characteristic points carry sorting labels; the first loading unit includes: the device comprises a first acquisition unit, a sorting unit and a third determination unit. Specifically, the method comprises the following steps:
a first acquisition unit configured to acquire a three primary color RGB value of each of the feature points;
the sorting unit is used for sorting the RGB values of each feature point according to the sorting labels to obtain an RGB value set;
the third determining unit is used for determining a target image file of the equipment to be displayed from a preset database according to the RGB value set; and the information in the preset database is used for describing the corresponding relation among the RGB value set, the information of the equipment to be displayed and the target image file.
Further, the first loading unit includes: the device comprises a second acquisition unit, a fourth determination unit and a third loading unit. Specifically, the method comprises the following steps:
the second acquisition unit is used for acquiring an image frame time stamp of the target video;
the fourth determining unit is used for determining a target image file of the equipment to be displayed from a preset database according to the image frame time stamp; the information in the preset database is used for describing the corresponding relation among the image frame timestamp, the information of the equipment to be displayed and the target image file;
and the third loading unit is used for loading the target image file in the operation area to obtain the target simulation operation page.
Further, the first loading unit further includes: a third acquisition unit and a storage unit. Specifically, the method comprises the following steps:
the third acquisition unit is used for acquiring information of all devices to be displayed in the target video and a time period of each device to be displayed appearing in the target video;
the storage unit is used for storing the information, the time period and the target image file of each device to be displayed in a preset database in an associated manner; and the target image file is the image file of the equipment to be displayed.
Further, the apparatus further comprises: the device comprises a comparison unit, a first execution unit and a second execution unit. Specifically, the method comprises the following steps:
the comparison unit is used for comparing the simulation operation image with a reference image displayed in the display area;
the first execution unit is used for determining that the operation represented by the simulated operation image is a correct operation if the simulated operation image matches the reference image;
and the second execution unit is used for determining that the operation represented by the simulated operation image is an erroneous operation if the simulated operation image does not match the reference image.
Further, the apparatus further comprises: the device comprises a recording unit, a fifth determining unit, a marking unit and a display unit. Specifically, the method comprises the following steps:
the recording unit is used for recording the times of error operation of the operation represented by the simulation operation image;
a fifth determining unit, configured to determine, when the number of times reaches a preset threshold, a difference characteristic region between the simulated operation image and the reference image;
the marking unit is used for marking the difference characteristic region in the simulation operation image to obtain a guide image;
a display unit for displaying the guide image.
A third aspect of the embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the steps of the software teaching method provided by the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program, which when executed by a processor, implements the steps of the software teaching method provided by the first aspect.
The software teaching method, the device, the terminal equipment and the computer readable storage medium provided by the embodiment of the application have the following beneficial effects:
according to the software teaching method, the display area and the operation area are obtained by performing split screen operation on the interface displaying the target video, the corresponding target simulation operation page is loaded in the operation area, operation information is further obtained through the target simulation operation page, and the simulation operation image is generated, so that in the process of playing the target video, the content of the target video can be simulated through the operation area, and an interaction scheme with a larger application range and a better interaction effect is provided for the video resource playing process.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an implementation of a software teaching method provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating an implementation of a software teaching method according to another embodiment of the present application;
FIG. 3 is a flowchart illustrating an implementation of a software teaching method according to yet another embodiment of the present application;
fig. 4 is a block diagram of a software teaching apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of a terminal device according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to illustrate the application, not to limit it.
The scheme of the application provides an environment for simulating operation of video content, and can be applied to software teaching scenes.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a software teaching method according to an embodiment of the present disclosure. In this embodiment, the execution body of the software teaching method is a terminal. The terminal may obtain the video resource by accessing a server through a browser, by accessing a server through an application installed on the terminal, or by reading the resource from its own storage.
The software teaching method as shown in fig. 1 includes the following steps:
s11: if a preset instruction for performing simulation operation on the content of the target video is detected, performing screen splitting operation on an interface displaying the target video to obtain a display area for playing the target video and an operation area for performing simulation operation.
In step S11, the target video may be a video stored in the terminal, or a video acquired by the terminal accessing the server through a browser or an application. The target video is generally a teaching video, and the learning and operation processes of software teaching are realized through split screen operation, so that the learning effect is further strengthened.
In all embodiments of the present application, the content in the target video includes image content describing manual operation skills, for example an operation image of connecting or using an instrument, or an operation image of maintaining a piece of equipment.
The preset instruction for performing the simulation operation on the content of the target video may be detected in, but is not limited to, the following two scenarios.
Scene 1: the terminal accesses the server through the application program to acquire the target video resource, and can further trigger a preset instruction for performing simulation operation on the content of the target video by selecting a video playing mode before playing the target video.
For example, the terminal selects a video resource to be played through an application program, sets a simulation operation mode, and triggers a preset instruction for performing simulation operation on the content of the target video based on the set simulation operation mode when the video resource is played.
Scene 2: when the terminal plays the target video, a preset instruction for performing simulation operation on the content of the target video can be triggered.
For example, in the process of playing the video resource, the user clicks a trigger button of the preset instruction to further trigger the preset instruction for performing the simulation operation on the content of the target video.
It can be understood that, detecting a preset instruction for performing a simulation operation on the content of the target video and executing the preset instruction may be implemented by an application program playing the target video and a functional component configured in the application program, and may also be implemented by a functional optimization component in an operating system of the terminal, which is not limited herein.
In this embodiment, the interface displaying the target video is the video playing window created when the target video is played; this window supports full-screen playback, moving-window playback, and partial-screen playback. Performing a split-screen operation on this interface means that, while the video playing window is running, a new window is created as the operation area and the video playing window is resized into the display area, so that both windows are displayed at the same time; that is, a display area for playing the target video and an operation area for performing the simulation operation are obtained.
To ensure that the new window can serve as the operation area for the simulation and that the display area can still play the target video, the split-screen operation may by default give the display area and the operation area the same size; that is, the two resulting regions have equal areas.
In practical applications, when the interface displaying the target video is subjected to split screen operation, a boundary line for adjusting the size of the area may be generated, and the sizes of the display area and the operation area may be adjusted by moving the boundary line.
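Geometry only, the default equal split and the movable boundary line can be sketched as follows; rectangles are `(x, y, w, h)` in pixels, and all windowing details are left out.

```python
def split_screen(width, height, divider=None):
    """Split the interface rectangle into a display area and an operation area.

    By default the vertical boundary line sits at the midpoint (equal areas);
    passing `divider` models moving the boundary line to resize both regions.
    """
    d = width // 2 if divider is None else divider
    display = (0, 0, d, height)             # left: keeps playing the target video
    operation = (d, 0, width - d, height)   # right: hosts the simulation page
    return display, operation
```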
It can be understood that, to prevent interface stutter during screen splitting from affecting the playback of the target video, the target video may first be paused and the split-screen operation on the interface displaying the target video performed afterward.
S12: acquiring image frame information of the target video in the display area, and loading a corresponding target simulation operation page in the operation area based on the image frame information.
In step S12, the image of the target video in the display area is the image when the preset instruction is triggered. The image frame information is attribute information of the image frame, for example, timestamp information of the image frame in the target video, or annotation information pre-configured in the target video for the image frame, and the like. The target simulation operation page comprises operation elements corresponding to operated objects in the image content of the target video.
For example, if the image content describing manual operation skills in the target video shows an instrument, the target simulation operation page includes that instrument together with operation elements for simulating its connection or use.
For another example, if that image content shows a piece of equipment, the target simulation operation page includes that equipment together with operation elements for simulating its maintenance.
In all embodiments of the application, the page file corresponding to the target simulation operation page loaded in the operation area may be configured in advance and stored on the server or on the local terminal. After the preset instruction triggers the split-screen operation on the interface displaying the target video, the page file corresponding to the target simulation operation page is loaded in the operation area.
As a possible implementation manner of this embodiment, step S12 may include: acquiring image frames of the target video, and extracting a region to be identified from the image frames according to a preset image extraction strategy; determining a plurality of feature points from the area to be identified; determining a target image file of the equipment to be displayed from a preset database based on the plurality of feature points; and loading the target image file in the operation area to obtain the target simulation operation page.
In this embodiment, the preset image extraction strategy describes the extraction manner, the extraction conditions, or the attributes of the region to be identified. After the image frame of the target video in the display area is obtained, a region image that satisfies the extraction conditions, or one that matches the attributes of the region to be identified, is extracted from the frame according to the strategy and used as the region to be identified.
It should be noted that the feature points are pixels in the region to be identified. To ensure that the correct target image file is loaded in the operation area, the feature points may be determined from the region according to a preset set of coordinate parameters; each feature point carries a coordinate parameter describing its position in the region, so that the feature points determined from the same region to be identified are the same every time.
Because the feature points determined from the same region to be identified never change, the target image file of the device to be displayed determined from the preset database based on those feature points never changes either, and the target simulation operation page obtained by loading that file in the operation area is likewise stable.
Further, as a possible implementation manner of this embodiment, the feature points carry sorting labels. The step of determining the target image file of the device to be displayed from the preset database based on the plurality of feature points includes:
acquiring a three-primary-color RGB value of each feature point; sorting the RGB values of each feature point according to the sorting labels to obtain an RGB value set; determining a target image file of the equipment to be displayed from a preset database according to the RGB value set; and the information in the preset database is used for describing the corresponding relation among the RGB value set, the information of the equipment to be displayed and the target image file.
In the present embodiment, the sorting label fixes the arrangement order of the feature points' RGB values. Because the feature points determined from the same region to be identified are the same every time, the RGB value of each such feature point is a fixed parameter. When determining the target image file of the device to be displayed, the uniqueness of the RGB value set therefore allows the correct target image file to be determined from the preset database.
In practical applications, during a certain period of time the target video may show operations on only one device or a fixed set of devices. All image frames in that period then have the same content, the region to be identified extracted from them is the same, and the feature points determined from that region are the same as well.
As a possible implementation manner of this embodiment, step S12 may include: acquiring an image frame time stamp of the target video; determining a target image file of the equipment to be displayed from a preset database according to the image frame timestamp; the information in the preset database is used for describing the corresponding relation among the image frame timestamp, the information of the equipment to be displayed and the target image file; and loading the target image file in the operation area to obtain the target simulation operation page.
In this embodiment, the image frame timestamp of the target video is the time information of an image frame within the target video, and each frame of the target video has a unique timestamp.
Further, as a possible implementation manner of this embodiment, before the step of acquiring image frame information of the target video in the display area and loading a corresponding target simulation operation page in the operation area based on the image frame information, the method further includes:
acquiring information of all devices to be displayed in a target video and a time period of each device to be displayed appearing in the target video; storing the information, the time period and the target image file of each device to be displayed in a preset database in an associated manner; and the target image file is the image file of the equipment to be displayed.
In this embodiment, the time period of each device to be displayed appearing in the target video includes all image frame timestamps corresponding to the devices to be displayed. And pre-configuring the corresponding relation among the time period of each device to be displayed in the target video, the information of the device to be displayed and the target image file into a preset database, so that after the image frame time stamp is determined, the target image file of the device to be displayed can be determined from the preset database according to the image frame time stamp.
The image frame timestamp of the target video is the time information of the image frame in the target video, and each frame of image in the target video has the unique timestamp, so that the target image file of the device to be displayed can be determined from the preset database according to the image frame timestamp, and the accuracy of the target image file loaded in the operation area is ensured.
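The timestamp-based lookup above can be sketched as follows. The record layout (device info, appearance period, target image file) and the example values are illustrative assumptions; any keyed store expressing the same correspondence would serve.

```python
# Hypothetical preset database: each record gives the time period (in seconds)
# in which a device to be displayed appears, plus its info and image file.
PRESET_DB = [
    {"device": "multimeter", "start": 0.0, "end": 12.5, "image": "multimeter.png"},
    {"device": "oscilloscope", "start": 12.5, "end": 40.0, "image": "oscilloscope.png"},
]

def find_target_image(timestamp, db=PRESET_DB):
    """Return the record whose time period contains the image-frame timestamp.
    Each frame has a unique timestamp, so at most one period matches."""
    for record in db:
        if record["start"] <= timestamp < record["end"]:
            return record
    return None
```

Pre-configuring the periods in this way means that once the frame timestamp is known, the target image file to load in the operation area follows directly.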
S13: and acquiring operation information through the target simulation operation page, and generating a simulation operation image according to the operation information.
In step S13, the target simulation operation page includes non-operation elements and operation elements. A non-operation element is image content that does not require user operation; an operation element is content that needs to be clicked, pressed, or dragged during interaction. The operation information is all the data describing the user's operations on the operation elements in the target simulation operation page.
In all embodiments of the application, a buried point (event hook) can be configured for each operation element in the target simulation operation page, and all data about the user's operations on that element is then collected through the buried point. The operation information may describe a single operation or the operations within a time period: for a single operation, the simulated operation image generated from it is one frame; for a time period, it is an image set composed of two or more consecutive frames.
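A minimal sketch of collecting operation information through buried points is given below. The element names and the event record format are illustrative assumptions, not part of the patent.

```python
class OperationPage:
    """Simulation page that records every operation on an instrumented element."""

    def __init__(self, operation_elements):
        self.operation_elements = set(operation_elements)
        self.operation_log = []  # the collected operation information

    def handle(self, element, action, timestamp):
        # Only operation elements carry a buried point; events on
        # non-operation elements (background image content) are ignored.
        if element in self.operation_elements:
            self.operation_log.append(
                {"element": element, "action": action, "time": timestamp}
            )

page = OperationPage(["power_button", "probe_plug"])
page.handle("power_button", "click", 1.0)
page.handle("background", "click", 1.2)   # not an operation element: dropped
page.handle("probe_plug", "drag", 2.4)
```

The log accumulated between two points in time corresponds to "operation information in one time period", from which a multi-frame simulated operation image can be generated.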
As can be seen from the above, according to the software teaching method provided by this embodiment, a display area and an operation area are obtained by performing a split screen operation on an interface displaying a target video, a corresponding target simulation operation page is loaded in the operation area, and then operation information is obtained through the target simulation operation page to generate a simulation operation image, so that in the process of playing the target video, the content of the target video can be simulated through the operation area, and an interaction scheme with a wider application range and a better interaction effect is provided for the video resource playing process.
Referring to fig. 2, fig. 2 is a flowchart illustrating an implementation of a software teaching method according to another embodiment of the present application. Compared with the embodiment corresponding to fig. 1, the software teaching method provided by this embodiment further includes steps S21 to S23 after step S13. The details are as follows:
S21: and comparing the simulation operation image with a reference image displayed in the display area.
S22: and if the simulated operation image is matched with the reference image, determining that the operation represented by the simulated operation image is correct operation.
S23: and if the simulated operation image is not matched with the reference image, determining that the operation represented by the simulated operation image is an error operation.
In this embodiment, steps S22 and S23 are parallel alternatives: after the simulated operation image has been compared with the reference image displayed in the display area, exactly one of them is executed. Step S23 is not executed after step S22, and step S22 is not executed after step S23.
It should be noted that the comparison between the simulation operation image and the reference image may be a comparison between feature regions or feature points in the two images.
Take as an example a reference image describing the connection of the input and output ports of an instrument device. The simulated operation image and the reference image contain the same instrument device elements, in the same positions and orientations; whether the simulated operation image matches the reference image is determined by identifying whether the connection of the input and output ports in the simulated operation image is the same as the connection of the input and output ports in the reference image.
In practical applications, simulating the operation in the target video may mean simulating the operation result or the operation action. That is, in addition to determining whether the simulated operation is correct by comparing operation results, it is also possible to determine whether it is correct by comparing operation processes.
In all embodiments of the present application, the simulated operation image may also consist of two or more consecutive frames, in which case the comparison with the reference image compares the two image sequences frame by frame.
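The comparison step can be sketched as follows. Here each frame is reduced to the connection state of its ports, one possible choice of "feature region"; the dictionary representation of connections is an assumption for illustration, and a multi-frame simulated operation matches only if every frame agrees with the reference.

```python
def frame_matches(sim_connections, ref_connections):
    """Compare the input/output port connections of one frame pair."""
    return sim_connections == ref_connections

def operation_matches(sim_frames, ref_frames):
    """A sequence of simulated frames matches only when every frame agrees
    with the corresponding reference frame (operation-process comparison)."""
    if len(sim_frames) != len(ref_frames):
        return False
    return all(frame_matches(s, r) for s, r in zip(sim_frames, ref_frames))

# Reference: first connect IN1->OUT2, then additionally IN2->OUT1.
ref = [{"IN1": "OUT2"}, {"IN1": "OUT2", "IN2": "OUT1"}]
good = [{"IN1": "OUT2"}, {"IN1": "OUT2", "IN2": "OUT1"}]
bad = [{"IN1": "OUT2"}, {"IN1": "OUT1", "IN2": "OUT2"}]  # ports swapped
```

Comparing the full sequence rather than only the last frame implements the "comparing operation processes" variant mentioned above.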
In this embodiment, whether the operation represented by the simulation operation image is correct or not is determined by comparing the simulation operation image with the reference image displayed in the display area, so that a user can obtain the result of the simulation operation after performing the simulation operation on the content of the target video, thereby providing the condition for performing the simulation operation on the content of the target video for the user, providing the judgment result of the simulation operation, and enhancing the interaction effect.
Referring to fig. 3, fig. 3 is a flowchart illustrating an implementation of a software teaching method according to yet another embodiment of the present application. Based on any of the above embodiments, the software teaching method provided by this embodiment further includes steps S31 to S34 after step S23. The details are as follows:
S31: and recording the number of times the operation represented by the simulated operation image is determined to be an error operation.
S32: and when the times reach a preset threshold value, determining a difference characteristic region between the simulated operation image and the reference image.
S33: and marking the difference characteristic region in the simulated operation image to obtain a guide image.
S34: and displaying the guide image.
In this embodiment, when the operation represented by the simulated operation image is an erroneous operation, the difference feature region is the region of the simulated operation image in which the content of the erroneous operation is located. Marking the difference feature region can be done by highlighting it with a color fill or by drawing a marking box around it in the simulated operation image.
When the simulation operation image is a single simulation operation image frame, the guide image is an image for marking the difference characteristic region on the basis of the simulation operation image frame; when the simulation operation image is two or more continuous simulation operation image frames, the guide image is an image for marking the difference characteristic region on the two or more continuous simulation operation image frames.
In order to improve the guiding effect of the guide image, a corresponding text label may be added to it; the text label may be the subtitle content corresponding to the target video or a brief description of the operation steps configured for the operation element.
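Producing a guide image as described in S32 and S33 can be sketched as follows. Images here are plain nested lists of pixel values, and the marker value standing in for a color fill or marking box is an illustrative assumption.

```python
def difference_region(sim, ref):
    """Bounding box (top, left, bottom, right) of the pixels where the
    simulated image differs from the reference, or None if they match."""
    rows = [r for r in range(len(sim)) if sim[r] != ref[r]]
    if not rows:
        return None
    cols = [c for r in rows for c in range(len(sim[r])) if sim[r][c] != ref[r][c]]
    return (min(rows), min(cols), max(rows), max(cols))

def make_guide_image(sim, ref, mark=9):
    """Copy the simulated image and fill the difference feature region with a
    marker value (standing in for a colour fill or a marking box)."""
    box = difference_region(sim, ref)
    guide = [row[:] for row in sim]  # leave the original image untouched
    if box:
        top, left, bottom, right = box
        for r in range(top, bottom + 1):
            for c in range(left, right + 1):
                guide[r][c] = mark
    return guide

ref_img = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
sim_img = [[0, 0, 0], [0, 2, 0], [0, 0, 0]]  # erroneous operation at (1, 1)
```

In practice the same idea would be applied per frame of a multi-frame simulated operation image, with a drawing library rendering the marking box.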
In this embodiment, the guiding image is obtained by determining the difference characteristic region between the simulated operation image and the reference image, and marking and displaying the difference characteristic region, so that the user can perform correct simulated operation on the video content again according to the content represented by the guiding image.
Referring to fig. 4, fig. 4 is a block diagram of a software teaching apparatus according to an embodiment of the present disclosure. The software teaching device in this embodiment includes units for executing the steps in the embodiments corresponding to fig. 1 to 3. Please specifically refer to fig. 1 to 3 and the related descriptions of the embodiments corresponding to fig. 1 to 3. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 4, the software teaching apparatus 400 includes: a split screen unit 41, a first loading unit 42, and a first image generating unit 43. Wherein:
the screen splitting unit 41 is configured to, if a preset instruction for performing a simulation operation on the content of the target video is detected, perform a screen splitting operation on an interface displaying the target video to obtain a display area for playing the target video and an operation area for performing the simulation operation.
The first loading unit 42 is configured to acquire image frame information of the target video in the display area, and load a corresponding target simulation operation page in the operation area based on the image frame information.
A first image generating unit 43, configured to obtain operation information through the target simulation operation page, and generate a simulation operation image according to the operation information.
As an embodiment of the present application, the first loading unit 42 includes: the device comprises an area extracting unit, a first determining unit, a second determining unit and a second loading unit. Specifically, the method comprises the following steps:
and the region extraction unit is used for acquiring the image frames of the target video and extracting the region to be identified from the image frames according to a preset image extraction strategy.
And the first determining unit is used for determining a plurality of characteristic points from the area to be identified.
And the second determining unit is used for determining a target image file of the equipment to be displayed from a preset database based on the plurality of characteristic points.
And the second loading unit is used for loading the target image file in the operation area to obtain the target simulation operation page.
As an embodiment of the present application, the feature points carry sorting labels; the first loading unit 42 includes: the device comprises a first acquisition unit, a sorting unit and a third determination unit. Specifically, the method comprises the following steps:
and the first acquisition unit is used for acquiring the three primary colors RGB value of each feature point.
And the sorting unit is used for sorting the RGB values of each feature point according to the sorting labels to obtain an RGB value set.
The third determining unit is used for determining a target image file of the equipment to be displayed from a preset database according to the RGB value set; and the information in the preset database is used for describing the corresponding relation among the RGB value set, the information of the equipment to be displayed and the target image file.
As an embodiment of the present application, the first loading unit 42 includes: the device comprises a second acquisition unit, a fourth determination unit and a third loading unit. Specifically, the method comprises the following steps:
and the second acquisition unit is used for acquiring the image frame time stamp of the target video.
The fourth determining unit is used for determining a target image file of the equipment to be displayed from a preset database according to the image frame time stamp; and the information in the preset database is used for describing the corresponding relation among the image frame timestamp, the information of the equipment to be displayed and the target image file.
And the third loading unit is used for loading the target image file in the operation area to obtain the target simulation operation page.
As an embodiment of the present application, the first loading unit 42 further includes: a third acquisition unit and a storage unit. Specifically, the method comprises the following steps:
And the third acquisition unit is used for acquiring the information of all the devices to be displayed in the target video and the time period of each device to be displayed appearing in the target video.
The storage unit is used for storing the information, the time period and the target image file of each device to be displayed in a preset database in an associated manner; and the target image file is the image file of the equipment to be displayed.
As an embodiment of the present application, the apparatus 400 further includes: the device comprises a comparison unit, a first execution unit and a second execution unit. Specifically, the method comprises the following steps:
and the comparison unit is used for comparing the simulation operation image with the reference image displayed in the display area.
And the first execution unit is used for determining that the operation represented by the simulated operation image is correct operation if the simulated operation image is matched with the reference image.
And the second execution unit is used for determining that the operation represented by the simulated operation image is an error operation if the simulated operation image is not matched with the reference image.
As an embodiment of the present application, the apparatus 400 further includes: the device comprises a recording unit, a fifth determining unit, a marking unit and a display unit. Specifically, the method comprises the following steps:
and the recording unit is used for recording the times of error operation of the operation represented by the simulated operation image.
And the fifth determining unit is used for determining a difference characteristic area between the simulated operation image and the reference image when the times reach a preset threshold value.
And the marking unit is used for marking the difference characteristic area in the simulation operation image to obtain a guide image.
A display unit for displaying the guide image.
It can be seen from the above that, according to the scheme provided by this embodiment, the display area and the operation area are obtained by performing the split-screen operation on the interface displaying the target video, and the corresponding target simulation operation page is loaded in the operation area, so that the operation information is obtained through the target simulation operation page, and the simulation operation image is generated, so that in the process of playing the target video, the content of the target video can be simulated through the operation area, and an interaction scheme with a wider application range and a better interaction effect is provided for the video resource playing process.
In addition, the difference characteristic area between the simulated operation image and the reference image is determined, marked to obtain the guide image, and the guide image is displayed, so that the user can perform correct simulated operation on the video content again according to the content represented by the guide image.
Fig. 5 is a block diagram of a terminal device according to another embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52, such as a program for a software teaching method, stored in said memory 51 and executable on said processor 50. The processor 50, when executing the computer program 52, implements the steps in the various embodiments of the software teaching methods described above, such as S11-S13 shown in fig. 1. Alternatively, when the processor 50 executes the computer program 52, the functions of the units in the embodiment corresponding to fig. 4, for example, the functions of the units 41 to 43 shown in fig. 4, are implemented, for which reference is specifically made to the relevant description in the embodiment corresponding to fig. 4, which is not repeated herein.
Illustratively, the computer program 52 may be divided into one or more units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 52 in the terminal device 5. For example, the computer program 52 may be divided into a split screen unit, a first loading unit, and a first image generating unit, the specific functions of which are as described above.
The terminal device may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of a terminal device 5 and does not constitute a limitation of terminal device 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A software teaching method, comprising:
in the process of playing the video resource, a user triggers a preset instruction for carrying out simulation operation on the content of a target video by clicking a trigger button of the preset instruction;
if a preset instruction for performing simulation operation on the content of a target video is detected, performing screen splitting operation on an interface displaying the target video to obtain a display area for playing the target video and an operation area for performing simulation operation;
acquiring image frame information of the target video in the display area, and loading a corresponding target simulation operation page in the operation area based on the image frame information; the image frame information comprises timestamp information of an image frame in a target video or pre-configured annotation information of the image frame in the target video; the content in the target video comprises images for describing manual operation skills, namely instrument equipment; the target simulation operation page comprises instrument equipment in a target video and also comprises operation elements for simulating connection or use of the instrument equipment;
and acquiring operation information through the target simulation operation page, and generating a simulation operation image according to the operation information.
2. The method of claim 1, wherein the obtaining image frame information of the target video in the display area and loading a corresponding target simulation operation page in the operation area based on the image frame information comprises:
acquiring image frames of the target video, and extracting a region to be identified from the image frames according to a preset image extraction strategy;
determining a plurality of feature points from the area to be identified;
determining a target image file of the equipment to be displayed from a preset database based on the plurality of feature points;
and loading the target image file in the operation area to obtain the target simulation operation page.
3. The method of claim 2, wherein the feature points carry an ordering tag;
the determining a target image file of the device to be displayed from a preset database based on the plurality of feature points comprises:
acquiring a three-primary-color RGB value of each feature point;
sorting the RGB values of each feature point according to the sorting labels to obtain an RGB value set;
determining a target image file of the equipment to be displayed from a preset database according to the RGB value set; and the information in the preset database is used for describing the corresponding relation among the RGB value set, the information of the equipment to be displayed and the target image file.
4. The method according to claim 1, wherein the obtaining image frame information of the target video in the display area and loading a corresponding target simulation operation page in the operation area based on the image frame information comprises:
acquiring an image frame time stamp of the target video;
determining a target image file of the equipment to be displayed from a preset database according to the image frame timestamp; the information in the preset database is used for describing the corresponding relation among the image frame timestamp, the information of the equipment to be displayed and the target image file;
and loading the target image file in the operation area to obtain the target simulation operation page.
5. The method of claim 4, wherein the step of obtaining the image frame time stamp of the target video is preceded by the step of:
acquiring information of all devices to be displayed in a target video and a time period of each device to be displayed appearing in the target video;
storing the information, the time period and the target image file of each device to be displayed in a preset database in an associated manner; and the target image file is the image file of the equipment to be displayed.
6. The method according to claim 1, wherein after the step of obtaining the operation information through the target simulation operation page and generating the simulation operation image according to the operation information, the method further comprises:
comparing the simulated operation image with a reference image displayed in the display area;
if the simulated operation image is matched with the reference image, determining that the operation represented by the simulated operation image is correct operation;
and if the simulated operation image is not matched with the reference image, determining that the operation represented by the simulated operation image is an error operation.
7. The method of claim 6, wherein the step of determining that the operation characterized by the simulated operation image is an erroneous operation if the simulated operation image does not match the reference image further comprises:
recording the times of error operation of the operation represented by the simulated operation image;
when the times reach a preset threshold value, determining a difference characteristic area between the simulated operation image and the reference image;
marking the difference characteristic region in the simulated operation image to obtain a guide image;
and displaying the guide image.
8. A software teaching device, comprising:
the screen splitting unit is used for triggering a preset instruction for simulating the content of the target video by clicking a trigger button of the preset instruction by a user in the process of playing the video resource; if a preset instruction for performing simulation operation on the content of a target video is detected, performing screen splitting operation on an interface displaying the target video to obtain a display area for playing the target video and an operation area for performing simulation operation;
the first loading unit is used for acquiring image frame information of the target video in the display area and loading a corresponding target simulation operation page in the operation area based on the image frame information; the image frame information comprises timestamp information of an image frame in a target video or pre-configured annotation information of the image frame in the target video; the content in the target video comprises images for describing manual operation skills, namely instrument equipment; the target simulation operation page comprises instrument equipment in a target video and also comprises operation elements for simulating connection or use of the instrument equipment;
and the first image generation unit is used for acquiring operation information through the target simulation operation page and generating a simulation operation image according to the operation information.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor and a computer program stored in the memory and executable on the terminal device, the processor implementing the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201910560226.XA 2019-06-26 2019-06-26 Software teaching method, device, terminal equipment and medium Active CN110248235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910560226.XA CN110248235B (en) 2019-06-26 2019-06-26 Software teaching method, device, terminal equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910560226.XA CN110248235B (en) 2019-06-26 2019-06-26 Software teaching method, device, terminal equipment and medium

Publications (2)

Publication Number Publication Date
CN110248235A CN110248235A (en) 2019-09-17
CN110248235B true CN110248235B (en) 2022-06-17

Family

ID=67889586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910560226.XA Active CN110248235B (en) 2019-06-26 2019-06-26 Software teaching method, device, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN110248235B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047929A (en) * 2019-10-22 2020-04-21 李登峻 Internet teaching method and platform based on big data
CN113518261B (en) * 2020-12-25 2023-09-22 腾讯科技(深圳)有限公司 Guiding video playing method, guiding video playing device, computer equipment and storage medium
CN113079405B (en) * 2021-03-26 2023-02-17 北京字跳网络技术有限公司 Multimedia resource editing method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929669A (en) * 2014-04-30 2014-07-16 成都理想境界科技有限公司 Interactive video generator, player, generating method and playing method
CN106060343A (en) * 2016-06-21 2016-10-26 广州伟度计算机科技有限公司 Micro class assistant implementation system and method used for assisting teaching process
CN106301865A (en) * 2015-06-11 2017-01-04 阿里巴巴集团控股有限公司 It is applied to data processing method and the equipment of service providing device
CN108307222A (en) * 2018-01-25 2018-07-20 青岛海信电器股份有限公司 Smart television and the method that upper content is applied based on access homepage in display equipment
CN108353089A (en) * 2015-08-21 2018-07-31 三星电子株式会社 Device and method for the interaction area monitoring that user can configure
CN108924651A (en) * 2018-06-28 2018-11-30 中国地质大学(武汉) Instructional video intelligent playing system based on training operation identification
CN109147434A (en) * 2018-08-30 2019-01-04 北京葡萄智学科技有限公司 Teaching method and device
CN109697906A (en) * 2017-10-20 2019-04-30 深圳市鹰硕技术有限公司 It is a kind of that teaching method and system are followed based on internet teaching platform

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402872A (en) * 2010-09-09 2012-04-04 上海优盟信息技术有限公司 Remote education system and remote education method
CN105491414B (en) * 2015-11-19 2017-05-17 深圳市鹰硕技术有限公司 Method and device for synchronized display of images
KR101892622B1 (en) * 2016-02-24 2018-10-04 주식회사 네비웍스 Realistic education media providing apparatus and realistic education media providing method
CN105844989A (en) * 2016-06-02 2016-08-10 新乡学院 English teaching language learning system
CN107393362A (en) * 2017-09-21 2017-11-24 淄博职业学院 Computer application teaching and training system
CN108961848A (en) * 2018-07-06 2018-12-07 深圳点猫科技有限公司 Method and electronic device for generating DOM elements for intelligent tutoring
CN109308181A (en) * 2018-08-23 2019-02-05 深圳点猫科技有限公司 Split-screen operation method and system for mobile-terminal programming, designed for easy operation by children
CN109035890B (en) * 2018-08-29 2021-04-09 创而新(北京)教育科技有限公司 Remote demonstration teaching method over a mobile network
CN109686181A (en) * 2019-01-15 2019-04-26 龚义萍 Programming teaching method, apparatus and system

Also Published As

Publication number Publication date
CN110248235A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
EP3335131B1 (en) Systems and methods for automatic content verification
CN109803180B (en) Video preview generation method and device, computer equipment and storage medium
JP6681342B2 (en) Behavioral event measurement system and related method
CN110248235B (en) Software teaching method, device, terminal equipment and medium
JP6214547B2 (en) Measuring the rendering time of a web page
US20170255830A1 (en) Method, apparatus, and system for identifying objects in video images and displaying information of same
US10083050B2 (en) User interface usage simulation generation and presentation
CN111124888B (en) Method and device for generating recording script and electronic device
CN108920380A (en) Test method, device, server, equipment and the storage medium of the software compatibility
CN109471805B (en) Resource testing method and device, storage medium and electronic equipment
US10068352B2 (en) Determining visibility of rendered content
CN110309049A (en) Web page contents monitor method, device, computer equipment and storage medium
CN112559341A (en) Picture testing method, device, equipment and storage medium
CN110599520B (en) Open field experiment data analysis method, system and terminal equipment
CN110688602A (en) Method, device and system for testing webpage loading speed
CN111782514A (en) Test data comparison method and device
US10631050B2 (en) Determining and correlating visual context on a user device with user behavior using digital content on the user device
CN110245068A (en) Automated testing method, device and the computer equipment of the H5 page
CN112835807B (en) Interface identification method and device, electronic equipment and storage medium
JP7029557B1 (en) Judgment device, judgment method and judgment program
CN115048302A (en) Front-end compatibility testing method and device, storage medium and electronic equipment
CN110673910B (en) Control method and control device for controlling popup window display in app system
CN110955369B (en) Focus judgment method, device and equipment based on click position and storage medium
CN112559340A (en) Picture testing method, device, equipment and storage medium
US10514779B2 (en) System and method for measuring association between screen resolution and mouse movement speed, recording medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 4th Floor, Building No. 1, Sign Technology Plant, Yuanwu Avenue, Bantian Street, Longgang District, Shenzhen, Guangdong 518000

Applicant after: SHENZHEN GOLO CHELIAN DATA TECHNOLOGY Co.,Ltd.

Address before: 4th Floor, Building No. 1, Sign Technology Plant, Yuanwu Avenue, Bantian Street, Longgang District, Shenzhen, Guangdong 518000

Applicant before: GOLO IOV DATA TECHNOLOGY Co.,Ltd.

GR01 Patent grant