CN110337022B - Attention-based video variable-speed playing method and storage medium - Google Patents

Attention-based video variable-speed playing method and storage medium

Info

Publication number
CN110337022B
CN110337022B (application CN201910500634.6A; also published as CN110337022A)
Authority
CN
China
Prior art keywords
sample
time point
attentive
speed
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910500634.6A
Other languages
Chinese (zh)
Other versions
CN110337022A (en)
Inventor
刘德建
陈丛亮
郭玉湖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Tianquan Educational Technology Ltd
Original Assignee
Fujian Tianquan Educational Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Tianquan Educational Technology Ltd filed Critical Fujian Tianquan Educational Technology Ltd
Priority to CN201910500634.6A priority Critical patent/CN110337022B/en
Publication of CN110337022A publication Critical patent/CN110337022A/en
Application granted granted Critical
Publication of CN110337022B publication Critical patent/CN110337022B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides an attention-based video variable-speed playing method and a storage medium, comprising the following steps: judging, for each of several sample users watching the same video content, whether the center position of that user's iris lies within a calibrated screen interval range, and acquiring the sample attentive viewing time points of the video content corresponding to each sample user; counting the maximum playing speed of each sample attentive viewing time point in the video content according to the user operation conditions and the playing speeds at those time points; acquiring the attentive viewing time periods of the video content corresponding to the current user; and controlling the playing speed at the time points within those attentive viewing time periods that correspond to sample attentive viewing time points according to the maximum playing speed. The invention not only speeds up review, but also preserves the logical continuity and integrity of the video content during review, which aids understanding.

Description

Attention-based video variable-speed playing method and storage medium
Technical Field
The invention relates to the technical field of video playing, and in particular to an attention-based video variable-speed playing method and a storage medium.
Background
At present, when watching videos through smart glasses, users frequently miss some video content, especially important learning content in video learning scenarios, if playback is interrupted by other matters or the user becomes momentarily distracted. The prior art does not support quickly locating the parts that the user did not watch attentively and offering them for re-viewing, and therefore cannot improve review efficiency. A good solution to this problem is needed in order to improve the user experience.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide an attention-based video variable-speed playing method and a storage medium that accelerate the playing of video content the user has already watched attentively, so that the user can concentrate on the video content that was not watched attentively.
In order to solve the technical problems, the invention adopts the technical scheme that:
the video variable-speed playing method based on the attention degree comprises the following steps:
according to a preset judgment time interval, respectively judging whether the central position of the iris is positioned within a screen interval range while each of more than two sample users watches the same video content, and acquiring the sample attentive viewing time points of the video content corresponding to each sample user;
counting the maximum playing speed of each sample attentive viewing time point in the video content according to the user operation conditions and the playing speed at each sample attentive viewing time point;
acquiring the attentive viewing time points of the video content corresponding to the current user;
acquiring the attentive viewing time periods corresponding to the video content according to the attentive viewing time points;
and controlling the playing speed at the time points within the attentive viewing time periods that correspond to sample attentive viewing time points according to the maximum playing speed.
The invention provides another technical scheme as follows:
a computer readable storage medium, having stored thereon a computer program, which when executed by a processor of smart glasses, is capable of implementing the steps included in the above-mentioned attention-based video variable-speed playing method.
The invention has the following beneficial effects: the viewing behaviour of a certain number of sample users at each sample attentive viewing time point of the same video is counted to obtain the maximum playing speed of each sample attentive viewing time point of that video; when any user watches the video, that user's attentive viewing time periods are obtained, and the playing speed within those periods is then controlled according to the maximum playing speed of the sample attentive viewing time points they contain. The invention thus provides a novel video review mode in which the already-viewed parts are played at the highest speed suitable for most people; review efficiency is improved, the logical continuity and integrity of the video content during review are preserved, and comprehension is aided.
Drawings
Fig. 1 is a schematic flow chart of a video variable-speed playing method based on attention according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of each display corner and center point preset in the calibration process according to the first embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a position relationship between the center position of the iris and the screen when the sample user views the display corner A in FIG. 2;
FIG. 4 is a schematic diagram illustrating the relationship between the center position of the iris and the screen when the sample user views the corner B shown in FIG. 2;
FIG. 5 is a schematic diagram illustrating a position relationship between the center position of the iris and the screen when the sample user views the corner C shown in FIG. 2;
FIG. 6 is a schematic diagram illustrating a position relationship between the center position of the iris and the screen when the sample user views the corner D shown in FIG. 2;
FIG. 7 is a schematic diagram of the relationship between the center position of the iris and the position of the screen when the center point E in FIG. 2 is viewed by the sample user;
fig. 8 is a schematic diagram illustrating the position relationship between the screen and the attentively-viewed screen interval range of a sample user, obtained from figs. 3 to 7.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
The key concept of the invention is as follows: the maximum playing speed of each sample attentive viewing time point of a video is obtained by statistics, and the playing speed within any user's attentive viewing time periods during review is controlled accordingly.
Referring to fig. 1, the present invention provides a video variable-speed playing method based on attention, including:
according to a preset judgment time interval, respectively judging whether the central position of the iris is positioned within a screen interval range while each of more than two sample users watches the same video content, and acquiring the sample attentive viewing time points of the video content corresponding to each sample user;
counting the maximum playing speed of each sample attentive viewing time point in the video content according to the user operation conditions and the playing speed at each sample attentive viewing time point;
acquiring the attentive viewing time points of the video content corresponding to the current user;
acquiring the attentive viewing time periods corresponding to the video content according to the attentive viewing time points;
and controlling the playing speed at the time points within the attentive viewing time periods that correspond to sample attentive viewing time points according to the maximum playing speed.
From the above description, the beneficial effects of the present invention are: the playing speed within any user's attentive viewing time periods is controlled according to the representative maximum playing speed of each sample attentive viewing time point of the video, so that content the user has already watched attentively, and only needs to recall, is played at the fastest speed at which it can still be followed, while the user can pay more attention to the content that was not watched attentively. Playing the complete video at variable speed keeps the logic continuous for the user and improves review efficiency.
Further, still include:
acquiring the inattentive viewing time points of the video content corresponding to the current user and the inattentive viewing time periods corresponding to the inattentive viewing time points;
playing the inattentive viewing time periods at normal (x1) speed.
From the above description, it can be seen that playing the complete video content at variable speed lets the user concentrate on the content that was not watched attentively before while still following the surrounding content, which helps the video content be understood better.
Further, according to a preset judgment time interval, whether the iris center position is located within a range of a screen interval when more than two sample users watch the same video content is respectively judged, and a sample attentive watching time point of each sample user corresponding to the video content is obtained, specifically: presetting display corner points and a central point of a screen;
the method comprises the steps that the central positions of irises corresponding to more than two sample users when the users watch each display corner point and central point one by one are respectively obtained through a camera of intelligent glasses;
acquiring the attentively-viewed screen interval range corresponding to each sample user according to the iris center positions corresponding to that sample user;
judging, according to the preset judgment time interval, whether the iris center position of each sample user is located within that user's screen interval range while watching the same video content, and if so, recording the corresponding video playing time point as a sample attentive viewing time point of that sample user;
and acquiring all sample attentive viewing time points of the video content corresponding to each sample user.
According to the above description, the attentively-viewed screen interval range is determined from the position relationship between the iris center and the display corner points and central point while the user looks at them; the iris center is then tracked against this screen interval range to determine the moments of attentive viewing, which gives higher accuracy.
Further, the counting of the maximum playing speed of each sample attentive viewing time point in the video content according to the user operation conditions and the playing speed at each sample attentive viewing time point is specifically: acquiring all sample users corresponding to a sample attentive viewing time point;
counting, according to the rewind count, the pause count and the playing speed, the effective speed of each of the sample users corresponding to that sample attentive viewing time point;
and calculating the average of the effective speeds and taking the average as the maximum playing speed of that sample attentive viewing time point.
According to the above description, the statistics are based on the most common and most representative playback behaviour, so the statistical result is both valid and representative; more importantly, the maximum playing speed obtained by statistics reflects how most people play while viewing attentively, so it lets the user accelerate playback while still understanding the content, improving review efficiency without harming the review effect.
Further, the obtaining of the attentive watching time point of the video content corresponding to the current user specifically includes:
respectively acquiring iris center positions corresponding to the current user when the user watches each display corner point and the center point through a camera of the intelligent glasses;
acquiring the attentively-viewed screen interval range of the current user according to the iris center positions;
judging whether the current iris center position of the current user is located in the range of the screen interval of the current user according to the preset judgment time interval; if so, recording the corresponding video playing time point as the attentive watching time point corresponding to the current user.
As can be seen from the above description, the attentive viewing time points of the current user are obtained based on the relationship between the iris center position and the screen interval range, which gives high accuracy.
Further, the obtaining of the attentive viewing time period corresponding to the video content according to the attentive viewing time point specifically includes:
and combining the continuous attentive watching time points to obtain at least one attentive watching time period corresponding to the video content.
As can be seen from the above description, the respective attentive viewing periods are obtained by integrating the continuous attentive viewing time points, which facilitates the subsequent analysis.
Further, the controlling the playing speed of the time point corresponding to the sample attentive viewing time point in the attentive viewing time period according to the maximum playing speed specifically includes:
acquiring a sample attentive viewing time point contained in an attentive viewing time period;
and when the attentive watching time period is played back, controlling the playing speed of the corresponding time point according to the maximum playing speed corresponding to each contained sample attentive watching time point.
As can be seen from the above description, a scheme for specifically controlling the playing speed of the time point is provided, and the feasibility of the scheme is improved.
Further, the respectively acquiring, through the camera of the smart glasses, of the iris center positions corresponding to each of the more than two sample users when looking at each display corner point and the central point one by one is specifically:
setting the screen to one pure color, and setting the display corner points and the central point, one by one, to another pure color;
and, while each display corner point or the central point is shown in the other pure color, respectively acquiring an eye image of the sample user through an infrared camera and calculating the corresponding iris center position.
From the above description, the calibration points are highlighted by the color contrast, which keeps the user focused on them during calibration and improves the accuracy of the calibrated attentively-viewed screen interval range.
Further, the display corner points correspond to four corners of a maximum inscribed rectangle of the screen.
From the above description, using the four corners of the maximum inscribed rectangle of the screen as calibration points applies to glasses screens of any shape and ensures that the calibrated screen interval range is obtained over the largest possible display area, improving the validity and accuracy of the attentively-viewed screen interval range.
The invention provides another technical scheme as follows:
a computer readable storage medium, having stored thereon a computer program, which when executed by a processor of smart glasses, is capable of implementing the steps included in the above-mentioned attention-based video variable-speed playing method.
Those skilled in the art can understand that all or part of the processes in the above technical solutions may be implemented by a computer program instructing the related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above methods, thereby also obtaining the beneficial effects achievable by the attention-based video variable-speed playing method.
The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Example one
Referring to figs. 1 to 8, the present embodiment provides a video playing method that counts the maximum playing speed corresponding to each sample attentive viewing time point in a video as well as the attentive viewing time points of any user, and accordingly provides a variable-speed playing mode when that user reviews the video: the attentively-viewed parts are played quickly while their content can still be followed, and the user concentrates on the parts that were not viewed attentively. This greatly improves review efficiency and keeps the reviewed content logically continuous, which aids understanding.
The method of the embodiment may include the steps of:
s1: according to a preset judgment time interval, whether the central position of the iris is located in the range of a screen interval when more than two sample users watch the same video content is judged respectively, and sample attentive watching time points of the video content corresponding to each sample user are obtained.
Specifically, this step includes the following substeps:
s11: presetting display corner points and a central point of a screen;
specifically, for a quadrilateral screen, four corners of the screen can be directly set as display corner points; aiming at polygonal, circular, elliptical or other special-shaped screens, the corresponding display corner points can be customized according to the screen frame; the preset standard of the display corners is that the area range formed by connecting all the display corners can be as close to the maximum display area range of the screen as possible. The area range surrounded by all the display corners is the calibration area, and the calibration result is more accurate as the calibration area is closer to the maximum display area of the screen.
In the following, a general elliptical screen is taken as an example, and a maximum inscribed rectangle is directly preset, that is, four corners of a quadrangle with a maximum area in the screen range are unfolded as display corners for detailed description.
As shown in fig. 2, four display corners A, B, C and D preset by this step, and a center point E.
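As a hedged illustration of this preset (the geometry below is standard and not taken from the patent text): for an elliptical screen with horizontal semi-axis a and vertical semi-axis b, the maximum-area axis-aligned inscribed rectangle has its corners at (±a/√2, ±b/√2). A minimal Python sketch, with the names A to E chosen only to mirror fig. 2:

```python
import math

def calibration_points(a: float, b: float):
    """Corner points of the maximum-area axis-aligned rectangle inscribed in an
    ellipse with semi-axes a (horizontal) and b (vertical), plus the center point.
    For such an ellipse the optimal corners lie at (+/- a/sqrt(2), +/- b/sqrt(2)),
    giving a rectangle of area 2*a*b."""
    x, y = a / math.sqrt(2), b / math.sqrt(2)
    A = (-x,  y)    # top-left display corner point
    B = ( x,  y)    # top-right display corner point
    C = ( x, -y)    # bottom-right display corner point
    D = (-x, -y)    # bottom-left display corner point
    E = (0.0, 0.0)  # center point
    return A, B, C, D, E
```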
S12: the iris center positions corresponding to more than two sample users when watching the display corner points and the central point one by one are respectively obtained through the camera of the intelligent glasses.
That is, the iris center positions A1, B1, C1, D1 and E1 corresponding to a sample user when he or she looks at the preset display corner points and center point A, B, C, D and E one by one are respectively obtained through the camera on the smart glasses worn by that sample user.
In a specific example, the description will be given by taking the iris center position corresponding to one of the sample users as an example, and the obtaining manner of the other sample users is the same, and will not be repeated here.
The specific example includes the following steps:
S121: acquiring an eye image of the sample user through an infrared camera on the smart glasses worn by the sample user (ensuring that the eye image is not affected by lighting or occlusion), and acquiring and recording the lower eyelid position;
S122: setting the screen to a pure color, such as full white;
S123: setting each display corner point and the center point, one by one, to another pure color, such as black;
The color contrast draws the sample user's gaze to the calibration points (the display corner points and the center point). Because the smart-glasses screen is otherwise transparent, the solid white background makes it opaque and free of distracting clutter, and rendering the calibration point in a second pure color makes it stand out.
S124: and when each display corner and center point are set to another pure color one by one, respectively acquiring the eye images of the user through the infrared camera, and calculating the corresponding iris center position.
Specifically, when display corner point A is set to black, the eye image of the sample user is acquired through the camera, the iris boundary is extracted, and the corresponding iris center position A1 is calculated; the iris center positions corresponding to the other calibration points are obtained in the same way (a minimal sketch of this computation is given after step S125 below).
Figs. 3 to 7 show, for the sample user looking at display corner points A, B, C, D and center point E respectively, the position of the iris center (indicated by the arrow in each figure) within the exposed eye region represented by the ellipse.
S125: according to the above steps S121 to S124, the center positions of the irises corresponding to all the sample users are obtained.
Here, assuming there are N sample users, where N is a natural number greater than 2, step S125 obtains in turn the iris center positions (A2, B2, C2, D2, E2) of the second sample user, and so on, yielding groups 1 to N of iris center positions (An, Bn, Cn, Dn, En).
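The patent does not specify how the iris center is computed from the infrared eye image; one common estimator is to threshold the dark iris/pupil region and take the centroid of the largest dark blob. The OpenCV-based sketch below illustrates step S124 under that assumption (the function name iris_center and the use of OpenCV are choices made here, not requirements of the patent); calling it once per calibration point while that point is highlighted yields A1, B1, C1, D1 and E1 for one sample user.

```python
import cv2
import numpy as np

def iris_center(eye_image_gray: np.ndarray) -> tuple:
    """Estimate the iris center in a grayscale infrared eye image by thresholding
    the dark iris/pupil region and taking the centroid of the largest dark blob.
    This is only one possible estimator; the patent states only that the iris
    boundary is extracted and the center position calculated."""
    blurred = cv2.GaussianBlur(eye_image_gray, (7, 7), 0)
    # Otsu threshold; the iris/pupil is darker than the sclera under IR lighting,
    # so the binary image is inverted to make the dark region the foreground.
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no dark region found in the eye image")
    iris = max(contours, key=cv2.contourArea)          # largest dark blob ~ iris
    m = cv2.moments(iris)
    if m["m00"] == 0:
        raise ValueError("degenerate iris contour")
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid
```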
S13: acquiring a screen interval range which is watched with special attention and corresponds to each sample user according to the iris center position corresponding to each sample user;
specifically, for each sample user, the range of the screen interval (An, Bn, Cn, Dn) viewed with attentiveness of the rectangle shown in fig. 8 and the center position En are obtained according to the center position (An, Bn, Cn, Dn, En) of the iris.
Based on the result obtained in the calibration step, the iris center position of each sample user in the video watching process can be monitored to obtain each corresponding time point which is not watched with attentiveness. The method comprises the following specific steps:
s14: and judging whether the central position of the iris of each sample user is positioned in the range of the screen interval in the process of watching the same video content according to a preset judgment time interval, and if so, recording the corresponding video playing time point as the sample concentration watching time point of the corresponding sample user.
The time interval supports customization, and the range is within one second, because generally, within 1 second, the time interval is greater than one frame time of the video (10 frames per second is 0.1 second), so that the user attention of each frame of video picture can be monitored, and any frame of video picture cannot be missed. For example, it is preset that 10 seconds are sampled once, the first 9 seconds are no attentive to watch, and the 10 th second is watched, and the attentive watching is also determined. In practical application, the method can be used for presuming and setting according to the sampled data and the equipment performance, and the higher the sampling frequency is, the higher the accuracy is; preferably 0.3-0.8 s. For the intelligent glasses with weak equipment performance, the sampling is more accurate once if the sampling time is 0.1S, but the sampling times are more, the resources such as a CPU (central processing unit) and a memory of the equipment are occupied, and the video playing is influenced, so that the sampling time can be set to be 0.5S, namely a better balance point.
Namely, when the user watches the video, whether the iris center position of the user is located in the range of the calibrated A1B1C1D1 interval is calculated in real time, and the monitoring result is obtained.
In particular, assuming that the preset time interval is 0.5 s, this step can be implemented by the following sub-steps:
S141: acquiring the iris center position W of the user through the camera every 0.5 s;
S142: judging whether the current iris center position W lies within the user's attentively-viewed screen interval range A1B1C1D1;
if not, recording the video playing time point at the acquisition moment of S141 as an inattentive viewing time point; if so, recording it as an attentive viewing time point. For example, if the check fails when the video has played to 00:00:17, that time point is recorded as one at which the user was not viewing attentively.
Through this method, the time points on the video playing time axis corresponding to all the moments at which each sample user viewed attentively can be obtained. These attentive viewing time points can then serve as the basis for analysing how attentively the sample users watched the video content.
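A minimal sketch of the monitoring loop of steps S141-S142, assuming the calibrated range A1B1C1D1 is treated as an axis-aligned rectangle and that `player` and `camera` expose the hypothetical helpers is_playing(), current_time() and iris_center() (none of these interfaces are named in the patent):

```python
import time

def inside_interval(point, corner1, corner2) -> bool:
    """True if an iris-center point lies inside the calibrated screen interval
    range, treated here as an axis-aligned rectangle given by two opposite
    corners (for example A1 and C1)."""
    x, y = point
    (x0, y0), (x1, y1) = corner1, corner2
    return min(x0, x1) <= x <= max(x0, x1) and min(y0, y1) <= y <= max(y0, y1)

def monitor_attention(player, camera, corner1, corner2, sample_period=0.5):
    """Sampling loop of steps S141-S142: every sample_period seconds, read the
    current iris center position W and record the current video playing time
    point as attentive or inattentive."""
    attentive, inattentive = [], []
    while player.is_playing():
        w = camera.iris_center()       # iris center position W
        t = player.current_time()      # current video playing time point
        (attentive if inside_interval(w, corner1, corner2) else inattentive).append(t)
        time.sleep(sample_period)      # preset judgment time interval
    return attentive, inattentive
```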
It should be noted that, the sample user as described above may also be a user who has historically viewed the same video content, without specially selecting a group of people to calibrate, thereby significantly improving the implementation efficiency of the scheme.
S2: and counting the maximum playing speed of each sample attentive viewing time point in the video content according to the user operation condition and the playing speed of each sample attentive viewing time point.
Specifically, the step may include:
S21: acquiring all sample users corresponding to a sample attentive viewing time point;
because sample attentive viewing time points are collected simultaneously from a plurality of sample users for the same video, the same time point necessarily recurs, i.e. several sample users are all in an attentive viewing state at that time point. This step therefore collects, for each sample attentive viewing time point, all the sample users who were viewing attentively at it.
S22: counting, according to the rewind count, the pause count and the playing speed, the effective speed of each of the sample users corresponding to the sample attentive viewing time point.
Here, the rewind count is the number of times this time range was replayed.
This step computes, for the sample attentive viewing time point, the effective speed of each corresponding individual sample user.
S23: and calculating the average value of the effective speeds, and taking the average value as the maximum playing speed of the sample attentive viewing time point.
This step statistically estimates the maximum playing speed for the corresponding time point of the video from the playback behaviour of all users who attentively viewed it. The purpose of presetting a maximum playing speed for each sample attentive viewing time point of the video is to play already-watched content at the highest speed at which the user can still understand it, so that content continuity is kept and review efficiency is improved.
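A minimal sketch of the statistics of steps S21 to S23, assuming each sample user's record for a sample attentive viewing time point is a (rewind count, pause count, playing speed) tuple; the rule "effective speed is x1 whenever the user paused or rewound, otherwise the actual playing speed" follows the description of the second embodiment below:

```python
from statistics import mean

def effective_speed(rewind_count: int, pause_count: int, playing_speed: float) -> float:
    """Effective speed of one sample user at a sample attentive viewing time point:
    x1 if the user paused or rewound there, otherwise the actual playing speed."""
    return 1.0 if (rewind_count > 0 or pause_count > 0) else playing_speed

def max_playing_speed(records) -> float:
    """Conservative maximum playing speed of a sample attentive viewing time point:
    the average of the effective speeds of all sample users who attentively viewed
    it; records is an iterable of (rewind_count, pause_count, playing_speed) tuples."""
    return mean(effective_speed(r, p, s) for r, p, s in records)

# With the four records of Table 1 in the second embodiment:
# max_playing_speed([(1, 0, 1.0), (1, 0, 2.0), (0, 0, 2.0), (0, 0, 1.5)]) == 1.375
```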
S3: and acquiring the attentive watching time point of the video content corresponding to the current user.
In this step, the attentive viewing time point corresponding to the same video content of the current user is obtained according to the manner of obtaining the sample attentive viewing time point corresponding to the sample user in the above S1, and the specific manner is not repeated here.
S4: and acquiring each attentive watching time period corresponding to the video content according to the attentive watching time point.
Specifically, at least one attentive viewing time period of the user for the video content may be obtained by merging consecutive attentive viewing time points.
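A minimal sketch of this merging step, assuming attentive viewing time points are recorded as seconds on the video time axis at the preset judgment interval (0.5 s in this embodiment):

```python
def merge_viewing_periods(time_points, sample_period=0.5):
    """Merge consecutive attentive viewing time points (seconds on the video time
    axis, sampled every sample_period seconds) into attentive viewing time
    periods, returned as (start, end) pairs."""
    periods = []
    for t in sorted(time_points):
        # extend the current period if this point directly follows it
        if periods and t - periods[-1][1] <= sample_period + 1e-9:
            periods[-1][1] = t
        else:
            periods.append([t, t])
    return [(start, end) for start, end in periods]
```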
S5: and controlling the playing speed of a time point corresponding to the sample attentive viewing time point in the attentive viewing time period according to the maximum playing speed.
That is, while each attentive viewing time period is being replayed, whenever a time point corresponding to a sample attentive viewing time point is reached, playback proceeds at the maximum playing speed (obtained in S2) corresponding to that sample attentive viewing time point.
It can be understood that the maximum playing speed of each sample attentive viewing time point of the video is counted from the viewing behaviour of the sample users; during review, the corresponding time points are also ones the current user watched attentively, so they can be played at that maximum playing speed while the current user still broadly follows the content. In this way the 'invalid content' that is already understood can be skimmed during review while the user concentrates on the 'valid content', which improves review efficiency and keeps the logic fluent.
S6: acquiring an inattentive watching time point of the video content corresponding to the current user and an inattentive watching time period corresponding to the inattentive watching time point;
s7: playing the inattentive viewing time period at a normal multiple.
By playing the 'valid content' at normal speed, the user can concentrate on it, which ensures the effectiveness of the review.
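Putting steps S5 to S7 together, the following is a hedged sketch of how a player could choose the playing speed at each moment of the review; sample_max_speed is an assumed mapping from sampled video time points to the maximum playing speeds obtained in S2, and attentive_periods are the current user's merged attentive viewing time periods:

```python
def review_speed(t, attentive_periods, sample_max_speed, sample_period=0.5):
    """Playing speed to use at video time t during review (steps S5 to S7):
    within one of the current user's attentive viewing time periods, use the
    maximum playing speed obtained for the nearest sampled time point, if any;
    otherwise (inattentive viewing, or no statistics) play at normal x1 speed."""
    if any(start <= t <= end for start, end in attentive_periods):
        nearest = round(t / sample_period) * sample_period
        return sample_max_speed.get(nearest, 1.0)   # no statistics -> x1
    return 1.0                                      # inattentive period: normal speed

# For example, if sample_max_speed maps every sampled point from 5.0 to 10.0 to
# 1.45 and attentive_periods == [(5.0, 30.0)], then review_speed(6.0, ...) is
# 1.45 while review_speed(2.0, ...) and review_speed(20.0, ...) are 1.0, which
# matches the conservative mode of the third embodiment below.
```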
Example two
The embodiment corresponds to the first embodiment, and is further expanded, and specifically, with respect to steps S1-S2 in the first embodiment, another specific example is provided:
required information of the following steps is gathered according to the hash value of the video.
2.1 multiple persons watch the same video, and whether each person watches attentively at each time point is determined.
2.2 If a time point is an attentive viewing time point, the current video playing speed and the rewind and pause counts at the current position are recorded and uploaded to the server; otherwise nothing is recorded (time spent viewing inattentively has no reference value).
2.3 The effective speed corresponding to the time point is calculated from each person's current playing speed at that time point and whether he or she paused or rewound.
If a pause or rewind exists, the effective speed is x1; otherwise it is the actual playing speed. Table 1 below is the statistical basis table of all users corresponding to the current attentive viewing time point in this embodiment.
Rewind count   Pause count   Playing speed   Paused or rewound   Effective speed
1              0             x1              Yes                 x1
1              0             x2              Yes                 x1
0              0             x2              No                  x2
0              0             x1.5            No                  x1.5
TABLE 1
2.4 The conservative maximum playing speed at the current attentive viewing time point is obtained by averaging the effective speeds of all these users.
2.5 When another person reviews the video and playback reaches a time point corresponding to the current attentive viewing time point, the video is played at the conservative maximum speed calculated above, so that the part this user has already watched attentively is accelerated.
EXAMPLE III
This embodiment corresponds to the first and second embodiments, and provides a specific application scenario:
the small pieces watch a section of teaching video, and the time of the whole section of video is 30 seconds.
Calibrating the positions of the irises of the user and the position of a watching screen to obtain the relative positions A1, B1, C1 and D1 of the irises of the eyes corresponding to four fixed point positions ABCD and a central point position E of a watching video; a1, B1, C1 and D1 are represented by coordinates (0,0), (10,0), (10,0), (10, 10); by these coordinates, a range of zones that the user is attentive to view is framed.
Judging the small piece as an inattentive video to watch in the time of playing the video at 00:00: 00-00: 00: 05; attentive viewing at 00:00:05 to 00:00: 30;
this video was viewed by 100 people, of which 00:00:05 to 00:00:10 (actually, 5 seconds of data are combined in this simplification, one record per second) and 10 people were attentively viewed at this point in time, with a conservative maximum speed of 1.45 (summing the effective speeds and dividing by 10 people).
(Table 2, shown as an image in the original publication: the per-user statistics from which the conservative maximum speed of 1.45 is calculated.)
At playback, in the conservative mode: 00:00:00 to 00:00:05 is played at x1 normal speed because it was not watched attentively; 00:00:05 to 00:00:10 has a conservative maximum speed of 1.45, so those 5 seconds are played at x1.45 (in about 3.4 seconds); the remaining 00:00:10 to 00:00:30 was watched attentively, but since no statistics were collected for it, it is played at x1.
In the fast mode: 00:00:00 to 00:00:05 is played at x1 normal speed because it was not watched attentively; 00:00:05 to 00:00:30 is all played at the maximum speed supported by the player, for example x3.
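For the numbers above, the total review time in each mode follows directly from segment length divided by playing speed; a small sketch of the arithmetic:

```python
def review_duration(segments):
    """Total review time for a list of (segment_length_in_seconds, playing_speed) pairs."""
    return sum(length / speed for length, speed in segments)

# Conservative mode: 5 s at x1, 5 s at x1.45, 20 s at x1
print(review_duration([(5, 1.0), (5, 1.45), (20, 1.0)]))  # ~28.4 s for the 30 s video
# Fast mode: 5 s at x1, then 25 s at the player's maximum x3
print(review_duration([(5, 1.0), (25, 3.0)]))             # ~13.3 s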
Example four
This embodiment corresponds to the first to third embodiments, and provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when being executed by a processor of smart glasses, can implement the steps included in the attention-based video variable-speed playing method according to any one of the first to third embodiments. The detailed steps are not repeated here, and refer to the descriptions of the first to third embodiments in detail.
In summary, the attention-based video variable-speed playing method and storage medium provided by the invention offer a brand-new video review mode that improves review efficiency, preserves the logical continuity and integrity of the video content during review, and thereby aids understanding; the effectiveness and efficiency of the user's review are improved, video review playback becomes more intelligent, and the user experience is greatly optimized.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (8)

1. The video variable-speed playing method based on the attention degree is characterized by comprising the following steps:
according to a preset judgment time interval, respectively judging whether the central position of the iris is positioned within a screen interval range while each of more than two sample users watches the same video content, and acquiring the sample attentive viewing time points of the video content corresponding to each sample user;
counting the maximum playing speed of each sample attentive viewing time point in the video content according to the user operation conditions and the playing speed at each sample attentive viewing time point;
acquiring the attentive viewing time points of the video content corresponding to the current user;
acquiring the attentive viewing time periods corresponding to the video content according to the attentive viewing time points;
controlling the playing speed at the time points within the attentive viewing time periods that correspond to sample attentive viewing time points according to the maximum playing speed;
wherein the judging, according to the preset judgment time interval, whether the iris center position of each of the more than two sample users is located within the screen interval range while watching the same video content, and the acquiring of the sample attentive viewing time points of each sample user for the video content, are specifically:
presetting display corner points and a central point of a screen;
respectively acquiring, through a camera of the smart glasses, the iris center positions corresponding to each of the more than two sample users when looking at each display corner point and the central point one by one, specifically by:
setting the screen to one pure color, and setting the display corner points and the central point, one by one, to another pure color;
while each display corner point or the central point is shown in the other pure color, respectively acquiring an eye image of the sample user through an infrared camera, and calculating the corresponding iris center position;
acquiring the attentively-viewed screen interval range corresponding to each sample user according to the iris center positions corresponding to that sample user;
judging, according to the preset judgment time interval, whether the iris center position of each sample user is located within that user's screen interval range while watching the same video content, and if so, recording the corresponding video playing time point as a sample attentive viewing time point of that sample user;
and acquiring all sample attentive viewing time points of the video content corresponding to each sample user.
2. The attention-based video variable-speed playing method according to claim 1, further comprising:
acquiring an inattentive watching time point of the video content corresponding to the current user and an inattentive watching time period corresponding to the inattentive watching time point;
playing the inattentive viewing time period at a normal multiple.
3. The attention-based video variable-speed playing method according to claim 1, wherein the calculating the maximum playing speed of each sample attentive viewing time point in the video content according to the user operation condition and the playing speed of each sample attentive viewing time point specifically comprises:
acquiring all sample users corresponding to a sample attentive viewing time point;
counting, according to the rewind count, the pause count and the playing speed, the effective speed of each sample user among all the sample users corresponding to the sample attentive viewing time point;
and calculating the average value of the effective speeds, and taking the average value as the maximum playing speed of the sample attentive viewing time point.
4. The attention-based video variable-speed playing method according to claim 1, wherein the obtaining of the attentive viewing time point of the video content corresponding to the current user specifically comprises:
respectively acquiring iris center positions corresponding to the current user when the user watches each display corner point and the center point through a camera of the intelligent glasses;
acquiring the attentively-viewed screen interval range of the current user according to the iris center positions;
judging whether the current iris center position of the current user is located in the range of the screen interval of the current user according to the preset judgment time interval; if so, recording the corresponding video playing time point as the attentive watching time point corresponding to the current user.
5. The attention-based video variable-speed playing method according to claim 1, wherein the obtaining of the attentive viewing time period corresponding to the video content according to the attentive viewing time point specifically comprises:
and combining the continuous attentive watching time points to obtain at least one attentive watching time period corresponding to the video content.
6. The attention-based video variable-speed playing method according to claim 1, wherein the controlling of the playing speed at the time point corresponding to the sample attentive viewing time point in the attentive viewing time period according to the maximum playing speed includes:
acquiring a sample attentive viewing time point contained in an attentive viewing time period;
and when the attentive watching time period is played back, controlling the playing speed of the corresponding time point according to the maximum playing speed corresponding to each contained sample attentive watching time point.
7. The attention-based video variable-speed playing method according to claim 1, wherein the display corner points correspond to four corners of a maximum inscribed rectangle of the screen.
8. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor of smart glasses, is capable of implementing the steps included in the attention-based video variable-speed playing method according to any one of claims 1 to 7.
CN201910500634.6A 2019-06-11 2019-06-11 Attention-based video variable-speed playing method and storage medium Active CN110337022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910500634.6A CN110337022B (en) 2019-06-11 2019-06-11 Attention-based video variable-speed playing method and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910500634.6A CN110337022B (en) 2019-06-11 2019-06-11 Attention-based video variable-speed playing method and storage medium

Publications (2)

Publication Number Publication Date
CN110337022A CN110337022A (en) 2019-10-15
CN110337022B true CN110337022B (en) 2022-04-12

Family

ID=68140949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910500634.6A Active CN110337022B (en) 2019-06-11 2019-06-11 Attention-based video variable-speed playing method and storage medium

Country Status (1)

Country Link
CN (1) CN110337022B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111193938B (en) * 2020-01-14 2021-07-13 腾讯科技(深圳)有限公司 Video data processing method, device and computer readable storage medium
CN114390347B (en) * 2021-12-06 2024-04-26 上海工程技术大学 Control method of wide-screen media playing speed control system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1533784A2 (en) * 2003-11-20 2005-05-25 Sony Corporation Playback mode control device and method
CN101808229A (en) * 2009-02-16 2010-08-18 杭州恒生数字设备科技有限公司 Video stream rapid-playback system based on feature tag
CN104735385A (en) * 2015-03-31 2015-06-24 小米科技有限责任公司 Playing control method and device and electronic equipment
CN107247733A (en) * 2017-05-05 2017-10-13 中广热点云科技有限公司 A kind of video segment viewing temperature analysis method and system
WO2017208121A1 (en) * 2016-06-01 2017-12-07 Worm App Ltd Slow motion video playback method for computing devices with touch interfaces
CN107566898A (en) * 2017-09-18 2018-01-09 广东小天才科技有限公司 Video playing control method and device and terminal equipment
CN107888948A (en) * 2017-11-07 2018-04-06 北京小米移动软件有限公司 Determine method and device, the electronic equipment of video file broadcasting speed
CN108319371A (en) * 2018-02-11 2018-07-24 广东欧珀移动通信有限公司 Control method for playing back and Related product
CN109068178A (en) * 2018-09-11 2018-12-21 广州智诺科技有限公司 A kind of video broadcasting method and player

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5354401B2 (en) * 2011-09-06 2013-11-27 カシオ計算機株式会社 Movie playback device, movie playback method and program
US9325958B2 (en) * 2012-08-16 2016-04-26 Eric Blayney Broadcasting and detection system and method
CN106851405A (en) * 2016-12-13 2017-06-13 合网络技术(北京)有限公司 Video broadcasting method and device based on oblique viewing angle detection
CN108235123B (en) * 2016-12-15 2020-09-22 阿里巴巴(中国)有限公司 Video playing method and device
CN107463255A (en) * 2017-07-31 2017-12-12 努比亚技术有限公司 A kind of video broadcasting method, terminal and computer-readable recording medium
CN107484021A (en) * 2017-09-27 2017-12-15 广东小天才科技有限公司 Video playing method, system and terminal equipment
CN108184168B (en) * 2018-01-11 2019-12-27 广东小天才科技有限公司 Playing control method of terminal equipment and terminal equipment

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1533784A2 (en) * 2003-11-20 2005-05-25 Sony Corporation Playback mode control device and method
CN101808229A (en) * 2009-02-16 2010-08-18 杭州恒生数字设备科技有限公司 Video stream rapid-playback system based on feature tag
CN104735385A (en) * 2015-03-31 2015-06-24 小米科技有限责任公司 Playing control method and device and electronic equipment
WO2017208121A1 (en) * 2016-06-01 2017-12-07 Worm App Ltd Slow motion video playback method for computing devices with touch interfaces
CN107247733A (en) * 2017-05-05 2017-10-13 中广热点云科技有限公司 A kind of video segment viewing temperature analysis method and system
CN107566898A (en) * 2017-09-18 2018-01-09 广东小天才科技有限公司 Video playing control method and device and terminal equipment
CN107888948A (en) * 2017-11-07 2018-04-06 北京小米移动软件有限公司 Determine method and device, the electronic equipment of video file broadcasting speed
CN108319371A (en) * 2018-02-11 2018-07-24 广东欧珀移动通信有限公司 Control method for playing back and Related product
CN109068178A (en) * 2018-09-11 2018-12-21 广州智诺科技有限公司 A kind of video broadcasting method and player

Also Published As

Publication number Publication date
CN110337022A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN109783178B (en) Color adjusting method, device, equipment and medium for interface component
Goldstein et al. Where people look when watching movies: Do all viewers look at the same place?
US20180041796A1 (en) Method and device for displaying information on video image
Jain et al. Gaze-driven video re-editing
Osberger et al. Automatic detection of regions of interest in complex video sequences
CN110337022B (en) Attention-based video variable-speed playing method and storage medium
Greene et al. Under high perceptual load, observers look but do not see
WO2006091825A2 (en) System and method for quantifying and mapping visual salience
US9852329B2 (en) Calculation of a characteristic of a hotspot in an event
Röhrbein et al. How does image noise affect actual and predicted human gaze allocation in assessing image quality?
CN109803100A (en) A kind of ghost method that adaptively disappears
CN110324641B (en) Method and device for keeping interest target moment display in panoramic video
CN110337032A (en) Video broadcasting method, storage medium based on attention rate
Alers et al. Studying the effect of optimizing image quality in salient regions at the expense of background content
US10209523B1 (en) Apparatus, system, and method for blur reduction for head-mounted displays
Marchant et al. Are you seeing what I'm seeing? An eye-tracking evaluation of dynamic scenes
CN110324694B (en) Video playing method and storage medium
CN109788311B (en) Character replacement method, electronic device, and storage medium
CN111857336B (en) Head-mounted device, rendering method thereof, and storage medium
WO2023069047A1 (en) A face recognition system to identify the person on the screen
JP4815949B2 (en) Multi-display device and display device
Nemoto et al. Impact of ultra high definition on visual attention
CN110286753B (en) Video attention judging method and storage medium
CN111866584A (en) Automatic video content replacement system
Smith et al. Eye movements and event segmentation: Eye movements reveal age-related differences in event model updating.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant