CN110933425A - Data processing method and device - Google Patents


Info

Publication number
CN110933425A
CN110933425A
Authority
CN
China
Prior art keywords
current page
reference frame
operation message
data processing
display state
Prior art date
Legal status
Pending
Application number
CN201911117757.8A
Other languages
Chinese (zh)
Inventor
范志刚 (Fan Zhigang)
高鹏 (Gao Peng)
张晓莉 (Zhang Xiaoli)
Current Assignee
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd filed Critical Xian Wanxiang Electronics Technology Co Ltd
Priority to CN201911117757.8A priority Critical patent/CN110933425A/en
Publication of CN110933425A publication Critical patent/CN110933425A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/142 Detection of scene cut or scene change
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a data processing method and apparatus in the field of computer data processing. The method comprises: acquiring an operation message, the operation message indicating an operation on the current page; and determining, according to the operation message and a preset rule, whether scene switching occurs on the current page. The method and apparatus address the slow scene-switch detection of the background art.

Description

Data processing method and device
Technical Field
The present disclosure relates to the field of computer data processing, and in particular, to a data processing method and apparatus.
Background
In the existing encoding and decoding process for computer pictures, shown in fig. 1, each frame is encoded with reference to the previous frame or the most recent preceding I frame. When a user switches scenes, the current frame is regarded as a scene switch: the previous frame (or I frame) can no longer serve as a reference for the current frame, so the current frame is encoded as a new I frame.
Disclosure of Invention
The embodiments of the present disclosure provide a data processing method and apparatus that address the slow scene-switch detection of the background art. The technical scheme is as follows:
According to a first aspect of embodiments of the present disclosure, there is provided a data processing method, including:
acquiring an operation message, the operation message indicating an operation on the current page;
and determining, according to the operation message and a preset rule, whether scene switching occurs on the current page.
In one embodiment, before acquiring the operation message, the method further includes: detecting the display state of the current page. The page display state comprises at least one of: a multi-tab display state, a multi-window display state, a background-desktop display state, or a mixed tab-and-window display state.
In one embodiment, determining whether scene switching occurs on the current page according to the operation message and a preset rule includes:
judging whether the operation indicated by the operation message is a left mouse click or a touch click on the taskbar;
and if so, determining that scene switching occurs on the current page.
In one embodiment, the display state of the current page is the multi-tab display state, and the method further includes:
if the operation indicated by the operation message is not a left mouse click or a touch click on the taskbar, judging whether it is a left mouse click or a touch click on the tab bar;
and if so, determining that scene switching occurs on the current page.
In one embodiment, the display state of the current page is the multi-window display state, and the method further includes:
if the operation indicated by the operation message is not a left mouse click or a touch click on the taskbar, judging whether it is a left mouse click or a touch click on a window display area; and if so, determining that scene switching occurs on the current page.
In one embodiment, the display state of the current page is the mixed tab-and-window display state, and the method further includes:
if the operation indicated by the operation message is not a left mouse click or a touch click on the taskbar, judging whether it is a left mouse click or a touch click on a window display area or on a tab position in the tab bar;
and if so, determining that scene switching occurs on the current page.
In one embodiment, the display state of the current page is the background-desktop display state, and the method further includes:
if the operation indicated by the operation message is neither a left mouse click nor a touch click on the taskbar, determining that no scene switching occurs on the current page.
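The per-state rules above can be sketched as a small lookup of which click targets trigger scene switching in each display state. This is an illustrative sketch, not the patent's implementation; the state names and the `ClickTarget` values are assumed labels.

```python
from enum import Enum, auto

class ClickTarget(Enum):
    TASKBAR = auto()       # taskbar (e.g. a program icon in the taskbar)
    TAB_BAR = auto()       # a tab position in the tab bar
    WINDOW_AREA = auto()   # a window display area
    OTHER = auto()

# Which left-click / touch targets count as scene switching, per display state.
SWITCH_TARGETS = {
    "multi_tab": {ClickTarget.TASKBAR, ClickTarget.TAB_BAR},
    "multi_window": {ClickTarget.TASKBAR, ClickTarget.WINDOW_AREA},
    "tab_window_mixed": {ClickTarget.TASKBAR, ClickTarget.TAB_BAR,
                         ClickTarget.WINDOW_AREA},
    "background_desktop": {ClickTarget.TASKBAR},  # anything else: no switch
}

def scene_switch(display_state: str, target: ClickTarget) -> bool:
    """True if a left click or touch on `target` is judged a scene switch."""
    return target in SWITCH_TARGETS[display_state]
```

Note that a taskbar click triggers scene switching in every state, which matches the flow of always testing the taskbar first.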
In one embodiment, the method further comprises:
and when the scene switching is determined to occur currently, selecting a reference frame meeting a preset condition according to a preset rule to perform interframe coding on the current page.
In one embodiment, selecting a reference frame meeting a preset condition according to a preset rule for inter-frame coding of the current page includes:
acquiring a reference frame queue, in which a plurality of original reference frames are stored;
counting, for each original reference frame in the queue, the number of macroblocks of the current image frame that are unchanged relative to that reference frame;
selecting a target reference frame from the reference frame queue according to the statistical result;
and inter-frame coding the current image frame according to the target reference frame.
In one embodiment, selecting a target reference frame from the reference frame queue according to the statistical result comprises:
selecting, as the target reference frame, the original reference frame with the largest number of unchanged macroblocks indicated by the statistical result.
In one embodiment, selecting a target reference frame from the reference frame queue according to the statistical result comprises:
identifying the original reference frame with the largest number of unchanged macroblocks as a potential target reference frame;
and judging whether that number is greater than a preset threshold, and selecting the potential target reference frame as the target reference frame when it is.
In one embodiment, counting the number of macroblocks of the current image frame that are unchanged relative to the original reference frames in the reference frame queue comprises:
extracting Y components of some pixel points from corresponding macroblocks of the current image frame and of each original reference frame in the queue, and generating a Y-component thumbnail for the current image frame and for each frame in the queue, the current image frame and the original reference frames being stored as YUV data;
and comparing the Y-component thumbnail of the current frame with that of each frame in the queue, macroblock by macroblock, to calculate the number of unchanged macroblocks of the current frame relative to each original reference frame.
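A minimal sketch of the reference-frame selection just described, assuming each frame has already been reduced to a sequence of per-macroblock values (in the patent these would be the Y-component thumbnail blocks): count unchanged macroblocks against every queued reference frame, then take the best match, optionally requiring the count to exceed a threshold.

```python
def count_unchanged(current, reference):
    """Number of macroblock positions identical between two frames."""
    return sum(1 for a, b in zip(current, reference) if a == b)

def select_reference(current, queue, threshold=None):
    """Pick the queue frame sharing the most unchanged macroblocks.

    Returns the index of the chosen frame, or None if `threshold` is set
    and no candidate exceeds it (in which case the encoder would presumably
    fall back to I-frame coding -- an assumption, not stated in the text)."""
    counts = [count_unchanged(current, ref) for ref in queue]
    best = max(range(len(queue)), key=counts.__getitem__)
    if threshold is not None and counts[best] <= threshold:
        return None
    return best
```

The frame representation here is a placeholder; any per-macroblock fingerprint (hash, sampled luma tuple) works the same way.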
According to a second aspect of the embodiments of the present disclosure, there is provided a data processing apparatus including:
the acquisition module, used for acquiring an operation message indicating an operation on the current page;
and the determining module, used for determining, according to the operation message and a preset rule, whether scene switching occurs on the current page.
In one embodiment, the apparatus further comprises: a detection module, used for detecting the display state of the current page before the operation message is acquired. The page display state comprises at least one of: a multi-tab display state, a multi-window display state, a background-desktop display state, or a mixed tab-and-window display state.
In one embodiment, the determining module includes:
the first judgment submodule, used for judging whether the operation indicated by the operation message is a left mouse click or a touch click on the taskbar;
and the first determining submodule, used for determining that scene switching occurs on the current page if it is.
In one embodiment, the display state of the current page is the multi-tab display state, and the apparatus further includes:
the second judgment submodule, used for judging, if the operation indicated by the operation message is not a left mouse click or a touch click on the taskbar, whether it is a left mouse click or a touch click on the tab bar;
and the second determining submodule, used for determining that scene switching occurs on the current page if it is.
In one embodiment, the display state of the current page is the multi-window display state, and the determining module includes:
the third judgment submodule, used for judging, if the operation indicated by the operation message is not a left mouse click or a touch click on the taskbar, whether it is a left mouse click or a touch click on a window display area;
and the third determining submodule, used for determining that scene switching occurs on the current page if it is.
In one embodiment, the display state of the current page is the mixed tab-and-window display state, and the determining module includes:
the fourth judgment submodule, used for judging, if the operation indicated by the operation message is not a left mouse click or a touch click on the taskbar, whether it is a left mouse click or a touch click on a window display area or on a tab position in the tab bar;
and the fourth determining submodule, used for determining that scene switching occurs on the current page if it is.
In one embodiment, the display state of the current page is the background-desktop display state, and the determining module is further configured to: if the operation indicated by the operation message is neither a left mouse click nor a touch click on the taskbar, determine that no scene switching occurs on the current page.
In one embodiment, the data processing apparatus further includes:
and the coding module is used for selecting a reference frame meeting preset conditions according to a preset rule to perform interframe coding on the current page when the scene switching is determined to occur currently.
In one embodiment, the encoding module comprises:
the queue submodule, used for acquiring a reference frame queue in which a plurality of original reference frames are stored;
the statistics submodule, used for counting, for each original reference frame in the queue, the number of macroblocks of the current image frame that are unchanged relative to that reference frame;
the selection submodule, used for selecting a target reference frame from the reference frame queue according to the statistical result;
and the coding submodule, used for inter-frame coding the current image frame according to the target reference frame.
In one embodiment, the selection submodule is specifically configured to: select, as the target reference frame, the original reference frame with the largest number of unchanged macroblocks indicated by the statistical result.
In another embodiment, the selection submodule is specifically configured to: identify the original reference frame with the largest number of unchanged macroblocks as a potential target reference frame; and judge whether that number is greater than a preset threshold, selecting the potential target reference frame as the target reference frame when it is.
In one embodiment, the statistics submodule is specifically configured to:
extract Y components of some pixel points from corresponding macroblocks of the current image frame and of each original reference frame in the queue, and generate a Y-component thumbnail for the current image frame and for each frame in the queue, the current image frame and the original reference frames being stored as YUV data;
and compare the Y-component thumbnail of the current frame with that of each frame in the queue, macroblock by macroblock, to calculate the number of unchanged macroblocks of the current frame relative to each original reference frame.
In the background art, scene switching is detected by comparing two adjacent frames pixel by pixel or macroblock by macroblock, and this comparison is slow. In an office scene, the computer picture mostly changes in response to user operations, so user operations have some power to predict whether the current frame picture is a scene switch; compared with image comparison, judging from user operations is more efficient.
The technical effects of the disclosure are as follows:
the method provides scene-switch judgment flows for the different page display states, greatly improving the speed and accuracy of scene-switch judgment;
when a computer user frequently switches among several scenes, encoding against multiple reference frames reduces the frequency of I-frame encoding and therefore reduces the code stream;
and when selecting the preferred reference frame, comparing thumbnails greatly reduces the time needed to pick the optimal reference frame from the reference frame queue.
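The thumbnail comparison can be illustrated by sampling a few luma (Y) pixels per macroblock, so each frame is compared through a much smaller proxy. A sketch under assumptions: the 16x16 macroblock size and the sampling stride are not specified in the text, which only says that Y components of some pixel points are extracted.

```python
MB = 16  # assumed macroblock size

def y_thumbnail(y_plane, width, height, stride=8):
    """Sample every `stride`-th luma pixel inside each macroblock.

    `y_plane` is a row-major flat sequence of luma values for one frame.
    Returns one tuple of sampled values per macroblock, in raster order,
    so two frames' thumbnails can be compared macroblock by macroblock."""
    thumbs = []
    for by in range(0, height, MB):
        for bx in range(0, width, MB):
            samples = tuple(
                y_plane[(by + dy) * width + bx + dx]
                for dy in range(0, MB, stride)
                for dx in range(0, MB, stride)
            )
            thumbs.append(samples)
    return thumbs
```

With `stride=8`, each 16x16 macroblock (256 pixels) is represented by 4 samples, a 64x reduction in comparison work.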
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a data processing method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart for determining whether a scene change occurs according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a data processing method provided by an embodiment of the present disclosure;
fig. 4 is a flowchart for determining whether a scene change occurs according to an embodiment of the present disclosure;
fig. 5 is a flowchart for determining whether a scene change occurs according to an embodiment of the present disclosure;
fig. 6 is a flowchart for determining whether a scene change occurs according to an embodiment of the present disclosure;
fig. 7 is a flowchart for determining whether a scene change occurs according to an embodiment of the present disclosure;
fig. 8 is a flowchart of a data processing method provided by an embodiment of the present disclosure;
fig. 9 is a flowchart of inter-frame coding provided by an embodiment of the present disclosure;
fig. 10 is a flowchart of a data processing method provided by an embodiment of the present disclosure;
FIG. 11 is a flowchart illustrating a comparison between a current frame and an original reference frame in a reference frame queue according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram of pixel point extraction according to an embodiment of the disclosure;
FIG. 13 is a diagram of a data processing device architecture provided by an embodiment of the present disclosure;
FIG. 14 is a diagram of a data processing device architecture provided by an embodiment of the present disclosure;
FIG. 15 is a diagram of a data processing device architecture provided by an embodiment of the present disclosure;
FIG. 16 is a diagram of a data processing device architecture provided by an embodiment of the present disclosure;
FIG. 17 is a diagram of a data processing device architecture provided by an embodiment of the present disclosure;
FIG. 18 is a diagram of a data processing device architecture provided by an embodiment of the present disclosure;
FIG. 19 is a diagram of a data processing device architecture provided by an embodiment of the present disclosure;
fig. 20 is a diagram of a data processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
An embodiment of the present disclosure provides a data processing method, as shown in fig. 1, the data processing method includes the following steps:
step 101, obtaining an operation message; the operation message is used for indicating the operation on the current page;
The operation message may be a keyboard/mouse event message, which comprises at least the keyboard/mouse event type and the position parameter of the mouse; alternatively, the operation message may describe the user's operation on a touch screen.
And step 102, determining whether the scene switching occurs in the current page according to the operation message and a preset rule.
Specifically, this step comprises:
judging whether the operation indicated by the operation message is a left mouse click or a touch click on the taskbar;
and if so, determining that scene switching occurs on the current page.
If the operation message is a keyboard/mouse event message and the event type indicates a left-button click, whether scene switching occurs on the current page is determined from the position parameter of the mouse.
Optionally, before step 101, the method may further include:
step 100, detecting the display state of the current page;
the present disclosure applies to desktop virtualization scenarios. In a desktop virtualization scene, all desktop images (computer images) received by a client are processing results of a server, that is, the client only needs to send a keyboard and mouse message to the server, and the server executes the message locally and then returns the generated images to the client, so that a user can achieve the same processing effect as local operation.
Wherein the page display state comprises at least one of: a multi-tab display state, a multi-window display state, a background desktop display state, or a tab and window hybrid display state.
The multi-tab display state is: the same software displays multiple pages or documents, such as a multi-page browser or a multi-document office mode. In this state each page or document generates a tab, which generally displays the name of the current page or document and lets the user activate it (switch it from hidden to displayed) or close it;
the multi-window display state is: two or more software windows are displayed on the desktop at the same time;
the background desktop display state is: and not starting any software or minimizing all the software, wherein the desktop display image is the background of the desktop of the user computer.
The window and label mixed display state is as follows: one or more tabs and one or more windows are displayed simultaneously on the desktop.
The user sends a keyboard/mouse message or a touch operation message to the server. The keyboard/mouse message comprises the event type and coordinate parameters: the event type determines whether a left-button click occurred, and the coordinates locate where it occurred. The touch operation message comprises the position of the touch operation. Since the server can locate the windows and/or tabs of the currently running programs, the taskbar, and the application icons in the taskbar, it can determine the specific object targeted by the left click or touch and thereby analyze whether scene switching occurs.
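The server-side judgment described above can be sketched as extracting the event type and coordinates from the message and hit-testing the coordinates against the regions the server already knows. The message fields and region names here are illustrative assumptions, not the patent's actual data format.

```python
def in_rect(x, y, rect):
    """rect = (left, top, right, bottom); right/bottom are exclusive."""
    l, t, r, b = rect
    return l <= x < r and t <= y < b

def classify_click(msg, regions):
    """Map a left-click or touch message to the region it lands in.

    `msg` is a dict like {"type": "left_click", "x": ..., "y": ...};
    `regions` maps names ("taskbar", "tab_bar", ...) to rectangles.
    Returns the region name, or None if no relevant region was hit or
    the event is not a left click / touch."""
    if msg.get("type") not in ("left_click", "touch"):
        return None  # other events never trigger scene switching here
    for name, rect in regions.items():
        if in_rect(msg["x"], msg["y"], rect):
            return name
    return None
```

The returned region name can then be checked against the per-display-state rules to decide whether scene switching occurs.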
In one embodiment, as shown in fig. 2, when the type of the keyboard and mouse event indicates that a left mouse button is clicked, determining whether a scene change occurs on a current page according to a position parameter of the mouse includes:
step 1021, judging whether the mouse click position is in the taskbar;
step 1022, if the mouse click position is in the taskbar, judging whether the click position is a program icon position;
and step 1023, if the click position is a program icon position, determining that scene switching occurs on the current page.
Fig. 3 is a data processing method provided by an embodiment of the present disclosure, where the data processing method shown in fig. 3 includes:
step 301, detecting the display state of the current page;
step 302, obtaining an operation message; the operation message is used for indicating the operation on the current page;
step 303, judging whether the operation of the operation message on the current page is a left mouse button click taskbar or a touch click taskbar;
If the operation indicated by the operation message is a left mouse click or a touch click on the taskbar, execute step 306.
Step 304, judging the display state of the current page;
if the display state of the current page is the multi-tab display state, execute step 305; if it is the multi-window display state, execute step 307; if it is the mixed tab-and-window display state, execute step 308; if it is the background-desktop display state, execute step 309.
step 305, judging whether the operation indicated by the operation message is a left mouse click or a touch click on the tab bar;
if yes, execute step 306;
step 306, determining that scene switching occurs on the current page.
Step 307, judging whether the operation indicated by the operation message is a left mouse click or a touch click on a window display area;
if yes, execute step 306.
Step 308, judging whether the operation indicated by the operation message is a left mouse click or a touch click on a window display area or on a tab position in the tab bar;
if yes, execute step 306.
In the following, the scene switching judgment flows in different user page display states are respectively introduced.
1) Multi-tab display state
Referring to fig. 4, in this state, whether a scene change occurs is determined by:
step 201a, continuously detecting mouse actions;
step 202a, judging whether a left mouse button clicking action occurs or not;
if yes, turning to step 203a, if not, determining that scene switching does not occur currently, and simultaneously turning to step 201a to continue to detect mouse actions;
step 203a, judging whether the mouse clicking position is in the taskbar; if yes, go to step 204 a; if not, go to step 206 a;
step 204a, further judging whether the unit position of the mouse is the position of the program icon, if so, turning to step 207a to determine that the scene switching currently occurs; if not, determining that the scene switching does not occur currently, and meanwhile, continuing to detect the mouse action in step 201 a;
step 205a, judging whether the mouse clicking position is in a label bar;
if yes, go to step 206 a; if not, determining that the scene switching does not occur currently, and meanwhile, turning to step 201a to continue to detect the mouse action;
step 206a, judging whether the mouse clicking position is the label position;
if yes, go to step 207 a; if not, determining that the scene switching does not occur currently, and simultaneously turning to step 201a to continue detecting the mouse action.
And step 207a, determining that the scene switching currently occurs.
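The flow above (steps 201a-207a) can be sketched as follows. The function name and the rectangle-based region helpers are illustrative assumptions; a real implementation would obtain the taskbar, tab-bar, icon and tab geometry from the windowing system.

```python
# Sketch of the fig. 4 multi-tab scene-switch check (steps 201a-207a).
# All region arguments are hypothetical (x, y, w, h) rectangles.

def is_scene_switch_multi_tab(click_pos, taskbar, program_icons, tab_bar, tabs):
    """Return True if a left click at click_pos implies a scene switch.

    program_icons and tabs are lists of rectangles inside the taskbar
    and tab bar respectively.
    """
    def inside(pos, rect):
        x, y = pos
        rx, ry, rw, rh = rect
        return rx <= x < rx + rw and ry <= y < ry + rh

    if inside(click_pos, taskbar):                      # step 203a
        # step 204a: switch only if a program icon was hit
        return any(inside(click_pos, icon) for icon in program_icons)
    if inside(click_pos, tab_bar):                      # step 205a
        # step 206a: switch only if a tab label was hit
        return any(inside(click_pos, tab) for tab in tabs)
    return False                                        # no scene switch
```

Clicks that land in neither region fall through to `False`, matching the "return to step 201a" branches of the flow.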
2) Multi-window display state
Referring to fig. 5, in this state, whether a scene switch occurs is determined as follows:
Step 201b, continuously detecting mouse actions;
Step 202b, judging whether a left mouse click occurs;
if yes, go to step 203b; if not, determine that no scene switching currently occurs and return to step 201b to continue detecting mouse actions;
Step 203b, judging whether the click position is in the taskbar;
if yes, go to step 204b; if not, go to step 205b;
Step 204b, further judging whether the click position is on a program icon; if yes, go to step 206b; if not, determine that no scene switching currently occurs and return to step 201b;
Step 205b, judging whether the click position is in a window display area;
if yes, go to step 206b; if not, determine that no scene switching currently occurs and return to step 201b;
Step 206b, determining that scene switching currently occurs.
3) Background desktop display state
Referring to fig. 6, in this state, whether a scene switch occurs is determined as follows:
Step 201c, continuously detecting mouse actions;
Step 202c, judging whether a left mouse click occurs; if yes, go to step 203c; if not, determine that no scene switching currently occurs and return to step 201c to continue detecting mouse actions;
Step 203c, judging whether the click position is in the taskbar; if yes, go to step 204c; if not, determine that no scene switching currently occurs and return to step 201c;
Step 204c, further judging whether the click position is on a program icon; if yes, go to step 205c; if not, determine that no scene switching currently occurs and return to step 201c;
Step 205c, determining that scene switching currently occurs.
4) Tab and window hybrid display state
Referring to fig. 7, in this state, whether a scene switch occurs is determined as follows:
Step 201d, continuously detecting mouse actions;
Step 202d, judging whether a left mouse click occurs;
if yes, go to step 203d; if not, determine that no scene switching currently occurs and return to step 201d to continue detecting mouse actions;
Step 203d, judging whether the click position is in the taskbar; if yes, go to step 204d; if not, go to step 205d;
Step 204d, further judging whether the click position is on a program icon;
if yes, go to step 206d; if not, determine that no scene switching currently occurs and return to step 201d;
Step 205d, judging whether the click position is on a tab in the tab bar or in any window display area;
if yes, go to step 206d; if not, determine that no scene switching currently occurs and return to step 201d;
Step 206d, determining that scene switching currently occurs.
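The four flows of figs. 4-7 differ only in which click targets count as a scene switch, so they can be summarized in one table-driven sketch. The state and hit-target names below are illustrative assumptions, and the sketch assumes the click has already been classified against the screen layout.

```python
# Unified sketch of the per-state checks in figs. 4-7.  The display
# states mirror the disclosure; the hit-target labels are hypothetical.

def scene_switch_occurred(state, hit):
    """state: 'multi_tab', 'multi_window', 'desktop', or 'mixed'.
    hit: the UI element the left click landed on, e.g. 'program_icon',
    'taskbar_other', 'tab', 'tab_bar_other', 'window_area', 'other'.
    """
    switch_hits = {
        'multi_tab':    {'program_icon', 'tab'},
        'multi_window': {'program_icon', 'window_area'},
        'desktop':      {'program_icon'},
        'mixed':        {'program_icon', 'tab', 'window_area'},
    }
    return hit in switch_hits[state]
```

A taskbar program icon triggers a switch in every state; tabs and window areas count only in the states that display them.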
An embodiment of the present disclosure provides a data processing method, as shown in fig. 8, the data processing method includes the following steps:
Step 801, acquiring a keyboard-and-mouse event message; the keyboard-and-mouse event message comprises at least the event type and the position parameter of the mouse;
Step 802, when the event type indicates a left mouse click, determining, according to the position parameter of the mouse, whether scene switching occurs on the current page;
Step 803, when it is determined that scene switching currently occurs, selecting a reference frame meeting a preset condition according to a preset rule to perform inter-frame coding on the current page.
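A minimal sketch of the fig. 8 flow (steps 801-803), assuming the event message has already been parsed into a type and a mouse position; the handler and callback names are assumptions, not part of the disclosure.

```python
# Sketch of steps 801-803: parse a keyboard/mouse event, decide whether a
# scene switch occurred from the mouse position, then pick how to encode.

LEFT_CLICK = "left_click"   # hypothetical event-type tag

def handle_event(event, detect_switch, encode_with_best_ref, encode_normal):
    """event: dict with 'type' and 'pos'.  detect_switch(pos) -> bool.
    Returns the result of whichever encoding routine was chosen."""
    if event.get("type") == LEFT_CLICK and detect_switch(event["pos"]):
        return encode_with_best_ref()   # step 803: reference-frame selection
    return encode_normal()              # ordinary coding path
```

The two callbacks stand in for the "select best reference frame" path and the default previous-frame coding path described later.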
In one embodiment, as shown in fig. 9, selecting a reference frame meeting a preset condition according to a preset rule to perform inter-frame coding on a current page includes:
step 8031, acquiring a reference frame queue; a plurality of original reference frames are stored in the reference frame queue;
Step 8032, respectively counting the number of unchanged macroblocks of the current image frame relative to each original reference frame in the reference frame queue;
step 8033, according to the statistical result, selecting a target reference frame from the reference frame queue;
Specifically, the original reference frame with the largest number of unchanged macroblocks indicated by the statistical result is selected as the target reference frame.
In one embodiment, the original reference frame with the largest number of unchanged macroblocks indicated by the statistical result is first identified as a potential target reference frame;
it is then judged whether its number of unchanged macroblocks is greater than a preset threshold, and the potential target reference frame is selected as the target reference frame only when the number is greater than the preset threshold.
Step 8034, inter-coding the current image frame according to the target reference frame.
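Steps 8031-8033, including the optional threshold variant, can be sketched as follows; `count_unchanged` stands in for the thumbnail comparison described later, and the function and parameter names are illustrative assumptions.

```python
# Sketch of steps 8031-8033: score every queued reference frame against
# the current frame, then pick the best-scoring one.

def select_target_reference_frame(current, ref_queue, count_unchanged,
                                  threshold=None):
    """Count the unchanged macroblocks of `current` against each original
    reference frame and pick the frame with the largest count.  With the
    optional `threshold`, the best candidate is accepted only when its
    count exceeds the threshold; otherwise None is returned and the
    caller falls back to I-frame coding."""
    counts = [count_unchanged(current, ref) for ref in ref_queue]  # 8032
    best = max(range(len(ref_queue)), key=counts.__getitem__)      # 8033
    if threshold is not None and counts[best] <= threshold:
        return None
    return ref_queue[best]
```

Passing `threshold=None` gives the unconditional variant; a numeric threshold gives the "potential target reference frame" variant.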
An embodiment of the present disclosure provides a data processing method, as shown in fig. 10, the data processing method includes the following steps:
Whether scene switching occurs is judged according to the user operation. If no scene switching occurs, the previous frame is directly used as the reference for coding; macroblock identification is then performed, the reference frame queue is updated, coding and code stream aggregation are carried out according to the identified macroblock types, and the flow ends.
If a scene change occurs, the step of comparing the current frame with the original reference frame in the reference frame queue is performed, as shown in fig. 11, specifically as follows:
S1, comparing the current frame with each reference frame in the reference frame queue, and counting the number of unchanged macroblocks for each comparison;
It should be noted that each image frame is stored as planar-mode YUV data, i.e. the Y, U and V components are stored in three separate arrays.
Since comparing the current frame with the reference frame queue pixel by pixel is time-consuming, and the time complexity depends strongly on the queue length, the number of frames in the reference frame queue should be chosen to suit the scene (this disclosure takes 8 as an example).
As before, only the Y component of each pixel is compared.
Because the current frame must be compared with every frame in the reference frame queue, the present disclosure downsamples the Y components of the current frame and of each frame in the reference frame queue to speed up the comparison. Downsampling here means extracting only a fixed number of pixels from each macroblock; for example, from a 16x16 macroblock only a 4x4 grid, i.e. the Y components of 16 pixels, is extracted, forming a new 4x4 thumbnail. In this way a Y-component thumbnail can be generated for the current frame and for each frame in the reference frame queue.
Specifically, the thumbnail generation rule must be the same for every frame image; that is, macroblocks at corresponding positions in different frames must use the same pixel extraction rule, although within a single frame different macroblocks may in principle use different rules. For example, the rule may be: take one pixel out of every 4 in each row and column. For simplicity, the same extraction rule can be applied to every macroblock, for example as shown in fig. 12:
fig. 12 is only an example, and when actually extracting, a specific extraction rule may be set as needed.
After the thumbnails are generated, the Y-component thumbnail of the current frame is compared, macroblock by macroblock, with the Y-component thumbnail of each frame in the reference frame queue: macroblocks at corresponding positions in the two thumbnails are compared, and the number of identical (i.e. unchanged) macroblocks is counted.
When two macroblocks are compared, the Y components of the pixels at corresponding positions are compared; the macroblock is determined to be unchanged only if all of them are identical, and is otherwise a changed macroblock.
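The downsampling and comparison described above can be sketched as follows, assuming the example rule of keeping every 4th pixel per row and column (so a 16x16 macroblock yields a 4x4 Y thumbnail); the helper names are assumptions.

```python
# Sketch of the Y-component downsampling and macroblock-wise comparison.
# Y planes are plain 2-D lists of luma values.

MB, SUB = 16, 4   # macroblock size and sampling stride (16x16 -> 4x4)

def y_thumbnail(y_plane):
    """Keep every 4th pixel per row and column (fig. 12's example rule)."""
    return [row[::SUB] for row in y_plane[::SUB]]

def count_unchanged_macroblocks(y_cur, y_ref):
    """Compare two Y thumbnails macroblock by macroblock; a macroblock is
    'unchanged' only if every sampled Y value matches."""
    tc, tr = y_thumbnail(y_cur), y_thumbnail(y_ref)
    mbs = MB // SUB                      # 4 thumbnail pixels per MB side
    unchanged = 0
    for by in range(0, len(tc), mbs):
        for bx in range(0, len(tc[0]), mbs):
            same = all(tc[by + i][bx + j] == tr[by + i][bx + j]
                       for i in range(mbs) for j in range(mbs))
            unchanged += same
    return unchanged
```

Note the trade-off the disclosure accepts: a change confined to unsampled pixels goes undetected, which is why this count only guides reference selection rather than replacing the full coding comparison.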
S2, determining the reference frame with the largest number of unchanged macroblocks as the potential optimal reference frame;
S3, judging whether the number of unchanged macroblocks obtained by comparing the potential optimal reference frame with the current frame is greater than or equal to a preset threshold; if so, determining the potential optimal reference frame as the final optimal reference frame.
Specifically, the number of unchanged macroblocks of the potential optimal reference frame relative to the current frame is compared with the set threshold. If it does not reach the threshold, no suitable reference frame exists in the reference frame queue, I-frame coding must be performed on the current frame, and the flag find_best_ref_flag is recorded as false; otherwise a suitable reference frame has been found, find_best_ref_flag is recorded as true, and in the subsequent steps the current frame is encoded using the finally determined optimal reference frame as its reference frame.
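Steps S2 and S3 reduce to a small selection routine; the flag name mirrors the disclosure, but the exact signature is an assumption.

```python
# Sketch of steps S2-S3: pick the reference frame with the most unchanged
# macroblocks, then accept it only if that count reaches the threshold;
# otherwise the current frame must be I-coded.

def pick_best_reference(counts, threshold):
    """counts: unchanged-macroblock count per reference frame (S1 result).
    Returns (index_of_best_reference_or_None, find_best_ref_flag)."""
    best = max(range(len(counts)), key=counts.__getitem__)   # S2
    if counts[best] >= threshold:                            # S3
        return best, True
    return None, False          # no suitable reference: force an I frame
```

The returned flag corresponds to find_best_ref_flag, which the queue-update step consumes later.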
The macro block identification of the current frame in fig. 10 is explained as follows:
In this process, macroblocks are generally classified into unchanged macroblocks, copy-left macroblocks, copy-top macroblocks, text macroblocks, picture macroblocks and the like. An unchanged macroblock is identical to the macroblock at the corresponding position in the reference frame; a copy-left macroblock is identical to the macroblock to its left; a copy-top macroblock is identical to the macroblock above it.
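A minimal sketch of this classification order, assuming simple equality tests for the unchanged/copy checks and a caller-supplied text-vs-picture heuristic; the function and category names are illustrative.

```python
# Sketch of the macroblock classification.  The equality tests and the
# text/picture heuristic are placeholders for the real comparisons.

def classify_macroblock(mb, ref_mb, left_mb, top_mb, looks_like_text):
    """mb and its neighbours are pixel blocks (any comparable value);
    absent neighbours are passed as None."""
    if ref_mb is not None and mb == ref_mb:
        return "unchanged"          # same as reference-frame position
    if left_mb is not None and mb == left_mb:
        return "copy_left"          # same as the macroblock to the left
    if top_mb is not None and mb == top_mb:
        return "copy_top"           # same as the macroblock above
    return "text" if looks_like_text(mb) else "picture"
```

The category then selects the encoder, e.g. a text encoder for "text" and a JPEG encoder for "picture".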
The update reference frame queue in FIG. 10 is illustrated as follows:
Whenever the current frame is determined to be an I frame, it is unconditionally added to the reference frame queue with the highest priority (i.e. placed at the tail of the queue); the queue is then updated according to the scene_cut_flag obtained in step 102. If scene_cut_flag is false, only the reference frame corresponding to the current frame needs to be replaced by the current frame; if scene_cut_flag is true and find_best_ref_flag is true, the priority of that reference frame in the queue is increased. If the queue is full, the reference frame at the head of the queue is removed, the remaining reference frames are shifted forward in order, and the new reference frame is placed at the tail.
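The queue-update policy above can be sketched as follows, with the tail of the queue holding the highest priority; the exact promotion semantics and the parameter names are an interpretation of the disclosure, not a definitive implementation.

```python
from collections import deque

# Sketch of the reference-frame-queue update policy.  Tail = highest
# priority; the head is evicted when the queue is full.

MAX_REFS = 8   # the disclosure takes a queue of 8 frames as its example

def update_reference_queue(queue, frame, is_i_frame, scene_cut_flag,
                           find_best_ref_flag, ref_index=None):
    """queue: deque of reference frames; ref_index: position of the
    reference frame matched to the current frame, when one exists."""
    if is_i_frame:
        if len(queue) == MAX_REFS:
            queue.popleft()          # queue full: drop the head
        queue.append(frame)          # new reference at the tail
    elif not scene_cut_flag:
        queue[ref_index] = frame     # refresh the matching reference in place
    elif find_best_ref_flag:
        ref = queue[ref_index]       # raise the matched reference's priority
        del queue[ref_index]
        queue.append(ref)
    return queue
```

A `deque` is used because the full-queue case is exactly a pop at the head plus an append at the tail.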
Macroblocks of different categories are encoded with different encoders, for example a text encoder for text macroblocks and a JPEG encoder for picture macroblocks.
The code stream aggregation is illustrated in fig. 10 as follows:
The coding results of the different macroblocks are packed into a code stream; information identifying the reference frame used by the current frame in the reference frame queue is added to the code stream, together with a flag indicating whether the current frame is to be used to update the reference frame queue. When the decoding end receives this flag, it updates its reference frame queue with the current frame, i.e. adds the current frame to the queue, so that the reference frame queue at the decoding end is dynamically adjusted in synchronization with the encoder.
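A minimal sketch of the aggregation step, assuming a plain container for the stream; the field names (reference index, update flag, payload) are illustrative assumptions rather than the disclosure's actual syntax.

```python
# Sketch of code stream aggregation: concatenate the per-macroblock
# coded bytes and tag the stream with the reference index and the
# queue-update flag the decoder needs to stay in sync.

def aggregate_bitstream(coded_macroblocks, ref_index, update_ref_queue):
    """coded_macroblocks: list of bytes objects, one per macroblock."""
    return {
        "ref_index": ref_index,                # which queued frame was used
        "update_ref_queue": update_ref_queue,  # decoder adds this frame too
        "payload": b"".join(coded_macroblocks),
    }
```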
The embodiment of the present disclosure provides a data processing apparatus, such as the data processing apparatus 130 shown in fig. 13, including an obtaining module 1301 and a determining module 1302;
the obtaining module 1301 is configured to obtain an operation message; the operation message is used for indicating the operation on the current page;
the determining module 1302 is configured to determine whether a scene of a current page is switched according to the operation message and a preset rule.
The data processing apparatus 140 shown in fig. 14 includes an obtaining module 1401, a determining module 1402, and a detecting module 1403, where the detecting module 1403 is configured to detect the display state of the current page before determining, according to the operation message, whether scene switching occurs on the current page; the page display state comprises at least one of: a multi-tab display state, a multi-window display state, a background desktop display state, or a tab and window hybrid display state.
The embodiment of the present disclosure provides a data processing apparatus, such as the data processing apparatus 150 shown in fig. 15, including an obtaining module 1501 and a determining module 1502; the determination module 1502 includes:
a first judging sub-module 15021, configured to judge whether the operation indicated by the operation message on the current page is a left mouse click or a touch click on the taskbar;
a first determining sub-module 15022, configured to determine that scene switching occurs on the current page if the operation indicated by the operation message is a left mouse click or a touch click on the taskbar. In one embodiment, where the display state of the current page is the background desktop display state, the first determining sub-module 15022 is further configured to determine that no scene switching currently occurs if the click position is not in the taskbar.
The present disclosure provides a data processing apparatus, where a display state of a current page is a multi-tag display state, and the data processing apparatus 160 shown in fig. 16 includes an obtaining module 1601 and a determining module 1602; the determining module 1602 includes:
a second judging sub-module 16021, configured to judge, if the operation indicated by the operation message on the current page is not a left mouse click or a touch click on the taskbar, whether the operation is a left mouse click or a touch click on the tab bar;
a second determining sub-module 16022, configured to determine that scene switching occurs on the current page if the operation indicated by the operation message is a left mouse click or a touch click on the tab bar.
The present disclosure provides a data processing apparatus, where a display state of a current page is a multi-window display state, and a data processing apparatus 170 shown in fig. 17 includes an obtaining module 1701 and a determining module 1702; the determining module 1702 includes:
a third judging sub-module 17021, configured to judge, if the operation indicated by the operation message on the current page is not a left mouse click or a touch click on the taskbar, whether the operation is a left mouse click or a touch click on a window display area; and a third determining sub-module 17022, configured to determine that scene switching occurs on the current page if the operation indicated by the operation message is a left mouse click or a touch click on a window display area.
The embodiment of the present disclosure provides a data processing apparatus, where a display state of a current page is a tag and window mixed display state, and a data processing apparatus 180 shown in fig. 18 includes an obtaining module 1801 and a determining module 1802; the determination module 1802 includes:
a fourth judging sub-module 18021, configured to judge, if the operation indicated by the operation message on the current page is not a left mouse click or a touch click on the taskbar, whether the operation is a left mouse click or a touch click on a window display area or on a tab position in the tab bar;
a fourth determining sub-module 18022, configured to determine that scene switching occurs on the current page if the operation indicated by the operation message is a left mouse click or a touch click on a window display area or on a tab position in the tab bar.
The embodiment of the present disclosure provides a data processing apparatus, for example, a data processing apparatus 190 shown in fig. 19 includes an obtaining module 1901, a determining module 1902, and a coding module 1903, where the coding module 1903 is configured to select, according to a preset rule, a reference frame meeting a preset condition to perform inter-frame coding on a current page when it is determined that a scene switch occurs currently.
The embodiment of the present disclosure provides a data processing apparatus, such as a data processing apparatus 200 shown in fig. 20, including an obtaining module 2001, a determining module 2002, and an encoding module 2003, where the encoding module 2003 includes:
a queue submodule 20031 for acquiring a reference frame queue; a plurality of original reference frames are stored in the reference frame queue;
a counting sub-module 20032, configured to respectively count the number of unchanged macroblocks of the current image frame relative to each original reference frame in the reference frame queue;
optionally, the counting sub-module 20032 is specifically configured to: extract Y components of some pixels from corresponding macroblocks in the current image frame and in the original reference frames in the reference frame queue, and generate Y-component thumbnails of the current image frame and of each frame in the reference frame queue; the current image frame and the original reference frames in the reference frame queue are stored as YUV data;
and comparing the Y component thumbnail of the current frame with the Y component thumbnails of all frames in the reference frame queue macroblock by macroblock, and calculating the number of the unchanged macroblocks.
A selecting submodule 20033, configured to select a target reference frame from the reference frame queue according to the statistical result;
optionally, the original reference frame with the largest number of unchanged macroblocks indicated by the statistical result is selected as the target reference frame.
Optionally, the original reference frame with the largest number of unchanged macroblocks indicated by the statistical result is identified as a potential target reference frame; and judging whether the number of the invariant macro blocks is greater than a preset threshold value, and selecting the potential target reference frame as a target reference frame when the number of the invariant macro blocks is greater than the preset threshold value.
The encoding sub-module 20034 is configured to perform inter-frame encoding on the current image frame according to the target reference frame.
Based on the data processing method described in the embodiment corresponding to fig. 1, an embodiment of the present disclosure further provides a computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be a Read Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. The storage medium stores computer instructions for executing the data processing method described in the embodiment corresponding to fig. 1, which is not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (13)

1. A method of data processing, the method comprising:
acquiring an operation message; the operation message is used for indicating the operation on the current page;
and determining whether the scene switching occurs in the current page according to the operation message and a preset rule.
2. The data processing method of claim 1, wherein prior to obtaining the operation message, the method further comprises: detecting the display state of the current page;
the page display state comprises at least one of: a multi-tab display state, a multi-window display state, a background desktop display state, or a tab and window hybrid display state.
3. The data processing method according to claim 1 or 2, wherein the determining whether scene switching occurs on the current page according to the operation message and a preset rule comprises:
judging whether the operation of the operation message on the current page is a left mouse button click taskbar or a touch click taskbar;
and if the operation of the operation message on the current page is that the task bar is clicked by the left mouse button or the task bar is clicked by touch, determining that the scene switching occurs on the current page.
4. The data processing method of claim 3, wherein the display state of the current page is a multi-tab display state, the method further comprising:
if the operation of the operation message on the current page is not the left mouse click on the task bar or the touch click on the task bar, judging whether the operation of the operation message on the current page is the left mouse click on the label bar or the touch click on the label bar;
and if the operation of the operation message on the current page is that the tab bar is clicked by the left mouse button or the tab bar is clicked by touch, determining that the scene switching occurs on the current page.
5. The data processing method of claim 3, wherein the display state of the current page is a multi-window display state, the method further comprising:
if the operation of the operation message on the current page is not a left mouse click or a touch click on the taskbar, judging whether the operation of the operation message on the current page is a left mouse click or a touch click on the window display area; and if the operation of the operation message on the current page is a left mouse click or a touch click on the window display area, determining that scene switching occurs on the current page.
6. The data processing method of claim 3, wherein the display state of the current page is a tab and window mixed display state, the method further comprising:
if the operation of the operation message on the current page is not the left mouse click on the task bar or the touch click on the task bar, judging whether the operation of the operation message on the current page is the left mouse click on the window display area or the label position in the label bar or whether the operation of the operation message on the current page is the touch click on the window display area or the label position in the label bar;
and if the operation of the operation message on the current page is that the left mouse button clicks the display area of the window or the label position in the label column, or the operation of the operation message on the current page is that the display area of the window or the label position in the label column is touched and clicked, determining that the scene switching occurs on the current page.
7. The data processing method of claim 3, wherein the display state of the current page is a background desktop display state, the method further comprising:
and if the operation of the operation message on the current page is not the left mouse click on the task bar and is not the touch click on the task bar, determining that the scene switching does not occur on the current page.
8. The data processing method according to any one of claims 4 to 7, characterized in that the method further comprises:
and when the scene switching is determined to occur currently, selecting a reference frame meeting a preset condition according to a preset rule to perform interframe coding on the current page.
9. The data processing method of claim 8, wherein the selecting the reference frame meeting the preset condition according to the preset rule to perform inter-frame coding on the current page comprises:
acquiring a reference frame queue; a plurality of original reference frames are stored in the reference frame queue;
respectively counting the number of the unchanged macro blocks of the current image frame relative to a plurality of original reference frames in a reference frame queue;
selecting a target reference frame from the reference frame queue according to the statistical result;
and performing interframe coding on the current image frame according to the target reference frame.
10. The data processing method of claim 9, wherein the selecting a target reference frame in the reference frame queue according to the statistical result comprises:
and selecting the original reference frame with the maximum number of the unchanged macro blocks indicated by the statistical result as the target reference frame.
11. The data processing method of claim 9, wherein selecting a target reference frame in the reference frame queue according to the statistical result comprises:
identifying the original reference frame with the maximum number of the unchanged macro blocks indicated by the statistical result as a potential target reference frame;
and judging whether the number of the invariant macro blocks is greater than a preset threshold value, and selecting the potential target reference frame as a target reference frame when the number of the invariant macro blocks is greater than the preset threshold value.
12. The data processing method of claim 9, wherein the separately counting the number of the current image frame relative to the plurality of original reference frame invariant macroblocks in the reference frame queue comprises:
extracting Y components of partial pixel points from corresponding macro blocks in a plurality of original reference frames in a current image frame and a reference frame queue, and generating Y component thumbnails of each frame in the current image frame and the reference frame queue; the method comprises the steps that a current image frame and a plurality of original reference frames in a reference frame queue are stored in a YUV data form;
and comparing the Y component thumbnail of the current frame with the Y component thumbnails of all frames in the reference frame queue macroblock by macroblock, and calculating the number of the unchanged macroblocks of the current frame relative to a plurality of original reference frames in the reference frame queue.
13. A data processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the operation message; the operation message is used for indicating the operation on the current page;
and the determining module is used for determining whether the scene switching occurs in the current page according to the operation message and a preset rule.
CN201911117757.8A 2019-11-15 2019-11-15 Data processing method and device Pending CN110933425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911117757.8A CN110933425A (en) 2019-11-15 2019-11-15 Data processing method and device

Publications (1)

Publication Number Publication Date
CN110933425A true CN110933425A (en) 2020-03-27

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363449A (en) * 2020-09-30 2022-04-15 北京字跳网络技术有限公司 Service state switching method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090116281A (en) * 2008-05-07 2009-11-11 중앙대학교 산학협력단 Motion estimation procedure by fast multiple reference frame selection procedure
CN101616310A (en) * 2009-07-17 2009-12-30 清华大学 The target image stabilizing method of binocular vision system of variable visual angle and resolution
US20130148721A1 (en) * 2011-12-07 2013-06-13 Cisco Technology, Inc. Reference Frame Management for Screen Content Video Coding Using Hash or Checksum Functions
US20140003523A1 (en) * 2012-06-30 2014-01-02 Divx, Llc Systems and methods for encoding video using higher rate video sequences
CN103618911A (en) * 2013-10-12 2014-03-05 北京视博云科技有限公司 Video streaming providing method and device based on video attribute information
CN103636212A (en) * 2011-07-01 2014-03-12 苹果公司 Frame encoding selection based on frame similarities and visual quality and interests
CN104469336A (en) * 2013-09-25 2015-03-25 中国科学院深圳先进技术研究院 Coding method for multi-view depth video signals
CN109710343A (en) * 2017-10-25 2019-05-03 北京众纳鑫海网络技术有限公司 Windows switching method, device, equipment and the storage medium of computer desktop

Similar Documents

Publication Publication Date Title
CN112789650B (en) Detecting translucent image watermarks
EP3488385A1 (en) Method for converting landscape video to portrait mobile layout
CN110692251B (en) Method and system for combining digital video content
WO2021088422A1 (en) Application message notification method and device
CN111383201A (en) Scene-based image processing method and device, intelligent terminal and storage medium
JP5445467B2 (en) Credit information section detection method, credit information section detection device, and credit information section detection program
US20110221927A1 (en) Image processing apparatus, image processing method and program
KR20110021195A (en) Method and apparatus for detecting an important information from a moving picture
US20210304796A1 (en) Data processing method and system, storage medium, and computing device
WO2018093372A1 (en) Media rendering with orientation metadata
CN111783665A (en) Action recognition method and device, storage medium and electronic equipment
US20210127071A1 (en) Method, system and computer program product for object-initiated redaction of surveillance video
CN110933425A (en) Data processing method and device
CN111083481A (en) Image coding method and device
CN110780780B (en) Image processing method and device
CN107872730A Method and device for acquiring inserted content in a video
US8805102B2 (en) Application based adaptive encoding
CN112954344A (en) Encoding and decoding method, device and system
US20110007972A1 (en) Image processing device, image processing method and computer-readable medium
Lee et al. Beginning frame and edge based name text localization in news interview videos
JP4930364B2 (en) Video character detection method, apparatus, and program
CN117950769A (en) Method for generating interactive image of remote desktop, electronic device and storage medium
WO2006129261A1 (en) Method and device for detecting text
CN115861914A Method for assisting a user in searching for a specific target
CN117745589A (en) Watermark removing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327