CN111476273B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111476273B
Authority
CN
China
Prior art keywords
image frame
image
processed
frames
sensitive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010168017.3A
Other languages
Chinese (zh)
Other versions
CN111476273A (en)
Inventor
李超
范志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd filed Critical Xian Wanxiang Electronics Technology Co Ltd
Priority to CN202010168017.3A priority Critical patent/CN111476273B/en
Publication of CN111476273A publication Critical patent/CN111476273A/en
Application granted granted Critical
Publication of CN111476273B publication Critical patent/CN111476273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image processing method and device, relating to the technical field of computer images. The method comprises the following steps: acquiring a rendering instruction, wherein the rendering instruction is generated according to an operation message of the terminal device; rendering according to the rendering instruction to generate an image frame to be processed; identifying the type of the image frame to be processed, wherein the type comprises a changed image frame and an unchanged image frame; filtering the image frame to be processed based on its type to obtain a target image frame; and sending the target image frame to the terminal device. The method and device prevent the user from viewing sensitive words or sensitive images on the terminal device.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of computer image technology, and in particular, to an image processing method and apparatus.
Background
With the rapid development of cloud virtualization technology, enterprise demand for virtual desktop cloud systems continues to grow. A virtual desktop cloud system comprises a cloud server and a plurality of user terminals (R terminals), and transmission between the cloud server and the R terminals is picture-based. The operating system and application programs are installed on the cloud server, which completes almost all processing tasks, while each R terminal is essentially a display with an encoding function. The cloud server creates a plurality of virtual machines (VMs); each VM corresponds to one R terminal, and a user operates the VM through the R terminal. Because a user terminal in the virtual desktop cloud system cannot obtain data such as source code and has no storage function, important data cannot be lost even if the terminal is attacked or stolen, so security is high.
In scenarios where the virtual desktop cloud system is applied to enterprise offices, how to manage and restrict the office behavior of staff remains an unsolved problem.
Disclosure of Invention
The embodiments of the disclosure provide an image processing method and device that prevent a user from viewing sensitive words or sensitive images on a terminal device. The technical scheme is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, applied to an image processing apparatus, the method including:
acquiring a rendering instruction, wherein the rendering instruction is generated according to an operation message of the terminal device;
rendering according to the rendering instruction to generate an image frame to be processed;
filtering the image frame to be processed based on the type of the image frame to be processed to obtain a target image frame;
and sending the target image frame to the terminal device.
In one embodiment, filtering the image frame to be processed based on the type of the image frame to be processed to obtain the target image frame includes:
identifying the type of the image frame to be processed, wherein the type comprises a changed image frame and an unchanged image frame;
if the image frame is a changed image frame, taking the changed image frame as a sample image frame, and judging whether the sample image frame is a sensitive-class image according to a preset image recognition algorithm;
and if it is a sensitive-class image, replacing the changed image frame with a preset image frame.
In one embodiment, the method further comprises:
if the image frame is an unchanged image frame, recording the unchanged image frame, and determining the type of the next image frame until a changed image frame is detected;
selecting a sample image frame from every M frames of unchanged image frames, and judging whether the sample image frame is a sensitive-class image according to a preset image recognition algorithm;
and if it is a sensitive-class image, replacing the M frames of unchanged image frames with preset image frames.
In one embodiment, the image frame to be processed includes at least one macroblock, and identifying the type of the image frame to be processed includes:
judging whether the pixel points of a macro block in the image frame to be processed are identical to those of the macro block at the corresponding position in the previous image frame;
if they are identical, identifying the macro block as an unchanged macro block, and if they are not identical, identifying it as a changed macro block;
counting the number of unchanged macro blocks; if the number of unchanged macro blocks is not less than a preset threshold, determining the image frame to be processed to be an unchanged image frame; and if the number of unchanged macro blocks is less than the preset threshold, determining the image frame to be processed to be a changed image frame.
In one embodiment, before identifying the type of image frame to be processed, the method further comprises:
determining that the image frame to be processed contains video macro blocks.
In one embodiment, judging whether the sample image frame is a sensitive-class image according to the preset image recognition algorithm includes:
extracting keywords/feature values from the sample image frame, and comparing the extracted keywords/feature values with preset sensitive keywords/sensitive feature values;
and if the keywords/feature values of the sample image frame contain a preset sensitive keyword/sensitive feature value, determining the sample image frame to be a sensitive-class image.
In one embodiment, the rendering instruction is generated as follows: the server receives at least one operation message sent by the terminal device, allocates the virtual machine corresponding to the terminal device according to account information, sequentially parses the at least one operation message through the virtual machine, and generates the rendering instruction according to the parsing result, wherein the at least one operation message includes the account information.
In one embodiment, the method further comprises:
and counting, according to the account information, the sensitive-class images corresponding to each account.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
an acquisition module, configured to acquire a rendering instruction, wherein the rendering instruction is generated according to an operation message of a terminal device;
a rendering module, configured to perform rendering according to the rendering instruction and generate an image frame to be processed;
an identification module, configured to identify the type of the image frame to be processed, wherein the type comprises a changed image frame and an unchanged image frame;
a processing module, configured to filter the image frame to be processed based on its type to obtain a target image frame;
and a sending module, configured to send the target image frame to the terminal device.
In one embodiment, the processing module includes:
a judging sub-module, configured to, if the image frame is a changed image frame, take the changed image frame as a sample image frame and judge whether the sample image frame is a sensitive-class image according to a preset image recognition algorithm;
and a first replacing sub-module, configured to replace the changed image frame with a preset image frame if it is a sensitive-class image.
In one embodiment, the processing module includes:
a recording sub-module, configured to, if the image frame is an unchanged image frame, record the unchanged image frame and determine the type of the next image frame until a changed image frame is detected;
a selecting sub-module, configured to select a sample image frame from every M frames of unchanged image frames, and judge whether the sample image frame is a sensitive-class image according to a preset image recognition algorithm;
and a second replacing sub-module, configured to replace the M frames of unchanged image frames with preset image frames if the sample image frame is a sensitive-class image.
In one embodiment, the image frame to be processed comprises at least one macro block, and the identification module comprises:
a judging sub-module, configured to judge whether the pixel points of a macro block in the image frame to be processed are identical to those of the macro block at the corresponding position in the previous image frame;
an identification sub-module, configured to identify the macro block as an unchanged macro block if they are identical, and as a changed macro block if they are not identical;
a statistics sub-module, configured to count the number of unchanged macro blocks, determine the image frame to be processed to be an unchanged image frame if the number of unchanged macro blocks is not less than a preset threshold, and determine it to be a changed image frame if the number is less than the preset threshold.
In one embodiment, the apparatus further comprises:
and a determining module, configured to determine, before the type of the image frame to be processed is identified, that the image frame to be processed contains video macro blocks.
In one embodiment, the judging submodule includes:
an extraction subunit, configured to extract keywords/feature values from the sample image frame and compare the extracted keywords/feature values with preset sensitive keywords/sensitive feature values;
and a determining subunit, configured to determine the sample image frame to be a sensitive-class image if its keywords/feature values contain a preset sensitive keyword/sensitive feature value.
In one embodiment, the rendering instruction is generated as follows: the server receives at least one operation message sent by the terminal device, allocates the virtual machine corresponding to the terminal device according to account information, sequentially parses the at least one operation message through the virtual machine, and generates the rendering instruction according to the parsing result, wherein the at least one operation message includes the account information.
In one embodiment, the apparatus further comprises:
and a statistics module, configured to count, according to the account information, the sensitive-class images corresponding to each account.
With the method of the disclosure, rendered image frames can be identified and classified, sensitive images that do not meet enterprise requirements can be replaced, and the replaced image frames are then sent to the terminal device. Staff therefore cannot view sensitive images on the terminal device, which maintains a good office atmosphere and improves working efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of the filtering process for a changed image frame provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of the filtering process for unchanged image frames provided by an embodiment of the present disclosure;
Fig. 4 is a block diagram of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 5 is a block diagram of a processing module of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 6 is a block diagram of a processing module of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 7 is a block diagram of an identification module of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 8 is a block diagram of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 9 is a block diagram of a judging sub-module of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 10 is a block diagram of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 11 is a block diagram of an application of an image processing system provided by an embodiment of the present disclosure;
Fig. 12 is a flowchart of an image processing method based on the system of Fig. 11 provided by an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Some portions of the description which follows are explicitly or implicitly related to algorithms and functional or symbolic representations of operations on data within a computer memory. These algorithmic descriptions and functional or symbolic representations are the means used by those skilled in the image processing arts to more effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities such as electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
Unless specifically stated otherwise, as apparent from the following, it is appreciated that throughout the present specification discussions utilizing terms such as "selecting," "rendering," "displaying," "transmitting," "obtaining," "generating," or the like, refer to the action and processes of a computer system, or similar electronic device, that manipulates and transforms data represented as physical quantities within the computer system into other data similarly represented as physical quantities within the computer system or other information storage, transmission or display devices.
The specification also discloses apparatus for performing the method operations. Such a device may be specially constructed for the required purposes, or may comprise a general purpose computer or other device selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose machines may be used with programs in accordance with the teachings herein. Alternatively, more specific apparatus configurations for performing the required method steps are applicable. The structure of a conventional general-purpose computer will be described in the following description.
Furthermore, the present specification also implicitly discloses a computer program, as the steps of the methods described herein may be implemented by computer code as will be apparent to those skilled in the art. The computer program is not intended to be limited to any particular programming language and its execution. It will be appreciated that a variety of programming languages and codes thereof may be used to implement the teachings of the disclosure as contained herein. Furthermore, the computer program is not intended to be limited to any particular control flow. There are many other kinds of computer programs that can use different control flows without departing from the spirit or scope of the present disclosure.
Moreover, one or more steps of a computer program may be performed in parallel rather than sequentially. Such a computer program may be stored on any computer readable medium. The computer readable medium may include a storage device such as a magnetic or optical disk, memory chip, or other storage device suitable for interfacing with a general purpose computer, etc. The computer readable medium may also include a hard-wired medium such as in an internet system, or a wireless medium. When the computer program is loaded and executed on such a general purpose computer, the computer program effectively creates an apparatus that implements the steps of the preferred method.
The present disclosure may also be implemented as a hardware module. More specifically, a module is a functional hardware unit in a hardware sense designed for use with other components or modules. For example, a module may be implemented using discrete electronic components, or it may form part of an overall electronic circuit such as an Application Specific Integrated Circuit (ASIC). Many other possibilities exist. Those skilled in the art will appreciate that the system may also be implemented as a combination of hardware and software modules.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure. The method is applied to an image processing apparatus, and as shown in Fig. 1 includes the following steps:
Step 101, acquiring a rendering instruction, wherein the rendering instruction is generated according to an operation message of the terminal device;
Step 102, rendering according to the rendering instruction to generate an image frame to be processed;
Step 103, identifying the type of the image frame to be processed, wherein the type comprises a changed image frame and an unchanged image frame;
Specifically, the image frame to be processed includes at least one macro block; for example, the frame is divided into a plurality of macro blocks, each containing M x N pixels. Identifying the type of the image frame to be processed includes:
judging whether the pixel points of a macro block in the image frame to be processed are identical to those of the macro block at the corresponding position in the previous image frame;
if they are identical, identifying the macro block as an unchanged macro block, and if they are not identical, identifying it as a changed macro block.
For example, judge whether each pixel point of macro block X in the image frame to be processed is identical to the corresponding pixel point of macro block Y in the previous image frame; that is, the three YUV values of each pixel point in macro block X and macro block Y are compared, and two pixel points are considered identical only if all three YUV values are equal, where the position of macro block Y in the previous image frame is the same as the position of macro block X in the image frame to be processed. If the pixel points of macro block X are identical to those of macro block Y, macro block X can be determined to be an unchanged macro block; if they are not identical, macro block X can be determined to be a changed macro block.
Count the number of unchanged macro blocks; if the number of unchanged macro blocks is not less than a preset threshold, determine the image frame to be processed to be an unchanged image frame; and if the number of unchanged macro blocks is less than the preset threshold, determine the image frame to be processed to be a changed image frame.
Specifically, the number of unchanged macro blocks in the image frame to be processed is counted. If the number of unchanged macro blocks exceeds the threshold, the difference between the current image frame and the previous image frame is small, and the current image frame can be considered to have changed little relative to the previous frame. If the number of unchanged macro blocks does not exceed the threshold, the image frame to be processed can be considered to have changed significantly relative to the previous frame, and the identifier X may be used to mark the current image frame.
For example, if the total number of macro blocks in the image frame to be processed is 100, the threshold may be 80 blocks. The threshold may also be expressed as a proportion of unchanged macro blocks relative to the total number of macro blocks in the current image frame, e.g., 80%.
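The comparison and counting just described can be sketched as follows. This is a minimal illustration in Python, not the patented implementation: the representation of pixels as (Y, U, V) tuples, the flat list of macro blocks, and the 80% threshold are assumptions taken from the example values in the text.

```python
def macroblocks_equal(block_a, block_b):
    """Two macro blocks match only if every pixel's three YUV values are equal."""
    return all(pa == pb for pa, pb in zip(block_a, block_b))

def classify_frame(curr_blocks, prev_blocks, threshold_ratio=0.8):
    """Classify a frame as 'unchanged' or 'changed' by counting unchanged
    macro blocks at corresponding positions in the current and previous frame."""
    unchanged = sum(
        1 for x, y in zip(curr_blocks, prev_blocks) if macroblocks_equal(x, y)
    )
    # Many unchanged macro blocks: the frame barely differs from its predecessor.
    if unchanged >= threshold_ratio * len(curr_blocks):
        return "unchanged"
    return "changed"  # would be marked with identifier X in the text

# Tiny demo: four macro blocks of one pixel each; one pixel differs.
prev = [[(16, 128, 128)]] * 4
curr = [[(16, 128, 128)], [(16, 128, 128)], [(16, 128, 128)], [(200, 90, 90)]]
print(classify_frame(curr, prev))  # 3 of 4 unchanged, below 80%: "changed"
print(classify_frame(prev, prev))  # all unchanged: "unchanged"
```

In the demo, three of four macro blocks are unchanged, which is below the assumed 80% proportion, so the frame is classified as changed.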
Step 104, filtering the image frame to be processed based on its type to obtain a target image frame.
If the image frame to be processed is a changed image frame, as shown in Fig. 2, the changed image frame is taken as a sample image frame and the following steps are performed:
Step 1041, judging whether the sample image frame is a sensitive-class image according to a preset image recognition algorithm;
Step 1042, if the sample image frame is a sensitive-class image, replacing the changed image frame with a preset image frame.
If the image frame to be processed is an unchanged image frame, as shown in Fig. 3, the following steps are performed:
Step 104a, recording the unchanged image frame, and determining the type of the next image frame until a changed image frame is detected;
Step 104b, selecting a sample image frame from every M frames of unchanged image frames, and judging whether the sample image frame is a sensitive-class image according to a preset image recognition algorithm.
Here M is a positive integer; for example, any one frame out of every 30 unchanged frames may be selected as the sample image frame.
Step 104c, if the sample image frame is a sensitive-class image, replacing the M frames of unchanged image frames with preset image frames.
The image frames belonging to the sensitive class are replaced, and the replaced image frames are marked with the identifier Z. In particular, a frame may be replaced with a solid-color picture, or with a picture bearing the words "no display".
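The sampling-and-replacement policy of steps 104a to 104c might look like the following sketch. Here `is_sensitive` stands in for the preset image recognition algorithm, and the group size M, the frame values, and the solid-color replacement frame are illustrative assumptions, not details from the patent.

```python
REPLACEMENT_FRAME = "solid_color_frame"  # e.g. a blank picture or a "no display" notice

def filter_unchanged_run(frames, is_sensitive, m=30):
    """Sample one frame out of every m unchanged frames; if the sample is a
    sensitive-class image, replace the whole group of m frames (identifier Z)."""
    out = list(frames)
    for start in range(0, len(out), m):
        group = out[start:start + m]
        sample = group[0]  # any frame of the group may serve as the sample
        if is_sensitive(sample):
            out[start:start + m] = [REPLACEMENT_FRAME] * len(group)
    return out

# Demo: 60 identical frames; the second group of 30 is deemed sensitive.
frames = ["ok_frame"] * 30 + ["bad_frame"] * 30
result = filter_unchanged_run(frames, is_sensitive=lambda f: f == "bad_frame")
print(result[0], result[30])  # ok_frame solid_color_frame
```

Sampling one frame per group is what lets a run of identical frames be checked once instead of M times, which is the efficiency argument behind this branch of the method.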
Step 105, sending the target image frame to the terminal device.
In one embodiment, before identifying the type of image frame to be processed, the method further comprises:
determining that the image frame to be processed contains video macro blocks.
Specifically, it is judged whether the macro blocks of the image frame to be processed include a video macro block; if so, the image frame to be processed is determined to be a video image frame, and the identifier Y may be used to mark the current image frame.
This step enables the embodiments of the present disclosure to improve processing efficiency when processing video images.
In the above embodiment, judging whether the sample image frame is a sensitive-class image according to the preset image recognition algorithm includes:
extracting keywords/feature values from the sample image frame, and comparing the extracted keywords/feature values with preset sensitive keywords/sensitive feature values;
and if the keywords/feature values of the sample image frame contain a preset sensitive keyword/sensitive feature value, determining the sample image frame to be a sensitive-class image.
In addition, mature intelligent algorithms, such as neural networks, can be used to automatically identify whether an image frame includes sensitive content.
It should be noted that a sensitive feature value may be the feature value of a sensitive marker in a sensitive image. A sensitive marker may be sensitive text (such as books, articles, or slogans), a sensitive scene (such as a building), a sensitive person's image, and so on. If an intelligent algorithm is used, the neural network must be trained with images containing the sensitive markers so that it can identify sensitive images.
Of course, a sensitive marker can also be any other displayed picture that does not meet the company's requirements or is irrelevant to the company's work. For example, for national defense and military units, sensitive markers may include reactionary articles, reactionary pictures, portraits of reactionary leaders, etc.; for educational institutions, sensitive markers may be violent or pornographic videos, etc.
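The keyword/feature-value comparison can be sketched as below. OCR and real feature extraction are out of scope here, so `extract_keywords` is a stand-in, and the sensitive keyword list is a hypothetical example rather than anything taken from the patent.

```python
SENSITIVE_KEYWORDS = {"forbidden_slogan", "leaked_doc"}  # hypothetical examples

def extract_keywords(frame_text):
    """Stand-in for OCR/feature extraction applied to a sample frame."""
    return set(frame_text.lower().split())

def is_sensitive_frame(frame_text, sensitive=SENSITIVE_KEYWORDS):
    """A frame is sensitive if any extracted keyword matches the preset list."""
    return not extract_keywords(frame_text).isdisjoint(sensitive)

print(is_sensitive_frame("quarterly report draft"))     # False
print(is_sensitive_frame("copy of leaked_doc page 3"))  # True
```

A set intersection test keeps the comparison O(1) per extracted keyword; a production system would replace `extract_keywords` with real OCR or a trained classifier as the text suggests.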
In one embodiment, the rendering instruction is generated as follows: the server receives operation messages sent by the terminal device, allocates the virtual machine (VM) corresponding to the terminal device according to account information, sequentially parses the operation messages through the virtual machine, and generates the rendering instruction according to the parsing result, wherein the operation messages include the account information.
For example, the terminal device packs and compresses the operation information and sends it to the VM over the network. The VM parses the received operation messages one by one in the order they were injected, obtaining a plurality of operation instructions; the VM then executes the operation instructions, such as opening a Word document, editing, saving, or playing a video, and generates a rendering instruction according to the execution result. The rendering instruction is used to generate the picture displayed after the operation instruction corresponding to the operation information is executed; for example, if the operation instruction is to open a document, the rendering instruction is used to generate the page displayed after the document is opened.
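The pack-compress-parse round trip between the terminal and the VM could be sketched as follows. The JSON-plus-zlib wire format and the `render:` instruction strings are assumptions made for illustration only; the patent does not specify the message encoding.

```python
import json
import zlib

def pack_operations(ops):
    """Terminal side: pack and compress operation messages before sending."""
    return zlib.compress(json.dumps(ops).encode())

def vm_handle(payload):
    """VM side: decompress, parse the messages one by one in injection order,
    and emit one rendering instruction per executed operation."""
    ops = json.loads(zlib.decompress(payload).decode())
    return [f"render:{op}" for op in ops]  # sequential, preserving order

payload = pack_operations(["open_document", "edit", "save"])
print(vm_handle(payload))  # ['render:open_document', 'render:edit', 'render:save']
```

The essential point the sketch captures is ordering: the VM processes messages in the sequence the user injected them, so the rendered pictures track the user's actions.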
In one embodiment, the method further comprises:
and counting, according to the account information, the sensitive-class images corresponding to each account.
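Per-account statistics of detected sensitive images reduce to a counter keyed by account information. A minimal sketch follows; the account identifiers are made up for the example.

```python
from collections import Counter

class SensitiveImageStats:
    """Count how many sensitive-class images were detected per account."""

    def __init__(self):
        self.counts = Counter()

    def record(self, account_id):
        """Called each time a sensitive image is detected for an account."""
        self.counts[account_id] += 1

    def report(self):
        """Return the per-account totals, e.g. for an administrator's review."""
        return dict(self.counts)

stats = SensitiveImageStats()
for account in ["emp_001", "emp_002", "emp_001"]:
    stats.record(account)
print(stats.report())  # {'emp_001': 2, 'emp_002': 1}
```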
Fig. 4 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 40 shown in Fig. 4 includes an acquisition module 401, a rendering module 402, an identification module 403, a processing module 404 and a sending module 405:
An obtaining module 401, configured to obtain a rendering instruction, where the rendering instruction is generated according to an operation message of a terminal device;
a rendering module 402, configured to perform rendering according to the rendering instruction, and generate an image frame to be processed;
An identifying module 403, configured to identify a type of the image frame to be processed, where the type of the image frame to be processed includes a changed image frame and a constant image frame;
A processing module 404, configured to perform filtering processing on the image frame to be processed based on the type of the image frame to be processed, so as to obtain a target image frame;
and the sending module 405 is configured to send the target image frame to the terminal device.
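The cooperation of modules 401-405 can be expressed as a simple pipeline. A minimal sketch, assuming each module is passed in as a plain function; none of these hooks are the patent's actual implementation.

```python
def process_frame(operation_message, acquire, render, identify, filter_frame, send):
    """Run one operation message through the acquire -> render -> identify
    -> filter -> send pipeline and return whatever send() returns."""
    instruction = acquire(operation_message)        # module 401
    frame = render(instruction)                     # module 402
    frame_type = identify(frame)                    # module 403
    target_frame = filter_frame(frame, frame_type)  # module 404
    return send(target_frame)                       # module 405
```

The point of the sketch is the ordering: filtering happens between rendering and sending, so a sensitive frame never reaches the terminal device.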
Fig. 5 is a block diagram of a processing module of an image processing apparatus according to an embodiment of the present disclosure, where the processing module 404 shown in fig. 5 includes:
a judging submodule 4041, configured to, if the image frame to be processed is a changed image frame, take the changed image frame as a sample image frame and determine whether the sample image frame is a sensitive image according to a preset image recognition algorithm;
a first replacement submodule 4042, configured to replace the changed image frame with a preset image frame if the sample image frame is a sensitive image.
Fig. 6 is a block diagram of a processing module of an image processing apparatus according to an embodiment of the present disclosure, where the processing module 404 shown in fig. 6 includes:
a recording submodule 4043, configured to, if the image frame to be processed is an unchanged image frame, record the unchanged image frame and determine the type of the next image frame until a changed image frame is detected;
a selecting submodule 4044, configured to select a sample image frame from every M frames of unchanged image frames and determine whether the sample image frame is a sensitive image according to a preset image recognition algorithm;
a second replacement submodule 4045, configured to replace the M frames of unchanged image frames with preset image frames if the sample image frame is a sensitive image.
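Sampling one frame per M unchanged frames keeps recognition cost low while still covering static content. A hedged sketch, assuming the first frame of each group is the sample (the patent does not say which frame within the group is chosen):

```python
def sample_every_m(unchanged_frames, m):
    """Return one sample frame per group of m consecutive unchanged frames.

    Assumption: the first frame of each group serves as the sample; since
    the frames are unchanged, any frame in the group would be equivalent.
    """
    return [unchanged_frames[i] for i in range(0, len(unchanged_frames), m)]
```

Because the sampled frame is pixel-identical to the rest of its group, a sensitive verdict on the sample justifies replacing all M frames at once.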
Fig. 7 is a block diagram of an identification module of an image processing apparatus according to an embodiment of the present disclosure, where the identification module 403 shown in fig. 7 includes:
a judging submodule 4031, configured to judge whether the pixel points of a macroblock in the image frame to be processed are completely the same as those of the macroblock at the corresponding position in the previous image frame;
an identification submodule 4032, configured to identify the macroblock as an unchanged macroblock if the pixel points are completely the same, and as a changed macroblock if they are not;
a statistics submodule 4033, configured to count the number of unchanged macroblocks; if the number of unchanged macroblocks is greater than a preset threshold, the image frame to be processed is determined to be an unchanged image frame, and if the number of unchanged macroblocks is not greater than the preset threshold, the image frame to be processed is determined to be a changed image frame.
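The macroblock comparison and threshold test can be sketched as follows. Frames are modeled as 2-D lists of pixel values; the block size and the default threshold of half the blocks are illustrative assumptions, not values from the patent.

```python
def classify_frame(frame, prev, block=2, threshold=None):
    """Return 'unchanged' if more than `threshold` macroblocks are
    pixel-for-pixel identical to the previous frame, else 'changed'."""
    h, w = len(frame), len(frame[0])
    positions = [(y, x) for y in range(0, h, block) for x in range(0, w, block)]
    if threshold is None:
        threshold = len(positions) // 2  # assumed default: half the blocks
    unchanged = 0
    for y, x in positions:
        # an unchanged macroblock must match the previous frame exactly
        if all(frame[y + i][x + j] == prev[y + i][x + j]
               for i in range(block) for j in range(block)):
            unchanged += 1
    return "unchanged" if unchanged > threshold else "changed"
```

A real implementation would compare pixel buffers from the renderer, but the decision rule (count exact-match macroblocks, compare against a threshold) is the same.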
Fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure, and the image processing apparatus 40 shown in fig. 8 further includes:
A determining module 406, configured to determine that the image frame to be processed includes a video macroblock before identifying the type of the image frame to be processed.
Fig. 9 is a schematic diagram of a determination submodule of an image processing apparatus according to an embodiment of the present disclosure, where the determination submodule 4031 shown in fig. 9 includes:
an extraction subunit 11, configured to extract keywords/feature values from the sample image frame and compare the extracted keywords/feature values with preset sensitive keywords/sensitive feature values;
a determining subunit 12, configured to determine that the sample image frame is a sensitive image if the keywords/feature values of the sample image frame contain a preset sensitive keyword/sensitive feature value.
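The determination step reduces to a set-membership test. A minimal sketch; the preset sensitive set below is a placeholder, not the patent's actual list, and keyword extraction itself (OCR, feature extraction, etc.) is assumed to happen upstream.

```python
PRESET_SENSITIVE = {"gambling", "violence"}  # placeholder examples

def is_sensitive(extracted, preset=PRESET_SENSITIVE):
    """A sample frame is sensitive if any extracted keyword/feature value
    appears in the preset sensitive set."""
    return not preset.isdisjoint(extracted)
```

Using `isdisjoint` makes the check short-circuit on the first match, which matters when the preset set is large.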
In the above embodiment, the rendering instruction is generated by the server: the server receives the operation message sent by the terminal device, allocates a virtual machine corresponding to the terminal device according to account information, sequentially parses the operation message through the virtual machine, and generates the rendering instruction according to the parsing result, where the operation message includes the account information.
Fig. 10 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure, the image processing apparatus 40 shown in fig. 10 includes an acquisition module 401, a rendering module 402, an identification module 403, a processing module 404, a transmission module 405 and a statistics module 407,
and a statistics module 407, configured to count, according to the account information, the sensitive images corresponding to each account.
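The per-account statistics can be kept in a simple mapping from account to intercepted-frame records. A sketch under assumed field names; the patent only requires that counts and times be attributable to each account.

```python
from collections import defaultdict

class SensitiveImageStats:
    """Record each intercepted sensitive frame against its VM account so an
    administrator can later query counts and interception times."""

    def __init__(self):
        self._records = defaultdict(list)

    def record(self, account, timestamp):
        """Log one intercepted sensitive frame for the given account."""
        self._records[account].append(timestamp)

    def summary(self):
        """Return {account: (frame_count, timestamps)} for analysis."""
        return {acc: (len(ts), list(ts)) for acc, ts in self._records.items()}
```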
Fig. 11 is a diagram of an application structure of an image processing system according to an embodiment of the present disclosure, where the server is a cloud server including a plurality of VM modules, each VM module corresponding to an account. The image processing apparatus may include a GPU POOL module (corresponding to the acquisition module and the rendering module in the above embodiments), an S2 management module (corresponding to the identification module and the sending module in the above embodiments), and a monitoring module (corresponding to the processing module in the above embodiments), where the monitoring module may also exist independently; the terminal device is an R terminal. The specific execution flow is shown in fig. 12:
In step 601, a user performs an operation on the R terminal, and the R terminal uploads the operation message to the corresponding VM on the cloud server.
The operation message includes keyboard key events, mouse or touchpad movements, click events, and the like.
In step 602, the VM parses the operation message and generates a rendering instruction according to the parsing result.
In step 603, the VM sends the rendering instruction to the GPU POOL module, and the GPU POOL module performs rendering according to the rendering instruction to obtain an image frame.
The GPU POOL module sends the image frame to the S2 module.
In step 604, the S2 module classifies each image frame in units of macroblocks.
In this step, the S2 module splits the rendered image frame into a plurality of macroblocks and determines the type of each macroblock, where the macroblock types include unchanged macroblocks and changed macroblocks.
The specific judging process is consistent with the above embodiments, and will not be described again.
In step 605, the S2 module sends all the image frames to the monitoring module, and the monitoring module filters the image frames according to their classification.
The image frames herein include identified image frames and unidentified image frames.
In addition, the number and the times of the image frames intercepted for each account are recorded per VM account, so that a network administrator can perform statistical analysis and quickly find the employees whose viewed videos or images contain more sensitive content.
In step 606, the S2 module encodes the processed image frames and sends the encoded data to the R terminal.
Specifically, since the sensitive images in the image frames received by the S2 module have been replaced, and each replaced image is marked as Z, the S2 module may re-partition the image frames marked as Z, determine the macroblock type of each newly partitioned macroblock, and use the newly determined macroblock types in place of those of the original image frame.
The S2 module then encodes each image frame according to the macroblock type of each macroblock and sends the encoded data to the R terminal.
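Type-aware encoding means unchanged macroblocks need not carry pixel data. A hedged sketch of this idea; the token format below is an assumption for illustration, not the patent's actual bitstream.

```python
def encode_frame(macroblocks):
    """Encode a frame given a list of (mb_type, pixel_bytes) macroblocks.

    Unchanged macroblocks become skip tokens (the decoder reuses the
    previous frame's block); only changed macroblocks carry pixel data.
    """
    stream = []
    for mb_type, data in macroblocks:
        if mb_type == "unchanged":
            stream.append(("SKIP", b""))          # reuse previous frame's block
        else:
            stream.append(("DATA", bytes(data)))  # transmit changed pixels
    return stream
```

This is why the re-partitioned macroblock types matter: a replaced (Z-marked) region must be re-typed as changed so its new pixels are actually transmitted to the R terminal.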
In step 607, the R terminal decodes and displays the received encoded data.
In the method, the rendered image frames are identified and classified, sensitive images that do not meet enterprise requirements are replaced, and the replaced image frames are then sent to the terminal device. As a result, employees cannot view sensitive images on the terminal device, which maintains a good atmosphere in the office and improves employees' working efficiency.
Based on the image processing method described in the above embodiment corresponding to fig. 1, an embodiment of the present disclosure further provides a computer-readable storage medium. For example, the non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. The storage medium stores computer instructions for executing the image processing method described in the embodiment corresponding to fig. 1, which is not described again herein.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (7)

1. An image processing method applied to an image processing apparatus, the method comprising:
Acquiring a rendering instruction, wherein the rendering instruction is generated according to an operation message of a terminal device;
Rendering according to the rendering instruction to generate an image frame to be processed;
Identifying the type of the image frame to be processed, wherein the type of the image frame to be processed comprises a changed image frame and an unchanged image frame;
Filtering the image frames to be processed based on the types of the image frames to be processed to obtain target image frames;
transmitting the target image frame to the terminal device;
The filtering the image frame to be processed based on the type of the image frame to be processed to obtain a target image frame includes:
If the image frame to be processed is the changed image frame, taking the changed image frame as a sample image frame, and judging whether the sample image frame is a sensitive image according to a preset image recognition algorithm; if the sample image frame is a sensitive image, replacing the changed image frame with a preset image frame;
If the image frame to be processed is the unchanged image frame, recording the unchanged image frame, and determining the type of the next image frame until a changed image frame is detected; selecting a sample image frame from every M frames of unchanged image frames, and judging whether the sample image frame is a sensitive image according to a preset image recognition algorithm; and if the sample image frame is a sensitive image, replacing the M frames of unchanged image frames with preset image frames.
2. The method of claim 1, wherein the image frame to be processed comprises at least one macroblock, and wherein identifying the type of the image frame to be processed comprises:
judging whether the pixel points of the macroblock in the image frame to be processed are completely the same as those of the macroblock at the corresponding position in the previous image frame;
If the pixel points are completely the same, identifying the macroblock as an unchanged macroblock, and if they are not, identifying the macroblock as a changed macroblock;
Counting the number of unchanged macroblocks; if the number of unchanged macroblocks is greater than a preset threshold, determining that the image frame to be processed is an unchanged image frame; and if the number of unchanged macroblocks is not greater than the preset threshold, determining that the image frame to be processed is a changed image frame.
3. The method of claim 2, wherein prior to identifying the type of image frame to be processed, the method further comprises:
determining that the image frame to be processed contains video macro blocks.
4. The method of claim 3, wherein determining whether the sample image frame is a sensitive class image according to a preset image recognition algorithm comprises:
Extracting keywords/feature values from the sample image frame, and comparing the extracted keywords/feature values with preset sensitive keywords/sensitive feature values;
And if the keywords/feature values of the sample image frame contain a preset sensitive keyword/sensitive feature value, determining that the sample image frame is a sensitive image.
5. The method according to claim 4, wherein the rendering instruction is generated by the server, which receives at least one operation message sent by the terminal device, allocates a virtual machine corresponding to the terminal device according to account information, sequentially parses the at least one operation message through the virtual machine, and generates the rendering instruction according to the parsing result, wherein the at least one operation message includes the account information.
6. The method of claim 5, wherein the method further comprises:
Counting, according to the account information, the sensitive images corresponding to each account.
7. An image processing apparatus, characterized in that the apparatus comprises:
An acquisition module, configured to acquire a rendering instruction, wherein the rendering instruction is generated according to an operation message of a terminal device;
The rendering module is used for rendering according to the rendering instruction and generating an image frame to be processed;
The processing module is used for filtering the image frames to be processed based on the types of the image frames to be processed to obtain target image frames;
A transmitting module, configured to transmit the target image frame to the terminal device;
the processing module comprises:
the identification sub-module is used for identifying the type of the image frame to be processed, wherein the type of the image frame to be processed comprises a changed image frame and an unchanged image frame;
The judging sub-module is used for, if the image frame to be processed is the changed image frame, taking the changed image frame as a sample image frame and judging whether the sample image frame is a sensitive image according to a preset image recognition algorithm;
The first replacement sub-module is used for replacing the changed image frame with a preset image frame if the judging sub-module determines that the sample image frame is a sensitive image;
The recording sub-module is used for, if the image frame to be processed is the unchanged image frame, recording the unchanged image frame and determining the type of the next image frame until a changed image frame is detected;
The selecting sub-module is used for selecting a sample image frame from every M frames of unchanged image frames and judging whether the sample image frame is a sensitive image according to a preset image recognition algorithm;
And the second replacement sub-module is used for replacing the M frames of unchanged image frames with preset image frames if the selecting sub-module determines that the sample image frame is a sensitive image.
CN202010168017.3A 2020-03-11 2020-03-11 Image processing method and device Active CN111476273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010168017.3A CN111476273B (en) 2020-03-11 2020-03-11 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010168017.3A CN111476273B (en) 2020-03-11 2020-03-11 Image processing method and device

Publications (2)

Publication Number Publication Date
CN111476273A CN111476273A (en) 2020-07-31
CN111476273B true CN111476273B (en) 2024-06-07

Family

ID=71747374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010168017.3A Active CN111476273B (en) 2020-03-11 2020-03-11 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111476273B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116173496A (en) * 2021-11-26 2023-05-30 华为技术有限公司 Image frame rendering method and related device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5754700A (en) * 1995-06-09 1998-05-19 Intel Corporation Method and apparatus for improving the quality of images for non-real time sensitive applications
US7274368B1 (en) * 2000-07-31 2007-09-25 Silicon Graphics, Inc. System method and computer program product for remote graphics processing
CN106470345A (en) * 2015-08-21 2017-03-01 阿里巴巴集团控股有限公司 Video-encryption transmission method and decryption method, apparatus and system
WO2017056230A1 (en) * 2015-09-30 2017-04-06 楽天株式会社 Information processing device, information processing method, and program for information processing device
CN108965982A (en) * 2018-08-28 2018-12-07 百度在线网络技术(北京)有限公司 Video recording method, device, electronic equipment and readable storage medium storing program for executing
CN109040824A (en) * 2018-08-28 2018-12-18 百度在线网络技术(北京)有限公司 Method for processing video frequency, device, electronic equipment and readable storage medium storing program for executing
CN110298862A (en) * 2018-03-21 2019-10-01 广东欧珀移动通信有限公司 Method for processing video frequency, device, computer readable storage medium and computer equipment
CN110312133A (en) * 2019-06-27 2019-10-08 西安万像电子科技有限公司 Image processing method and device
CN110362375A (en) * 2019-07-11 2019-10-22 广州虎牙科技有限公司 Display methods, device, equipment and the storage medium of desktop data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3340624B1 (en) * 2016-12-20 2019-07-03 Axis AB Encoding a privacy masked image
TWI651662B (en) * 2017-11-23 2019-02-21 財團法人資訊工業策進會 Image annotation method, electronic device and non-transitory computer readable storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Detecting Dominant Vanishing Points in Natural Scenes with Application to Composition-Sensitive Image Retrieval; Zihan Zhou et al.; IEEE Transactions on Multimedia; 20170512; Vol. 9, No. 12; pp. 2651-2665 *
Video Keyframe Analysis Using a Segment-Based Statistical Metric in a Visually Sensitive Parametric Space; Mona Omidyeganeh; IEEE Transactions on Image Processing; Vol. 20, No. 10; pp. 2730-2737 *
Sensitive video classification method based on human skin color recognition features; Liang Peng et al.; Journal of Computer-Aided Design & Computer Graphics; 20160531; Vol. 12, No. 13; pp. 181-200 *
Research on information hiding and tamper detection technology for surveillance video; Lin Haitao; China Master's Theses Full-text Database, Information Science and Technology; Vol. 2020, No. 02; p. I136-1688 *

Also Published As

Publication number Publication date
CN111476273A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN112633313B (en) Bad information identification method of network terminal and local area network terminal equipment
CN112215171B (en) Target detection method, device, equipment and computer readable storage medium
CN105718861A (en) Method and device for identifying video streaming data category
CN114564741A (en) Big data privacy protection method based on anonymization analysis and big data processing equipment
CN112348089A (en) Working state identification method, server, storage medium and device
CN111741329B (en) Video processing method, device, equipment and storage medium
CN111445399A (en) Image processing method and system
CN111476273B (en) Image processing method and device
CN115171199A (en) Image processing method, image processing device, computer equipment and storage medium
CN112989098B (en) Automatic retrieval method and device for image infringement entity and electronic equipment
CN115240203A (en) Service data processing method, device, equipment and storage medium
CN115115968A (en) Video quality evaluation method and device and computer readable storage medium
CN115619867B (en) Data processing method, device, equipment and storage medium
CN113507571B (en) Video anti-clipping method, device, equipment and readable storage medium
CN117597702A (en) Scaling-independent watermark extraction
CN108399411B (en) A kind of multi-cam recognition methods and device
CN113128262A (en) Target identification method and device, storage medium and electronic device
CN111339367A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN111447444A (en) Image processing method and device
CN110765919A (en) Interview image display system and method based on face detection
CN113703904B (en) Method and system for improving character definition through cloud computer and readable storage medium
CN111212196B (en) Information processing method and device, electronic equipment and storage medium
CN114005062A (en) Abnormal frame processing method, abnormal frame processing device, server and storage medium
CN113947513A (en) Video watermark processing method, system, electronic device and storage medium
CN111556317A (en) Coding method, device and coding and decoding system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant