CN112380940A - Processing method and device for high-altitude parabolic monitoring image, electronic equipment and storage medium - Google Patents

Processing method and device for high-altitude parabolic monitoring image, electronic equipment and storage medium

Info

Publication number
CN112380940A
CN112380940A (application CN202011223290.8A)
Authority
CN
China
Prior art keywords
image
segmentation
window area
model
fcn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011223290.8A
Other languages
Chinese (zh)
Other versions
CN112380940B (en)
Inventor
李�城
周晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Softcom Smart City Technology Co ltd
Original Assignee
Beijing Softcom Smart City Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Softcom Smart City Technology Co ltd filed Critical Beijing Softcom Smart City Technology Co ltd
Priority to CN202011223290.8A
Publication of CN112380940A
Application granted
Publication of CN112380940B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a processing method and apparatus for high-altitude parabolic monitoring images, an electronic device, and a storage medium. The method comprises: performing semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image; blurring the window area of the image; and displaying the blurred image. Because the window area in the monitoring image is located by semantic segmentation and then blurred, the interior of the building can no longer be seen through the window glass in the image, so the personal privacy of indoor residents is protected while high-altitude parabolic objects are still monitored.

Description

Processing method and device for high-altitude parabolic monitoring image, electronic equipment and storage medium
Technical Field
The embodiments of the invention relate to the field of communication technology, and in particular to a method and apparatus for processing high-altitude parabolic monitoring images (surveillance images used to detect objects thrown or dropped from buildings), an electronic device, and a storage medium.
Background
Because objects thrown or dropped from height endanger personal safety, high-rise buildings need to be monitored. By training a camera on the building facade, the position from which an object was thrown can be determined, and the responsible person can then be identified from that position.
However, because modern cameras offer high resolution and definition, the interior of the building is visible through the window glass in the high-altitude parabolic monitoring picture, which violates the personal privacy of residents. Existing high-altitude parabolic monitoring can therefore identify the person responsible for a thrown object, but at the cost of infringing the privacy of other residents.
Disclosure of Invention
The embodiments of the invention provide a processing method, apparatus, device, and storage medium for high-altitude parabolic monitoring images, which protect the privacy of a building's indoor users while high-altitude parabolic objects are monitored.
In a first aspect, an embodiment of the present invention provides a method for processing a high-altitude parabolic monitoring image, including: performing semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image;
blurring the window area of the image;
and displaying the blurred image.
In a second aspect, an embodiment of the present invention provides a processing apparatus for high-altitude parabolic monitoring images, including: a window area determining module, configured to perform semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image;
a window area blurring module, configured to blur the window area of the image;
and an image display module, configured to display the blurred image.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the methods of any of the embodiments of the present invention.
In a fourth aspect, the embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method of any of the embodiments of the present invention.
In the embodiments of the invention, the window area in the high-altitude parabolic monitoring image is determined through semantic segmentation and then blurred, so that the interior of the building cannot be seen through the window glass in the image; the personal privacy of indoor users is thereby protected while high-altitude parabolic monitoring is carried out.
Drawings
Fig. 1(a) is a flowchart of a processing method of a high-altitude parabolic monitoring image according to an embodiment of the present invention;
Fig. 1(b) is a schematic diagram of the parameter learning principle of the FCN model at the first iteration according to the first embodiment of the present invention;
fig. 1(c) is a schematic diagram of a parameter learning principle of an FCN model with an iteration number greater than one according to an embodiment of the present invention;
fig. 2 is a flowchart of a processing method of a high-altitude parabolic monitoring image according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a high-altitude parabolic monitoring image processing apparatus according to a third embodiment of the present invention;
fig. 4 is a block diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1(a) is a flowchart of a processing method for high-altitude parabolic monitoring images according to an embodiment of the present invention. This embodiment is applicable to situations where the privacy of a building's occupants must be protected while high-altitude parabolic objects are monitored. The method may be executed by the processing apparatus for high-altitude parabolic monitoring images provided by an embodiment of the present invention, which may be implemented in software and/or hardware. The method specifically includes the following steps:
Step 101: perform semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image.
Optionally, performing semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image may include: performing semantic segmentation on the high-altitude parabolic monitoring image in an unsupervised learning manner based on a fully convolutional network (FCN) model; and determining the window area of the image from the semantic segmentation result.
Specifically, in this embodiment, semantic segmentation of the high-altitude parabolic image is performed in an unsupervised manner based on the FCN model. Semantic segmentation understands the image at the pixel level: pixels belonging to the same class of object are assigned to the same class. For example, pixels belonging to a window form one class and pixels belonging to a building wall form another, so the window area in the high-altitude parabolic monitoring image can be determined from the segmentation result.
Optionally, performing semantic segmentation on the high-altitude parabolic monitoring image in an unsupervised learning manner based on the FCN model may include: determining the number of iterations of the FCN model; training the FCN model against a color segmentation algorithm model for that number of iterations; and, when the number of iterations is reached, obtaining the final segmented image of the FCN model and taking it as the semantic segmentation result.
Specifically, in this embodiment, unsupervised learning of the FCN model is realized by using a color segmentation algorithm model: labels are produced for the image without manual annotation, and the parameters of the FCN model are learned by comparing the output of the color segmentation algorithm model with the output of the FCN model.
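The patent does not specify which color segmentation algorithm is used. As an illustrative stand-in, a simple k-means clustering of pixels by their RGB color can produce the kind of pseudo-label map described above (the function and parameter names here are hypothetical, not from the patent):

```python
import numpy as np

def color_pseudo_labels(image, k=2, iters=10):
    """Cluster pixels by RGB color into k classes - a simple stand-in for
    the patent's (unspecified) color segmentation algorithm model.
    image: (H, W, 3) float array in [0, 1]; returns an (H, W) label map."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64)
    # Deterministic init: spread the k centers over pixels sorted by brightness.
    order = np.argsort(pixels.sum(axis=1))
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # Assign every pixel to its nearest color center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean color of its cluster.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(h, w)
```

Any label map produced this way can serve as the "output" of the color model that the FCN's output is compared against, with no manual annotation involved.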
Optionally, training the FCN model based on the color segmentation algorithm model according to the number of iterations may include: at the first iteration, inputting the image into both the FCN model and the color segmentation algorithm model, and learning the parameters of the FCN model from the output images of the two models; when the iteration count is greater than one, inputting the image into the parameter-updated FCN model to obtain a first segmented image, inputting the first segmented image into the color segmentation algorithm model to obtain a second segmented image, and learning the parameters of the FCN model again from the first and second segmented images.
Optionally, learning the parameters of the FCN model from the output images of the FCN model and the color segmentation algorithm model may include: extracting features of the image through the FCN model to obtain a feature map, and processing the feature map with a preset classification function to obtain a third segmented image; segmenting the image by color features through the color segmentation algorithm model to obtain a fourth segmented image; and computing a preset softmax cross-entropy loss between the third and fourth segmented images, and learning the parameters of the FCN model from the result.
Fig. 1(b) shows the parameter learning principle of the FCN model at the first iteration. The number of iterations of the FCN model is fixed in advance (for example, 10), and as the FCN model is trained against the color segmentation algorithm model, the source of the image fed to the color segmentation algorithm model differs between iterations. At the first iteration, the FCN model and the color segmentation algorithm model each receive the original high-altitude parabolic monitoring image to be segmented. The FCN model extracts features of the original image to obtain a feature map; the extracted features include, but are not limited to, color, shape, texture, and position. The feature map is processed with the preset classification function Argmax to obtain the third segmented image. The color segmentation algorithm model, by contrast, segments the input original image by color features only, yielding the fourth segmented image. The preset softmax cross-entropy loss is then computed between the segmentation result of the FCN model and that of the color segmentation algorithm model, and the parameters of the FCN model are learned from the result, completing the first iteration.
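The softmax cross-entropy comparison between the FCN's scores and the color model's labels can be sketched in numpy as follows (a minimal per-pixel loss; the array shapes are assumptions for illustration):

```python
import numpy as np

def softmax_cross_entropy(logits, target_labels):
    """Mean per-pixel cross-entropy between the FCN's raw class scores and
    the pseudo-labels produced by the color segmentation model.
    logits: (H, W, C) class scores; target_labels: (H, W) integer labels."""
    # Numerically stable log-softmax over the class axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    h, w = target_labels.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    # Pick the log-probability of the target class at every pixel.
    return -log_probs[rows, cols, target_labels].mean()
```

The third segmented image itself would simply be `logits.argmax(axis=-1)` (the Argmax classification step); the loss above is what drives the FCN's parameter update.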
Fig. 1(c) shows the parameter learning principle of the FCN model when the iteration count is greater than one. When the iteration count is greater than one and the maximum has not been reached, the parameter learning process is similar to the first iteration; the difference is that the input of the color segmentation algorithm model now comes from the output of the FCN model. For the second iteration, for example, the parameter-updated FCN model extracts features from the original image to obtain a new feature map, which is processed with the preset classification function Argmax to obtain the first segmented image; the first segmented image is then input into the color segmentation algorithm model to obtain the second segmented image. The preset softmax cross-entropy loss between the two segmentation results is computed and the FCN model's parameters are learned from it, exactly as in the first iteration. The second iteration is used here only as an example: every subsequent iteration follows essentially the same parameter learning process, so it is not repeated.
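A toy end-to-end sketch of this alternating scheme is given below, using a per-pixel linear softmax classifier in place of the real FCN and a brightness threshold in place of the color segmentation algorithm. Both are illustrative stand-ins (the patent's FCN is a deep fully convolutional network and its color model is unspecified); only the control flow of Figs. 1(b) and 1(c) is being demonstrated:

```python
import numpy as np

class TinyPixelFCN:
    """Per-pixel linear softmax classifier - a toy stand-in for the FCN."""
    def __init__(self, n_classes=2, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(3, n_classes))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def forward(self, image):
        feats = image * 2.0 - 1.0  # center pixels so dark regions give signal
        return feats @ self.W + self.b  # (H, W, 3) -> (H, W, C) class scores

    def update(self, image, pseudo_labels):
        """One cross-entropy gradient step toward the pseudo-labels."""
        feats = image * 2.0 - 1.0
        logits = self.forward(image)
        shifted = logits - logits.max(axis=-1, keepdims=True)
        probs = np.exp(shifted)
        probs /= probs.sum(axis=-1, keepdims=True)
        onehot = np.eye(self.W.shape[1])[pseudo_labels]
        grad = (probs - onehot) / pseudo_labels.size  # d(mean loss)/d(logits)
        self.W -= self.lr * np.einsum('hwi,hwc->ic', feats, grad)
        self.b -= self.lr * grad.sum(axis=(0, 1))

def brightness_segment(arr):
    """Stand-in 'color segmentation': split pixels at the mean brightness."""
    arr = np.asarray(arr, dtype=float)
    scalar = arr.sum(axis=-1) if arr.ndim == 3 else arr
    return (scalar > scalar.mean()).astype(int)

def train_unsupervised(image, model, n_iters=20):
    for it in range(n_iters):
        if it == 0:
            # Fig. 1(b): both models see the original image.
            pseudo = brightness_segment(image)
        else:
            # Fig. 1(c): the color model re-segments the FCN's own output.
            first_seg = model.forward(image).argmax(axis=-1)
            pseudo = brightness_segment(first_seg)
        model.update(image, pseudo)  # cross-entropy loss + parameter step
    return model.forward(image).argmax(axis=-1)  # final Argmax segmentation
```

On a synthetic image whose left half is dark and right half is bright, the loop converges to a segmentation separating the two halves without any manual labels, mirroring the patent's unsupervised training.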
Note that, in the last iteration, the final segmented image, obtained by applying the preset classification function Argmax to the final feature map output by the FCN model, is taken as the semantic segmentation result. Because the building-wall area and the window area are clearly distinguished in this result, the window area of the image can be read off directly from it.
Step 102: blur the window area of the image.
Optionally, blurring the window area of the image may include: determining position information of the window area, where the position information includes the boundary positions of the window area; obtaining a picture template matched to the window area according to the position information; and covering the window area with the picture template.
Specifically, once the window area has been accurately determined, it can be blurred. First, the position information of the window area, including its boundary positions, is determined: for example, a two-dimensional coordinate system can be set up over the image and the coordinates of each boundary of the window area determined. From these boundary coordinates, the size and shape of the window area follow, and a picture template of the same extent and shape can be retrieved from a graphics database, or created directly, so that it covers the window area. The picture template in this embodiment is blank and contains no image information. Of course, blurring can be performed in various ways, for example by directly transforming the pixel features of the window area; this embodiment is only an example and does not limit the blurring method.
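The boundary-position and picture-template steps can be sketched as follows. The template's fill value is an assumption made for illustration; the patent requires only that the template be blank and contain no image information:

```python
import numpy as np

def cover_window_region(image, window_mask, fill_value=0.5):
    """Anonymize the window area by covering its bounding box with a blank
    template, as in the patent. image: (H, W, 3); window_mask: (H, W) bool."""
    out = image.copy()
    ys, xs = np.nonzero(window_mask)
    if ys.size == 0:
        return out  # no window area found; nothing to cover
    # Boundary positions of the window area (its bounding box).
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    # Create a blank picture template matching the area's extent and cover it.
    template = np.full((bottom - top + 1, right - left + 1, image.shape[2]),
                       fill_value, dtype=image.dtype)
    out[top:bottom + 1, left:right + 1] = template
    return out
```

The mask here would come from the semantic segmentation result of step 101 (pixels labeled as window).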
Step 103: display the blurred image.
Specifically, the ultimate purpose of determining the window area and blurring it is to monitor high-altitude objects without violating residents' privacy. The electronic device therefore provides a human-computer interaction interface: after blurring is complete, a prompt is issued asking the user to confirm that processing is done, and once the user's confirmation instruction is received, the image is displayed on the interface. The user may also manipulate the interface to choose where and at what size the image is displayed, for example enlarging the blurred image twofold and displaying it in the upper right corner of the interface. This embodiment is merely an example and does not limit the specific display mode.
In the embodiments of the invention, the window area in the high-altitude parabolic monitoring image is determined through semantic segmentation and then blurred, so that the interior of the building cannot be seen through the window glass in the image; the personal privacy of indoor users is thereby protected while high-altitude parabolic monitoring is carried out.
Example two
Fig. 2 is a flowchart of a processing method for high-altitude parabolic monitoring images according to a second embodiment of the present invention. Building on the previous embodiment, after displaying the blurred image the method further includes: recognizing the blurred image, and issuing an alarm prompt when the recognition result is determined to contain a preset identifier.
As shown in fig. 2, the method of this embodiment specifically includes the following steps:
Step 201: perform semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image.
Optionally, performing semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image may include: performing semantic segmentation on the high-altitude parabolic monitoring image in an unsupervised learning manner based on a fully convolutional network (FCN) model; and determining the window area of the image from the semantic segmentation result.
Optionally, performing semantic segmentation on the high-altitude parabolic monitoring image in an unsupervised learning manner based on the FCN model may include: determining the number of iterations of the FCN model; training the FCN model against a color segmentation algorithm model for that number of iterations; and, when the number of iterations is reached, obtaining the final segmented image of the FCN model and taking it as the semantic segmentation result.
Step 202: blur the window area of the image.
Optionally, blurring the window area of the image may include: determining position information of the window area, where the position information includes the boundary positions of the window area; obtaining a picture template matched to the window area according to the position information; and covering the window area with the picture template.
Step 203: display the blurred image.
Step 204: recognize the blurred image, and issue an alarm prompt when the recognition result is determined to contain a preset identifier.
Specifically, in this embodiment, after the blurred image is displayed it is also recognized. The blurred image is matched against a preset identifier, which may be a salient indoor object such as a human face, a coffee table, or a television; the matching may compare features such as color, shape, or texture. When the matching similarity exceeds a preset threshold, the interior of the building can evidently still be seen through the window area of the blurred image. The cause may be that, although blurring was carried out effectively, the window area was not accurately determined during semantic segmentation; or that the window area was accurately determined but was not effectively blurred. In either case the processing result of the high-altitude parabolic monitoring image is poor. When recognition determines that this situation exists, an alarm prompt is issued, for example "image processing does not meet the preset requirement, please confirm whether to process again", by voice or text. The user can then operate on the human-computer interaction interface: when a confirmation instruction is received, steps 201 to 203 are executed again; if, after inspection, the user considers the processing adequate despite the alarm, the user may issue a denial instruction, and the currently displayed image is kept unchanged.
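The patent does not pin down the matching method. As one plausible stand-in for the color-feature comparison, the window region's intensity histogram can be compared against a preset identifier template by histogram intersection, with the preset threshold deciding whether to raise the alarm (function names, the histogram choice, and the threshold value are all assumptions for illustration):

```python
import numpy as np

def contains_preset_identifier(region, template, threshold=0.9, bins=8):
    """Check whether the blurred window region still matches a preset
    identifier (e.g. a face or television template) by comparing normalized
    intensity histograms; histogram intersection yields a similarity in [0, 1]."""
    h1, _ = np.histogram(region, bins=bins, range=(0.0, 1.0))
    h2, _ = np.histogram(template, bins=bins, range=(0.0, 1.0))
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    similarity = np.minimum(h1, h2).sum()  # histogram intersection
    return similarity > threshold

def check_and_alarm(region, template):
    """Return the alarm text when the identifier is still visible, else None."""
    if contains_preset_identifier(region, template):
        return "image processing does not meet the preset requirement"
    return None
```

A real system would likely match shape and texture features as well (as the text suggests), and would compare against several identifier templates, not one.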
In the embodiments of the invention, the window area in the high-altitude parabolic monitoring image is determined through semantic segmentation and then blurred, so that the interior of the building cannot be seen through the window glass in the image; the personal privacy of indoor users is thereby protected while high-altitude parabolic monitoring is carried out. Further, when recognition determines that the processing of the monitoring image is poor, an alarm prompt informs the user of the current processing quality: if the user still accepts the current result, display continues; if not, the original image is reprocessed at the user's request. The accuracy of the processing is thus guaranteed while meeting the user's actual needs.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a processing apparatus for high-altitude parabolic monitoring images according to an embodiment of the present invention, which specifically includes a window area determining module 310, a window area blurring module 320, and an image display module 330.
The window area determining module 310 is configured to perform semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image;
the window area blurring module 320 is configured to blur the window area of the image;
and the image display module 330 is configured to display the blurred image.
Optionally, the window area determining module includes: an image semantic segmentation submodule, configured to perform semantic segmentation on the high-altitude parabolic monitoring image in an unsupervised learning manner based on a fully convolutional network (FCN) model;
and a window area determining submodule, configured to determine the window area of the image from the semantic segmentation result.
Optionally, the image semantic segmentation submodule includes: an iteration count determining subunit, configured to determine the number of iterations of the FCN model;
a training subunit, configured to train the FCN model based on the color segmentation algorithm model according to the number of iterations;
and a segmented image acquisition subunit, configured to obtain the final segmented image of the FCN model when the number of iterations is reached and to take it as the semantic segmentation result.
Optionally, the training subunit is configured to: at the first iteration, input the image into both the FCN model and the color segmentation algorithm model, and learn the parameters of the FCN model from the output images of the two models;
and, when the iteration count is greater than one, input the image into the parameter-updated FCN model to obtain a first segmented image, input the first segmented image into the color segmentation algorithm model to obtain a second segmented image, and learn the parameters of the FCN model again from the first and second segmented images.
Optionally, the training subunit is further configured to: extract features of the image through the FCN model to obtain a feature map, and process the feature map with a preset classification function to obtain a third segmented image;
segment the image by color features through the color segmentation algorithm model to obtain a fourth segmented image;
and compute a preset softmax cross-entropy loss between the third and fourth segmented images, and learn the parameters of the FCN model from the result.
Optionally, the window area blurring module is configured to: determine position information of the window area, where the position information includes the boundary positions of the window area;
obtain a picture template matched to the window area according to the position information;
and cover the window area with the picture template.
Optionally, the apparatus further includes an alarm prompting module, configured to: recognize the blurred image;
and issue an alarm prompt when the recognition result contains the preset identifier.
The apparatus can execute the processing method for high-altitude parabolic monitoring images provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects. For technical details not described in this embodiment, reference may be made to the method provided in any embodiment of the present invention.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary electronic device 412 suitable for use in implementing embodiments of the present invention. The electronic device 412 shown in fig. 4 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present invention.
As shown in fig. 4, the electronic device 412 is in the form of a general purpose computing device. The components of the electronic device 412 may include, but are not limited to: one or more processors 416, a memory 428, and a bus 418 that couples the various system components (including the memory 428) to the processor 416.
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 412 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 412 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 428 is used to store instructions. Memory 428 can include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)430 and/or cache memory 432. The electronic device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428. Such program modules 442 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The electronic device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 424, etc.), with one or more devices that enable a user to interact with the electronic device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, the electronic device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 420. As shown, network adapter 420 communicates with the other modules of electronic device 412 over bus 418. It should be appreciated that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with the electronic device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 416 performs various functional applications and data processing by executing instructions stored in the memory 428, such as performing the following:
performing semantic segmentation on the high-altitude parabolic monitoring image to determine a window area of the image; blurring the window area of the image; and displaying the blurred image.
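For illustration only (this is not the patent's implementation), the segment-then-blur step can be sketched in NumPy: given a boolean mask marking the window area produced by semantic segmentation, a simple box blur is applied only inside the mask. The function name and the kernel size are assumptions of the sketch.

```python
import numpy as np

def blur_window_region(image: np.ndarray, mask: np.ndarray, k: int = 5) -> np.ndarray:
    """Box-blur only the pixels where mask is True (the segmented window area).

    image: single-channel frame, shape (H, W); mask: boolean, shape (H, W).
    """
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    blurred = np.empty(image.shape, dtype=np.float64)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            # Mean over the k x k neighbourhood centred on (i, j).
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    out = image.astype(np.float64).copy()
    out[mask] = blurred[mask]  # pixels outside the window area stay untouched
    return out.astype(image.dtype)
```

In a real system the mask would come from the FCN's per-pixel labels, and the blurred frame would then be passed on for display.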
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a processing method for high-altitude parabolic monitoring images, the method including:
performing semantic segmentation on the high-altitude parabolic monitoring image to determine a window area of the image; blurring the window area of the image; and displaying the blurred image.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the instructions are not limited to the method operations described above; they may also perform related operations in the processing method of the high-altitude parabolic monitoring image provided by any embodiment of the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly also by hardware alone, though the former is generally the preferred implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and which includes several instructions that enable an electronic device (which may be a personal computer, a server, or a network device) to execute the processing method of the high-altitude parabolic monitoring image according to the embodiments of the present invention.
It should be noted that, in the embodiment of the processing apparatus for high-altitude parabolic monitoring images, the included units and modules are divided only according to functional logic; the division is not limited thereto, as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A processing method for a high-altitude parabolic monitoring image, comprising the following steps:
performing semantic segmentation on the high-altitude parabolic monitoring image to determine a window area of the image;
blurring the window area of the image;
and displaying the blurred image.
2. The method of claim 1, wherein the semantically segmenting the high altitude parabolic surveillance image to determine a window region of the image comprises:
performing semantic segmentation on the high-altitude parabolic monitoring image in an unsupervised learning manner based on a Fully Convolutional Network (FCN) model;
and determining the window area of the image according to the semantic segmentation result.
3. The method according to claim 2, wherein the performing semantic segmentation on the high-altitude parabolic monitoring image in an unsupervised learning manner based on a Fully Convolutional Network (FCN) model comprises:
determining the number of iterations for the FCN model;
training the FCN model based on a color segmentation algorithm model according to the number of iterations;
and when the number of iterations is determined to have been reached, obtaining a final segmentation image from the FCN model and taking the final segmentation image as the semantic segmentation result.
4. The method according to claim 3, wherein the training of the FCN model based on the color segmentation algorithm model according to the number of iterations comprises:
in the first iteration, inputting the image into the FCN model and the color segmentation algorithm model respectively, and performing parameter learning on the FCN model using the output images of the FCN model and the color segmentation algorithm model;
in each subsequent iteration, inputting the image into the FCN model after parameter learning to obtain a first segmentation image, inputting the first segmentation image into the color segmentation algorithm model to obtain a second segmentation image, and performing parameter learning on the FCN model again using the first segmentation image and the second segmentation image.
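The two branches of claim 4 amount to an alternating loop: in the first pass both models segment the raw image; in later passes the color model refines the FCN's own output, and the disagreement between the two segmentations drives another parameter update. The following sketches that control flow only; `fcn_segment`, `color_segment`, and `learn_params` are invented toy stand-ins, not the patent's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def fcn_segment(pixels, weights):
    # Toy stand-in for the FCN forward pass: per-pixel argmax of linear class scores.
    scores = pixels[..., None] * weights   # (H, W, C)
    return scores.argmax(axis=-1)          # (H, W) label map

def color_segment(pixels):
    # Toy stand-in for the color segmentation model: coarse intensity quantisation.
    return (pixels // 2).astype(int)

def learn_params(weights, seg_a, seg_b):
    # Toy stand-in parameter update: adjust weights in proportion to disagreement.
    disagreement = (seg_a != seg_b).mean()
    return weights * (1.0 - 0.1 * disagreement)

image = rng.integers(0, 8, size=(16, 16))
weights = rng.normal(size=4)

for t in range(5):                          # the preset number of iterations
    seg_fcn = fcn_segment(image, weights)   # first segmentation image
    if t == 0:
        seg_color = color_segment(image)    # first iteration: raw image (branch 1)
    else:
        seg_color = color_segment(seg_fcn)  # later: refine the FCN output (branch 2)
    weights = learn_params(weights, seg_fcn, seg_color)

final_segmentation = fcn_segment(image, weights)  # claim 3's final segmentation
```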
5. The method according to claim 4, wherein the performing parameter learning on the FCN model using the output images of the FCN model and the color segmentation algorithm model comprises:
extracting features of the image through the FCN model to obtain a feature map, and processing the feature map using a preset classification function to obtain a third segmentation image;
segmenting the image according to color features through the color segmentation algorithm model to obtain a fourth segmentation image;
and performing loss calculation on the third segmentation image and the fourth segmentation image using a preset cross-entropy loss function, and learning the parameters of the FCN model according to the calculation result.
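The loss step of claim 5 is a standard per-pixel cross-entropy between the FCN's class scores and the labels produced by the color segmentation model. A minimal NumPy version follows; the function name and the (H, W, C) logits / (H, W) labels shapes are assumptions of the sketch:

```python
import numpy as np

def pixelwise_cross_entropy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Mean cross-entropy between per-pixel logits (H, W, C) and target labels (H, W)."""
    # Numerically stable softmax over the class dimension.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    h, w = labels.shape
    # Probability assigned to each pixel's target class.
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(picked + 1e-12).mean())
```

With uniform logits over C classes the loss equals log C, and it falls toward zero as the FCN's prediction agrees confidently with the color model's labels, which is the signal used to relearn the FCN parameters.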
6. The method of claim 1, wherein the blurring of the window area of the image comprises:
determining position information of the window area, wherein the position information comprises a boundary position of the window area;
obtaining a picture template matched with the window area according to the position information;
and covering the window area with the picture template.
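The covering step of claim 6 can be illustrated as a simple overlay: the boundary position fixes a bounding box, and a picture template of matching size replaces the pixels inside it. A sketch, with the function and argument names assumed for illustration:

```python
import numpy as np

def cover_window(image: np.ndarray, top: int, left: int, template: np.ndarray) -> np.ndarray:
    """Overlay a picture template onto the window area whose bounding box starts
    at (top, left); the template is assumed pre-sized to match that box."""
    h, w = template.shape[:2]
    out = image.copy()                          # keep the original frame intact
    out[top:top + h, left:left + w] = template  # replace only the window pixels
    return out
```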
7. The method of claim 1, further comprising, after the displaying of the blurred image:
identifying the blurred image;
and giving an alarm prompt when it is determined that the recognition result contains a preset identifier.
8. An apparatus for processing a high-altitude parabolic monitoring image, the apparatus comprising:
the window area determining module is used for performing semantic segmentation on the high-altitude parabolic monitoring image to determine the window area of the image;
the window area blurring module is used for blurring the window area of the image;
and the image display module is used for displaying the blurred image.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202011223290.8A 2020-11-05 2020-11-05 Processing method and device of high-altitude parabolic monitoring image, electronic equipment and storage medium Active CN112380940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011223290.8A CN112380940B (en) 2020-11-05 2020-11-05 Processing method and device of high-altitude parabolic monitoring image, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112380940A true CN112380940A (en) 2021-02-19
CN112380940B CN112380940B (en) 2024-05-24

Family

ID=74579279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011223290.8A Active CN112380940B (en) 2020-11-05 2020-11-05 Processing method and device of high-altitude parabolic monitoring image, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112380940B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206097116U (en) * 2015-12-29 2017-04-12 重庆安碧捷科技股份有限公司 Medical treatment privacy filtration system
CN106803943A (en) * 2016-03-31 2017-06-06 小蚁科技(香港)有限公司 Video monitoring system and equipment
US20200327409A1 (en) * 2017-11-16 2020-10-15 Samsung Electronics Co., Ltd. Method and device for hierarchical learning of neural network, based on weakly supervised learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Asako Kanezaki, "Unsupervised Image Segmentation by Backpropagation," ICASSP 2018, pp. 1543-1547 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658219A (en) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 High-altitude parabolic detection method, device and system, electronic device and storage medium
CN114339367A (en) * 2021-12-29 2022-04-12 杭州海康威视数字技术股份有限公司 Video frame processing method, device and equipment
CN114339367B (en) * 2021-12-29 2023-06-27 杭州海康威视数字技术股份有限公司 Video frame processing method, device and equipment

Also Published As

Publication number Publication date
CN112380940B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
US20180204052A1 (en) A method and apparatus for human face image processing
CN110189336B (en) Image generation method, system, server and storage medium
CN110874594A (en) Human body surface damage detection method based on semantic segmentation network and related equipment
CN112418216B (en) Text detection method in complex natural scene image
US9721387B2 (en) Systems and methods for implementing augmented reality
US20200410723A1 (en) Image Synthesis Method And Apparatus
EP3709212A1 (en) Image processing method and device for processing image, server and storage medium
CN110675940A (en) Pathological image labeling method and device, computer equipment and storage medium
CN112380940B (en) Processing method and device of high-altitude parabolic monitoring image, electronic equipment and storage medium
CN112270745B (en) Image generation method, device, equipment and storage medium
CN112752158B (en) Video display method and device, electronic equipment and storage medium
CN109934873B (en) Method, device and equipment for acquiring marked image
WO2023045183A1 (en) Image processing
CN116168351B (en) Inspection method and device for power equipment
CN112651953A (en) Image similarity calculation method and device, computer equipment and storage medium
CN113378958A (en) Automatic labeling method, device, equipment, storage medium and computer program product
CN111709762B (en) Information matching degree evaluation method, device, equipment and storage medium
CN112149570B (en) Multi-person living body detection method, device, electronic equipment and storage medium
CN116309494B (en) Method, device, equipment and medium for determining interest point information in electronic map
CN113436251A (en) Pose estimation system and method based on improved YOLO6D algorithm
CN111985400A (en) Face living body identification method, device, equipment and storage medium
CN111062388A (en) Advertisement character recognition method, system, medium and device based on deep learning
CN113269125B (en) Face recognition method, device, equipment and storage medium
CN109141457A (en) Navigate appraisal procedure, device, computer equipment and storage medium
CN115116083A (en) Method, system and storage medium for automatically identifying and correcting electric power graphics primitives of monitoring picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 101, 2nd Floor, Building 3, East District, No. 10 Northwest Wangdong Road, Haidian District, Beijing, 100193

Applicant after: Beijing softong Intelligent Technology Co.,Ltd.

Address before: 100193 202, floor 2, building 16, East District, No. 10, northwest Wangdong Road, Haidian District, Beijing

Applicant before: Beijing Softcom Smart City Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant