CN112329499B - Image processing method, device and equipment - Google Patents

Image processing method, device and equipment

Info

Publication number
CN112329499B
CN112329499B (application number CN201910715688.4A)
Authority
CN
China
Prior art keywords
monitoring
target
scene
image
preset
Prior art date
Legal status
Active
Application number
CN201910715688.4A
Other languages
Chinese (zh)
Other versions
CN112329499A (en)
Inventor
刘有文
Current Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd
Priority to CN201910715688.4A
Publication of CN112329499A
Application granted
Publication of CN112329499B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition
    • G06V 30/26 - Techniques for post-processing, e.g. correcting the recognition result
    • G06V 30/262 - Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
    • G06V 30/274 - Syntactic or semantic context, e.g. balancing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The embodiments of the present application provide an image processing method, apparatus and device, relating to the technical field of monitoring. The method is applied to a monitoring device and includes: determining the image acquisition scene in which the monitoring device is located as a target scene; determining, in a pre-stored monitoring algorithm component library, the monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component, for example by generating, according to the preset scene or the monitoring functions selected by the user, a configuration file that determines the execution path of pre-stored executable files; and, when a first image to be processed is acquired, processing the first image by using the target monitoring algorithm component. The application can reduce the coupling between hardware and software and improve convenience of use.

Description

Image processing method, device and equipment
Technical Field
The present application relates to the field of monitoring technologies, and in particular, to an image processing method, apparatus, and device.
Background
Currently, cameras are installed in a wide variety of scenes, and those scenes are monitored through the video images captured by the cameras. For example, a snapshot camera or a dome camera is installed in a highway scene to monitor moving vehicles; for another example, a camera is installed in a home scene to monitor whether an illegal intrusion, a fall or a similar event occurs.
In general, a camera installed in a highway scene is configured with programs for functions such as license plate recognition and traffic accident detection, while a camera installed in a home scene is configured with programs for functions such as face recognition and intrusion detection. That is, the programs configured in a camera are all tied to its scene, and a camera installed in an expressway scene cannot detect situations occurring in a home scene. The coupling between hardware and software is therefore strong, which makes the devices inconvenient to use.
Disclosure of Invention
The embodiments of the present application aim to provide an image processing method, an image processing apparatus and an image processing device, so as to reduce the coupling between hardware and software and improve convenience of use. The specific technical solutions are as follows:
in a first aspect, there is provided an image processing method, the method being applied to a monitoring device, the method comprising:
Determining an image acquisition scene where the monitoring equipment is located as a target scene;
in a pre-stored monitoring algorithm component library, determining a monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component;
And when a first image to be processed is acquired, processing the first image by utilizing the target monitoring algorithm component.
Optionally, determining the image acquisition scene in which the monitoring device is located as the target scene includes:
collecting a second image to be processed, and identifying the acquisition scene corresponding to the second image through a preset scene identification algorithm as the target scene; or
receiving a selection instruction of an image acquisition scene input by a user, and determining the image acquisition scene selected by the user as the target scene according to the selection instruction.
Optionally, identifying the acquisition scene corresponding to the second image through a preset scene identification algorithm includes:
detecting a target monitoring object contained in the second image through a preset monitoring object detection algorithm, and determining the acquisition scene corresponding to the target monitoring object according to the preset correspondence between monitoring objects and acquisition scenes; or
detecting a target region feature of a target monitoring region contained in the second image through a preset region feature detection algorithm, and determining the acquisition scene corresponding to the target region feature according to the preset correspondence between region features and acquisition scenes; or
detecting a target semantic feature contained in the second image through a preset semantic feature detection algorithm, and determining the acquisition scene corresponding to the target semantic feature according to the preset correspondence between semantic features and acquisition scenes.
Optionally, determining, in the pre-stored monitoring algorithm component library, the monitoring algorithm component corresponding to the target scene as the target monitoring algorithm component includes:
acquiring, from the pre-stored monitoring algorithm component library, a target sub-component for each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions;
determining the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function;
and composing the target sub-components, according to their calling sequence, into the monitoring algorithm component corresponding to the target scene, to obtain the target monitoring algorithm component.
Optionally, the method further comprises:
displaying a monitoring function setting interface corresponding to the target scene, wherein the monitoring function setting interface includes a plurality of monitoring functions to be used;
and, when a selection instruction corresponding to a first monitoring function is received, establishing a correspondence between the target scene and the first monitoring function.
In a second aspect, there is provided an image processing apparatus, the apparatus being applied to a monitoring device, the apparatus comprising:
The first determining module is used for determining an image acquisition scene where the monitoring equipment is located as a target scene;
the second determining module is used for determining a monitoring algorithm component corresponding to the target scene in a pre-stored monitoring algorithm component library as a target monitoring algorithm component;
And the processing module is used for processing the first image by utilizing the target monitoring algorithm component when the first image to be processed is acquired.
Optionally, the first determining module is specifically configured to:
collecting a second image to be processed, and identifying the acquisition scene corresponding to the second image through a preset scene identification algorithm as the target scene; or
receiving a selection instruction of an image acquisition scene input by a user, and determining the image acquisition scene selected by the user as the target scene according to the selection instruction.
Optionally, the first determining module is specifically configured to:
detecting a target monitoring object contained in the second image through a preset monitoring object detection algorithm, and determining the acquisition scene corresponding to the target monitoring object according to the preset correspondence between monitoring objects and acquisition scenes; or
detecting a target region feature of a target monitoring region contained in the second image through a preset region feature detection algorithm, and determining the acquisition scene corresponding to the target region feature according to the preset correspondence between region features and acquisition scenes; or
detecting a target semantic feature contained in the second image through a preset semantic feature detection algorithm, and determining the acquisition scene corresponding to the target semantic feature according to the preset correspondence between semantic features and acquisition scenes.
Optionally, the second determining module is specifically configured to:
acquiring, from the pre-stored monitoring algorithm component library, a target sub-component for each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions;
determining the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function;
and composing the target sub-components, according to their calling sequence, into the monitoring algorithm component corresponding to the target scene, to obtain the target monitoring algorithm component.
Optionally, the apparatus further includes:
The display module is used for displaying a monitoring function setting interface corresponding to the target scene, wherein the monitoring function setting interface comprises a plurality of monitoring functions to be used;
the establishing module is configured to establish a correspondence between the target scene and a first monitoring function when a selection instruction corresponding to the first monitoring function is received.
In a third aspect, a monitoring device is provided, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the first aspects when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any of the first aspects.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects described above.
By applying the embodiments of the present application, the monitoring device can determine the image acquisition scene in which it is located as a target scene, and then determine, in the pre-stored monitoring algorithm component library, the monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component. When a first image to be processed is acquired, the first image is processed by using the target monitoring algorithm component. In this way, one hardware device can realize a variety of software functions, which reduces the coupling between hardware and software and improves convenience of use.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the embodiments or in the description of the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an example of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 4 is another schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a monitoring device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
The embodiment of the application provides an image processing method which can be applied to monitoring equipment. The monitoring device may be a video camera, or may be other devices with an image acquisition function. The monitoring device may have a library of monitoring algorithm components stored therein. The monitoring algorithm components corresponding to various scenes can be stored in the monitoring algorithm component library so as to monitor different scenes.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application, where the method specifically includes the following steps.
Step 101, determining an image acquisition scene where the monitoring equipment is located as a target scene.
In practical applications, a technician may install the monitoring device in a certain monitoring area, so that the monitoring device can collect images of the monitoring area in real time in order to monitor and track monitored objects in that area. In the embodiments of the present application, the monitoring device can determine the image acquisition scene in which it is located as the target scene after start-up, or whenever a preset period is reached. The target scene may be, for example, a highway scene, a traffic light intersection scene, a home scene or an entrance/exit scene.
Optionally, the monitoring device may determine the image acquisition scene in various manners; two possible processing manners are provided in the embodiments of the present application, as follows.
The first mode is to collect a second image to be processed, and identify a collected scene corresponding to the second image through a preset scene identification algorithm to serve as a target scene.
In the embodiments of the present application, a scene recognition algorithm can be pre-stored in the monitoring device. The monitoring device may acquire an image in the monitoring area (which may be referred to as a second image to be processed) after start-up, or when a preset period is reached. The monitoring device can then identify the second image through the scene recognition algorithm to obtain the acquisition scene corresponding to the second image, and take the identified acquisition scene as the target scene. Any algorithm in the prior art capable of identifying a scene (such as a neural network algorithm, a deep learning algorithm or a machine learning algorithm) may be applied here; the embodiments of the present application are not limited in this respect.
Optionally, the embodiments of the present application provide several examples of how the scene identification in step 101 is performed when different scene recognition algorithms are applied, as described below.
The first example is that the monitoring device detects the target monitoring object contained in the second image through a preset monitoring object detection algorithm, and determines the acquisition scene corresponding to the target monitoring object according to the corresponding relation between the preset monitoring object and the acquisition scene.
In the embodiments of the present application, the monitoring device may store a monitoring object detection algorithm in advance. After acquiring the second image, the monitoring device may extract the object features contained in the second image through the monitoring object detection algorithm and determine, from the extracted object features, the target monitoring object contained in the second image (i.e. determine the specific type of the object). It can then determine the acquisition scene corresponding to the target monitoring object according to the preset correspondence between monitoring objects and acquisition scenes. For example, if the second image contains a red light, it may be determined that the target scene is a traffic light intersection scene; if the second image contains furniture such as a bed or a sofa, it may be determined that the target scene is a home scene. Any algorithm in the prior art capable of detecting a monitored object (such as a neural network algorithm, a deep learning algorithm or a machine learning algorithm) may be applied here; the embodiments of the present application are not limited in this respect.
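As a minimal sketch of this first example, the preset correspondence between monitoring objects and acquisition scenes can be modelled as a simple lookup table. The object labels, scene names and function name below are illustrative assumptions, not part of this application:

    # Hypothetical correspondence between detected monitoring objects and
    # acquisition scenes; the labels are assumed for illustration only.
    OBJECT_TO_SCENE = {
        "red_light": "traffic_light_intersection",
        "lane_marking": "highway",
        "bed": "home",
        "sofa": "home",
    }

    def scene_from_objects(detected_objects):
        """Return the acquisition scene implied by the first detected object
        that has a preset correspondence, or None if no mapping applies."""
        for obj in detected_objects:
            scene = OBJECT_TO_SCENE.get(obj)
            if scene is not None:
                return scene
        return None

    # e.g. scene_from_objects(["sofa", "person"]) returns "home"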
The second example is that the monitoring device detects the target region characteristics of the target monitoring region contained in the second image through a preset region characteristic detection algorithm, and determines the acquisition scene corresponding to the target region characteristics according to the corresponding relation between the preset region characteristics and the acquisition scene.
In the embodiments of the present application, the monitoring device may store a region feature detection algorithm in advance. After acquiring the second image, the monitoring device may determine one or more candidate regions in the second image through the region feature detection algorithm and extract the region feature (i.e. the target region feature) of each candidate region. The monitoring device can then determine the acquisition scene corresponding to the target region feature according to the preset correspondence between region features and acquisition scenes. Optionally, when multiple candidate regions are determined, the target candidate region with the highest importance may be selected according to the position of each candidate region, and its region feature taken as the target region feature. For example, the image captured by the monitoring device may be divided into regions using a preset blanking line in the camera as a boundary: the region above the blanking line may be regarded as a secondary region, and the region below the blanking line may be regarded as the important region for feature extraction. Any algorithm in the prior art capable of detecting region features may be applied here; the embodiments of the present application are not limited in this respect.
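For illustration only, the region division by a preset blanking line can be sketched as follows, assuming the image is a row-major sequence of pixel rows and `extract_features` stands in for the preset region feature detection algorithm (both names are assumptions):

    def target_region_feature(image, blanking_line_y, extract_features):
        """Treat the area below the preset blanking line as the important
        region and extract its region feature; the area above the line is
        a secondary region and is skipped here."""
        important_region = image[blanking_line_y:]   # rows below the blanking line
        return extract_features(important_region)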
The third example is that the monitoring device detects target semantic features contained in the second image through a preset semantic feature detection algorithm, and determines an acquisition scene corresponding to the target semantic features according to the corresponding relation between the preset semantic features and the acquisition scene.
In the embodiments of the present application, the monitoring device may store a semantic feature detection algorithm in advance. After acquiring the second image, the monitoring device may perform a global analysis of the second image through the semantic feature detection algorithm, so as to obtain the semantic features corresponding to the second image. Optionally, the monitoring device may also take a preset number of frames before the second image in the monitoring video (which may be referred to as third images) and a preset number of frames after it (which may be referred to as fourth images), and detect the second, third and fourth images together through the semantic feature detection algorithm to determine the target semantic features contained in the second image. The monitoring device can then determine the acquisition scene corresponding to the target semantic features according to the preset correspondence between semantic features and acquisition scenes. For example, if only people, vehicles, grassland and animals are identified in the image, with no road markings, the acquisition scene may be determined to be a residential area scene; if only vehicles, road markings and expressway signs are identified, with no people or animals, the acquisition scene may be determined to be an expressway scene. Any algorithm in the prior art capable of detecting semantic features (such as a neural network algorithm, a deep learning algorithm or a machine learning algorithm) may be applied here; the embodiments of the present application are not limited in this respect.
Through the semantic feature detection algorithm, the global information and context information of the second image are analyzed, rather than only a local area or a specific object contained in the second image; this reduces the influence of local noise on scene classification and improves the robustness of scene classification.
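A minimal sketch of this multi-frame variant might look as follows; the frame indexing, the `detect_semantics` callable and the correspondence format are all illustrative assumptions:

    def semantic_scene(frames, idx, n, detect_semantics, semantic_to_scene):
        """Detect semantic features over the second image (frames[idx]) plus
        n frames before and after it, then map the feature set to an
        acquisition scene via the preset correspondence."""
        window = frames[max(0, idx - n): idx + n + 1]
        features = detect_semantics(window)        # e.g. {"vehicle", "road_marking"}
        for required, scene in semantic_to_scene:  # list of (feature set, scene) pairs
            if required <= features:               # all required features present
                return scene
        return None

    # e.g. semantic_to_scene = [
    #     ({"vehicle", "road_marking", "highway_sign"}, "highway"),
    #     ({"person", "grass", "animal"}, "residential_area"),
    # ]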
In the second mode, a selection instruction of the image acquisition scene input by the user is received, and the image acquisition scene selected by the user is determined as the target scene according to the selection instruction.
In the embodiments of the present application, a scene setting interface may be preset in an application program of the monitoring device. The monitoring device may display the scene setting interface through a display component or a control terminal. The scene setting interface can contain options for image acquisition scenes, from which the user can select the scene in which the monitoring device is actually located. The monitoring device can receive the selection instruction for the image acquisition scene and then, according to the selection instruction, determine the image acquisition scene selected by the user as the target scene. Optionally, the image acquisition scenes offered in the scene setting interface are those scenes for which the monitoring algorithm component library can provide a monitoring algorithm component.
And 102, determining a monitoring algorithm component corresponding to the target scene in a pre-stored monitoring algorithm component library as a target monitoring algorithm component.
In the embodiments of the present application, the monitoring algorithm component library may store monitoring algorithm components corresponding to various scenes, and a monitoring algorithm component may be a recognition model pre-trained for a specific scene (for example, a model trained with a neural network algorithm, a deep learning algorithm or a machine learning algorithm). After determining the target scene, the monitoring device can search the pre-stored monitoring algorithm component library for the monitoring algorithm component corresponding to the target scene. If a monitoring algorithm component corresponding to the target scene is found, that component is the target monitoring algorithm component. If no monitoring algorithm component corresponding to the target scene is found, the monitoring device does not support monitoring of the target scene, and may output alert information to prompt the user that the current scene cannot be identified.
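The lookup itself is straightforward; as a sketch only, with an assumed dictionary-shaped library and a warning callback:

    def find_target_component(component_library, target_scene, warn):
        """Look up the monitoring algorithm component for the target scene;
        emit alert information if the scene is not supported."""
        component = component_library.get(target_scene)
        if component is None:
            warn("current scene cannot be identified: " + target_scene)
        return component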
Optionally, multiple monitoring functions may be provided in each scenario. For example, in a highway scenario, travelling vehicles may be monitored, and the monitoring functions may include: vehicle overspeed detection, illegal parking detection, abnormal driving route detection, abnormal driver behavior detection, and detection of littering from vehicles. As another example, in a traffic light intersection scene, pedestrians and vehicles may be monitored, and the monitoring functions may include: pedestrian red-light running detection, vehicle line-pressing detection, illegal driving detection, littering detection, and road occupation detection. As another example, in a home scenario, people or specific objects may be monitored, and the monitoring functions may include: theft detection, smoke detection, fall detection, and monitoring of a particular object (such as an elderly person, a child or a pet). As another example, in an entrance/exit scenario, passing vehicles may be monitored, and the monitoring functions may include license plate recognition, vehicle feature extraction and the like.
Optionally, in any scenario, a detection model may be trained for each monitoring function in that scenario and used as a sub-component. The monitoring device can obtain, from the pre-stored monitoring algorithm component library, the target sub-component of each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions. It can then determine the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function, and compose the target sub-components, in that calling sequence, into the monitoring algorithm component corresponding to the target scene, obtaining the target monitoring algorithm component.
In the embodiments of the present application, the correspondence between the target scene and monitoring functions can be pre-stored in the monitoring device; the monitoring functions in this correspondence may be all of the monitoring functions in the scene or only some of them. The correspondence may be preset by the manufacturer or user-defined; the specific setting process is described in detail later. For each monitoring function in the target scene, the monitoring device also stores the corresponding execution sequence. For example, in a traffic light intersection scenario, the monitoring functions may include license plate recognition, red-light running detection and line-pressing detection, and the execution sequence may be: license plate recognition, then line-pressing detection, then red-light running detection.
The monitoring device may obtain, from the pre-stored monitoring algorithm component library, the target sub-component of each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions; determine the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function; and combine the target sub-components, in that calling sequence, into a monitoring algorithm component, obtaining the target monitoring algorithm component, which is a machine-executable program. In this way, when the monitoring device executes the target monitoring algorithm component, it performs the monitoring processing according to the preset execution sequence of the monitoring functions and obtains the required monitoring result.
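The composition of sub-components by calling sequence can be sketched as follows. The scene names, function names, stand-in detectors and the shared-context convention are all assumptions made for illustration, not the application's actual implementation:

    # Pre-stored library: one trained detection model (sub-component) per
    # monitoring function; lambdas stand in for real models here.
    COMPONENT_LIBRARY = {
        "license_plate_recognition": lambda image, ctx: ctx,
        "line_pressing_detection":   lambda image, ctx: ctx,
        "red_light_running":         lambda image, ctx: ctx,
    }
    # Preset correspondence between the target scene and its monitoring functions.
    SCENE_FUNCTIONS = {
        "traffic_light_intersection": ["red_light_running",
                                       "license_plate_recognition",
                                       "line_pressing_detection"],
    }
    # Preset execution sequence of the monitoring functions.
    EXECUTION_ORDER = {"license_plate_recognition": 0,
                       "line_pressing_detection": 1,
                       "red_light_running": 2}

    def build_target_component(target_scene):
        """Compose the target sub-components, in calling sequence, into one
        target monitoring algorithm component."""
        names = sorted(SCENE_FUNCTIONS[target_scene], key=EXECUTION_ORDER.get)
        subcomponents = [COMPONENT_LIBRARY[name] for name in names]
        def target_component(first_image):
            context = {}
            for sub in subcomponents:        # invoke in the preset order
                context = sub(first_image, context)
            return context
        return target_component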
Optionally, when the user sets the correspondence between the target scene and monitoring functions, the processing of the monitoring device may be as follows: displaying a monitoring function setting interface corresponding to the target scene, where the monitoring function setting interface includes a plurality of monitoring functions to be used; and, when a selection instruction corresponding to a first monitoring function is received, establishing a correspondence between the target scene and the first monitoring function.
In the embodiment of the application, the monitoring equipment can display the monitoring function setting interface corresponding to the target scene through the display component or the control terminal. The monitoring function setting interface comprises monitoring functions which are in a target scene and can provide the monitoring functions of the sub-components in the monitoring algorithm component library.
The user can select, in the monitoring function setting interface, a monitoring function to be used in the target scene (denoted as a first monitoring function). The monitoring device can receive the selection instruction corresponding to the first monitoring function and then establish a correspondence between the target scene and the first monitoring function; that is, the correspondence between the target scene and monitoring functions can be set by the user. In this way, the user can choose to use only some of the monitoring functions in the target scene, as needed.
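Establishing the correspondence upon selection amounts to recording the chosen function against the scene; a sketch with the interface plumbing omitted and all names assumed:

    SCENE_TO_FUNCTIONS = {}   # user-configurable correspondence store (assumed)

    def on_function_selected(target_scene, first_function):
        """Establish the correspondence between the target scene and the
        monitoring function selected in the settings interface."""
        SCENE_TO_FUNCTIONS.setdefault(target_scene, []).append(first_function)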
And 103, when the first image to be processed is acquired, processing the first image by utilizing a target monitoring algorithm component.
In the embodiments of the present application, when the monitoring device acquires the first image to be processed, it can run the target monitoring algorithm component, so that each monitoring function is executed through the sub-components contained in the target monitoring algorithm component and the first image is processed. Optionally, when the monitoring device identifies a preset alarm event (for example, when smoke is detected by the smoke detection function), the monitoring device may send alarm information to the user's terminal device, so that the user learns that the alarm event has occurred.
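Putting the pieces together, the processing of the first image plus alarm dispatch might be sketched like this; the event names, the result-dictionary convention and the `send_alarm` callback are assumptions, not part of this application:

    ALARM_EVENTS = {"smoke_detected", "intrusion", "person_fallen"}  # assumed

    def process_first_image(first_image, target_component, send_alarm):
        """Run the target monitoring algorithm component on the first image
        and forward any preset alarm events to the user's terminal device."""
        results = target_component(first_image)
        for event in results.get("events", []):
            if event in ALARM_EVENTS:
                send_alarm(event)
        return results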
Optionally, the embodiments of the present application further provide two examples of how the target monitoring algorithm component can be generated, as follows.
In one implementation, a sub-component for each monitoring function may be pre-stored in the monitoring device as an executable file, and the execution sequence (which may be referred to as an execution path) of all monitoring functions in each scenario may also be stored in the monitoring device. After the monitoring device determines the target scene, a configuration file can be generated according to the monitoring functions selected by the user. The configuration file may include switch information for each monitoring function (i.e. whether the monitoring function is to be used), so as to control the execution path of the executable files. The monitoring device may invoke the executable files according to the execution path represented by the configuration file, thereby obtaining an executable program (i.e. the target monitoring algorithm component).
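For instance, such a configuration file could carry one switch per monitoring function. The JSON layout below is an assumption for illustration, not the application's actual file format:

    import json

    # Hypothetical configuration file: switch information per monitoring
    # function, which controls the execution path of the executable files.
    CONFIG_TEXT = """
    {
      "scene": "traffic_light_intersection",
      "functions": {
        "license_plate_recognition": true,
        "line_pressing_detection": true,
        "red_light_running": false
      }
    }
    """

    config = json.loads(CONFIG_TEXT)
    enabled = [name for name, on in config["functions"].items() if on]
    # The device would then invoke the pre-stored executables for `enabled`
    # in the preset execution sequence to form the target component.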
In another implementation, a program framework for monitoring each scene may be stored in the monitoring device, the program framework containing the source code that implements each monitoring function. After the monitoring device determines the target scene and the monitoring functions selected by the user, it can acquire the corresponding source code and compile it into an executable program through a compiler, obtaining the target monitoring algorithm component.
In this implementation, each monitoring function may be divided into modules, and the interface of each module may be designed in a standardized manner. A division into detection, tracking, single-frame classification and multi-frame classification modules can handle most common scenes, and thanks to the development of deep learning the internal logic of these modules is simple enough for their interfaces to be unified. For example, for detection in different scenes, the interface of the detection module may be standardized as: an input image, an output queue of detected targets, and the memory the program needs at runtime. Whatever the scene, the interface can be unified, and the internal processing logic is only preprocessing plus the detection network. The tracking module may likewise be standardized as: an input image, an input queue of detected targets, an output queue of tracked targets, and the memory required at runtime; whatever object is tracked, it can rely on this interface and on internal logic computed from feature distances. Other modules are similar.
Based on such a standardized design, a data structure describing the inputs and outputs of a functional module can be abstracted, and the format of each interface's input data can be constrained through this data structure to cope with different scenes and monitoring functions. For example, the same detection module may receive input images of different resolutions for a road scene and for an indoor scene; a data structure can then be used to constrain and set the resolution, that is, to use input resolution A for the road scene and input resolution B for the indoor scene. In this way, by designing such a data structure for each scene, the monitoring device only needs to store the source code of one detection module to compile a detection module applicable to every scene.
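The standardized interfaces and the constraining data structure described above might be abstracted roughly as follows; the module names, fields and concrete resolutions are illustrative assumptions:

    from dataclasses import dataclass
    from typing import List, Protocol

    @dataclass
    class InputSpec:
        """Data structure constraining a module's input format per scene,
        e.g. resolution A for a road scene vs. resolution B for indoor."""
        width: int
        height: int

    class DetectionModule(Protocol):
        def run(self, image, memory) -> List[dict]:
            """Unified interface: input image and runtime memory in,
            detected target queue out, regardless of scene."""
            ...

    class TrackingModule(Protocol):
        def run(self, image, detections: List[dict], memory) -> List[dict]:
            """Input image and detection queue in, tracked target queue out."""
            ...

    ROAD_SPEC = InputSpec(width=1920, height=1080)   # resolution "A" (assumed)
    INDOOR_SPEC = InputSpec(width=1280, height=720)  # resolution "B" (assumed)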
In addition, other processing manners capable of generating the target monitoring algorithm component may also be applied; the embodiments of the present application are not limited in this respect.
In the embodiments of the present application, the monitoring device can determine the image acquisition scene in which it is located as a target scene, and then determine, in the pre-stored monitoring algorithm component library, the monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component. When a first image to be processed is acquired, the first image is processed by using the target monitoring algorithm component. In this way, one hardware device can realize a variety of software functions, which reduces the coupling between hardware and software and improves convenience of use.
The embodiment of the application also provides an example of an image processing method, as shown in fig. 2, which specifically comprises the following steps.
Step 201, a second image to be processed is acquired.
Step 202, identifying an acquisition scene corresponding to the second image as a target scene through a preset scene identification algorithm.
Step 203, receiving a monitoring function selected by a user to be used in a target scene.
Step 204, acquiring, from the pre-stored monitoring algorithm component library, the source code implementing the monitoring functions selected by the user, and compiling the source code into an executable program through a compiler according to the preset execution sequence of the monitoring functions, to obtain the target monitoring algorithm component.
Step 205, when a first image to be processed is acquired, processing the first image by using the target monitoring algorithm component.
Based on the same technical concept, the embodiment of the present application further provides an image processing apparatus, which is applied to a monitoring device, as shown in fig. 3, and includes:
A first determining module 310, configured to determine an image acquisition scene where the monitoring device is located, as a target scene;
The second determining module 320 is configured to determine, in a pre-stored monitoring algorithm component library, a monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component;
and the processing module 330 is configured to process the first image by using the target monitoring algorithm component when the first image to be processed is acquired.
Optionally, the first determining module 310 is specifically configured to:
collecting a second image to be processed, and identifying the acquisition scene corresponding to the second image through a preset scene identification algorithm as the target scene; or
Receiving a selection instruction of an image acquisition scene input by a user, and determining the image acquisition scene selected by the user as a target scene according to the selection instruction.
Optionally, the first determining module 310 is specifically configured to:
detecting a target monitoring object contained in the second image through a preset monitoring object detection algorithm, and determining the acquisition scene corresponding to the target monitoring object according to the preset correspondence between monitoring objects and acquisition scenes; or
detecting a target region feature of a target monitoring region contained in the second image through a preset region feature detection algorithm, and determining the acquisition scene corresponding to the target region feature according to the preset correspondence between region features and acquisition scenes; or
detecting a target semantic feature contained in the second image through a preset semantic feature detection algorithm, and determining the acquisition scene corresponding to the target semantic feature according to the preset correspondence between semantic features and acquisition scenes.
Optionally, the second determining module 320 is specifically configured to:
acquiring, from the pre-stored monitoring algorithm component library, a target sub-component for each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions;
determining the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function;
and composing the target sub-components, according to their calling sequence, into the monitoring algorithm component corresponding to the target scene, to obtain the target monitoring algorithm component.
Optionally, as shown in fig. 4, the apparatus further includes:
The display module 340 is configured to display a monitoring function setting interface corresponding to the target scene, where the monitoring function setting interface includes a plurality of monitoring functions to be used;
The establishing module 350 is configured to establish a correspondence between the target scene and a first monitoring function when a selection instruction corresponding to the first monitoring function is received.
In the embodiments of the present application, the monitoring device can determine the image acquisition scene in which it is located as a target scene, and then determine, in the pre-stored monitoring algorithm component library, the monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component. When a first image to be processed is acquired, the first image is processed by using the target monitoring algorithm component. In this way, one hardware device can realize a variety of software functions, which reduces the coupling between hardware and software and improves convenience of use.
The embodiments of the present application also provide a monitoring device, as shown in fig. 5, comprising a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 communicate with each other through the communication bus 504,
A memory 503 for storing a computer program;
The processor 501 is configured to execute the program stored in the memory 503, and implement the following steps:
Determining an image acquisition scene where the monitoring equipment is located as a target scene;
in a pre-stored monitoring algorithm component library, determining a monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component;
And when a first image to be processed is acquired, processing the first image by utilizing the target monitoring algorithm component.
Optionally, determining the image acquisition scene in which the monitoring device is located as the target scene includes:
collecting a second image to be processed, and identifying the acquisition scene corresponding to the second image through a preset scene identification algorithm as the target scene; or
receiving a selection instruction of an image acquisition scene input by a user, and determining the image acquisition scene selected by the user as the target scene according to the selection instruction.
Optionally, identifying the acquisition scene corresponding to the second image through a preset scene identification algorithm includes:
detecting a target monitoring object contained in the second image through a preset monitoring object detection algorithm, and determining the acquisition scene corresponding to the target monitoring object according to the preset correspondence between monitoring objects and acquisition scenes; or
detecting a target region feature of a target monitoring region contained in the second image through a preset region feature detection algorithm, and determining the acquisition scene corresponding to the target region feature according to the preset correspondence between region features and acquisition scenes; or
detecting a target semantic feature contained in the second image through a preset semantic feature detection algorithm, and determining the acquisition scene corresponding to the target semantic feature according to the preset correspondence between semantic features and acquisition scenes.
Optionally, determining, in the pre-stored monitoring algorithm component library, the monitoring algorithm component corresponding to the target scene as the target monitoring algorithm component includes:
acquiring, from the pre-stored monitoring algorithm component library, a target sub-component for each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions;
determining the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function;
and composing the target sub-components, according to their calling sequence, into the monitoring algorithm component corresponding to the target scene, to obtain the target monitoring algorithm component.
Optionally, the method further comprises:
Displaying a monitoring function setting interface corresponding to the target scene, wherein the monitoring function setting interface comprises a plurality of monitoring functions to be used;
and, when a selection instruction corresponding to a first monitoring function is received, establishing a correspondence between the target scene and the first monitoring function.
The communication bus mentioned for the monitoring device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the monitoring device and other devices.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Based on the same technical idea, the embodiment of the application further provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program realizes the steps of the image processing method when being executed by a processor.
Based on the same technical idea, the embodiments of the present application also provide a computer program product containing instructions, which when run on a computer, cause the computer to perform the above-mentioned image processing method.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising" and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (8)

1. An image processing method, wherein the method is applied to a monitoring device, the method comprising:
determining an image acquisition scene where the monitoring equipment is located as a target scene; wherein determining the image acquisition scene where the monitoring equipment is located as the target scene comprises: collecting a second image to be processed, and identifying the acquisition scene corresponding to the second image through a preset scene identification algorithm as the target scene; or receiving a selection instruction of an image acquisition scene input by a user, and determining the image acquisition scene selected by the user as the target scene according to the selection instruction;
in a pre-stored monitoring algorithm component library, determining a monitoring algorithm component corresponding to the target scene as a target monitoring algorithm component; wherein determining, in the pre-stored monitoring algorithm component library, the monitoring algorithm component corresponding to the target scene as the target monitoring algorithm component comprises: acquiring, from the pre-stored monitoring algorithm component library, a target sub-component for each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions, wherein the target sub-component of any monitoring function is a trained detection model corresponding to that monitoring function; determining the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function; and composing the target sub-components, according to their calling sequence, into the monitoring algorithm component corresponding to the target scene, to obtain the target monitoring algorithm component;
And when a first image to be processed is acquired, processing the first image by utilizing the target monitoring algorithm component.
2. The method according to claim 1, wherein the identifying, by a preset scene identification algorithm, the acquisition scene corresponding to the second image includes:
detecting a target monitoring object contained in the second image through a preset monitoring object detection algorithm, and determining the acquisition scene corresponding to the target monitoring object according to the preset correspondence between monitoring objects and acquisition scenes; or
detecting a target region feature of a target monitoring region contained in the second image through a preset region feature detection algorithm, and determining the acquisition scene corresponding to the target region feature according to the preset correspondence between region features and acquisition scenes; or
detecting a target semantic feature contained in the second image through a preset semantic feature detection algorithm, and determining the acquisition scene corresponding to the target semantic feature according to the preset correspondence between semantic features and acquisition scenes.
3. The method according to claim 1, wherein the method further comprises:
Displaying a monitoring function setting interface corresponding to the target scene, wherein the monitoring function setting interface comprises a plurality of monitoring functions to be used;
and, when a selection instruction corresponding to a first monitoring function is received, establishing a correspondence between the target scene and the first monitoring function.
4. An image processing apparatus, the apparatus being applied to a monitoring device, the apparatus comprising:
the first determining module is used for determining an image acquisition scene where the monitoring equipment is located as a target scene; the first determining module is specifically configured to: collect a second image to be processed, and identify the acquisition scene corresponding to the second image through a preset scene identification algorithm as the target scene; or receive a selection instruction of an image acquisition scene input by a user, and determine the image acquisition scene selected by the user as the target scene according to the selection instruction;
the second determining module is used for determining a monitoring algorithm component corresponding to the target scene in a pre-stored monitoring algorithm component library as a target monitoring algorithm component; the second determining module is specifically configured to: acquire, from the pre-stored monitoring algorithm component library, a target sub-component for each monitoring function corresponding to the target scene according to the preset correspondence between the target scene and monitoring functions, wherein the target sub-component of any monitoring function is a trained detection model corresponding to that monitoring function; determine the calling sequence of each target sub-component according to the preset execution sequence of each monitoring function; and compose the target sub-components, according to their calling sequence, into the monitoring algorithm component corresponding to the target scene, to obtain the target monitoring algorithm component;
And the processing module is used for processing the first image by utilizing the target monitoring algorithm component when the first image to be processed is acquired.
5. The apparatus of claim 4, wherein the first determining module is specifically configured to:
detecting a target monitoring object contained in the second image through a preset monitoring object detection algorithm, and determining the acquisition scene corresponding to the target monitoring object according to the preset correspondence between monitoring objects and acquisition scenes; or
detecting a target region feature of a target monitoring region contained in the second image through a preset region feature detection algorithm, and determining the acquisition scene corresponding to the target region feature according to the preset correspondence between region features and acquisition scenes; or
detecting a target semantic feature contained in the second image through a preset semantic feature detection algorithm, and determining the acquisition scene corresponding to the target semantic feature according to the preset correspondence between semantic features and acquisition scenes.
6. The apparatus of claim 4, wherein the apparatus further comprises:
The display module is used for displaying a monitoring function setting interface corresponding to the target scene, wherein the monitoring function setting interface comprises a plurality of monitoring functions to be used;
the establishing module is configured to establish a correspondence between the target scene and a first monitoring function when a selection instruction corresponding to the first monitoring function is received.
7. A monitoring device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
A processor for implementing the image processing method of any one of claims 1 to 3 when executing a program stored on a memory.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the image processing method of any of claims 1-3.
CN201910715688.4A 2019-08-05 2019-08-05 Image processing method, device and equipment Active CN112329499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910715688.4A CN112329499B (en) 2019-08-05 2019-08-05 Image processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910715688.4A CN112329499B (en) 2019-08-05 2019-08-05 Image processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN112329499A (en) 2021-02-05
CN112329499B (en) 2024-07-09

Family

ID=74319935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910715688.4A Active CN112329499B (en) 2019-08-05 2019-08-05 Image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN112329499B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065615A (en) * 2021-06-02 2021-07-02 南京甄视智能科技有限公司 Scenario-based edge analysis algorithm issuing method and device and storage medium
CN114782899A (en) * 2022-06-15 2022-07-22 浙江大华技术股份有限公司 Image processing method and device and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815844A (en) * 2018-12-29 2019-05-28 西安天和防务技术股份有限公司 Object detection method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3185180B1 (en) * 2014-10-23 2019-01-02 Axis AB Modification of at least one parameter used by a video processing algorithm for monitoring of a scene
CN106454250A (en) * 2016-11-02 2017-02-22 北京弘恒科技有限公司 Intelligent recognition and early warning processing information platform
CN108717521A (en) * 2018-04-17 2018-10-30 智慧互通科技有限公司 A kind of parking lot order management method and system based on image
CN109886138A (en) * 2019-01-27 2019-06-14 武汉星巡智能科技有限公司 Control method, device and computer readable storage medium based on scene Recognition

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815844A (en) * 2018-12-29 2019-05-28 西安天和防务技术股份有限公司 Object detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112329499A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
CN106952303B (en) Vehicle distance detection method, device and system
CN111582006A (en) Video analysis method and device
CN109766779B (en) Loitering person identification method and related product
CN105144705B (en) Object monitoring system, object monitoring method, and program for extracting object to be monitored
CN106803083B (en) Pedestrian detection method and device
Battiato et al. On-board monitoring system for road traffic safety analysis
CN110895662A (en) Vehicle overload alarm method and device, electronic equipment and storage medium
US20090060276A1 (en) Method for detecting and/or tracking objects in motion in a scene under surveillance that has interfering factors; apparatus; and computer program
CN104239881A (en) Method and system for automatically finding and registering target in surveillance video
CN112329499B (en) Image processing method, device and equipment
CN114445768A (en) Target identification method and device, electronic equipment and storage medium
CN113496213A (en) Method, device and system for determining target perception data and storage medium
CN112288975A (en) Event early warning method and device
CN111768630A (en) Violation waste image detection method and device and electronic equipment
CN111753587A (en) Method and device for detecting falling to ground
CN111985304A (en) Patrol alarm method, system, terminal equipment and storage medium
CN112597924B (en) Electric bicycle track tracking method, camera device and server
CN114511825A (en) Method, device and equipment for detecting area occupation and storage medium
KR102435435B1 (en) System for searching numbers of vehicle and pedestrian based on artificial intelligence
WO2012074366A2 (en) A system and a method for detecting a loitering event
CN110766949B (en) Violation snapshot method and device
CN110581979B (en) Image acquisition system, method and device
CN110659384A (en) Video structured analysis method and device
US20240137473A1 (en) System and method to efficiently perform data analytics on vehicle sensor data
CN114842530A (en) Object detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant