CN113672252A - Model upgrading method, video monitoring system, electronic equipment and readable storage medium - Google Patents

Model upgrading method, video monitoring system, electronic equipment and readable storage medium

Info

Publication number
CN113672252A
CN113672252A
Authority
CN
China
Prior art keywords
model
upgrading
data
image data
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110839616.8A
Other languages
Chinese (zh)
Inventor
谭琳
尤兰婷
周方琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202110839616.8A
Publication of CN113672252A
Legal status: Pending


Classifications

    • G06F (Electric digital data processing) → G06F8/00 Arrangements for software engineering → G06F8/60 Software deployment → G06F8/65 Updates
    • G06T (Image data processing or generation, in general) → G06T7/00 Image analysis → G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/10 Image acquisition modality → G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details → G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing → G06T2207/30196 Human being; Person → G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a model upgrading method, a video monitoring system, an electronic device and a readable storage medium. The model upgrading method comprises the following steps: obtaining a task to be executed, wherein the task to be executed comprises a path address of a video stream; acquiring the corresponding video stream based on the path address, and sending the video stream to the current first model for analysis to obtain analyzed first image data; training a base model on the first image data to obtain a second model, and generating model upgrade data based on the second model; and upgrading the current first model with the model upgrade data to obtain an updated first model. In this way, the adaptability of a model deployed in a video monitoring system to its actual environment can be improved, and the accuracy of the model's analysis results increased.

Description

Model upgrading method, video monitoring system, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of video monitoring technologies, and in particular, to a model upgrade method, a video monitoring system, an electronic device, and a readable storage medium.
Background
With video monitoring systems becoming increasingly widespread, using models to analyze images has become the mainstream approach, which in turn places higher requirements on the accuracy of model analysis results.
In the prior art, a model is generally trained on sample data during experiments and then deployed in the application scene. However, such sample data comes from relatively narrow sources, and when a model trained on it analyzes a specific application scene, the accuracy of its results drifts over time. In view of this, improving the adaptability of models used in video monitoring systems to the actual environment, and thereby the accuracy of their analysis results, has become an urgent problem.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a model upgrading method, a video monitoring system, an electronic device and a readable storage medium, which can improve the adaptability of a model applied to the video monitoring system to an actual environment and improve the accuracy of a model analysis result.
In order to solve the above technical problem, a first aspect of the present application provides a model upgrading method, including: the method comprises the steps of obtaining a task to be executed, wherein the task to be executed comprises a path address of a video stream; acquiring a corresponding video stream based on the path address, and issuing the video stream to a current first model for analysis so as to obtain analyzed first image data; training a base model based on the first image data to obtain a second model, and generating model upgrade data based on the second model; and upgrading the current first model by using the model upgrading data to obtain the updated first model.
In order to solve the above technical problem, a second aspect of the present application provides a video monitoring system, including: the system comprises a client, a scheduling service module, a main control module, an analysis module and a model training module, wherein the client is used for obtaining a task to be executed, and the task to be executed comprises a path address of a video stream; the scheduling service module is used for distributing the tasks to be executed to the corresponding main control modules; the main control module is used for acquiring a corresponding video stream based on the path address, and sending the video stream to the current first model in the analysis module for analysis so as to obtain analyzed first image data; the model training module is used for training a basic model based on the first image data to obtain a second model and generating model upgrading data based on the second model; the main control module is further configured to upgrade a current first model in the analysis module by using the model upgrade data to obtain the updated first model.
To solve the above technical problem, a third aspect of the present application provides an electronic device, including: a memory and a processor coupled to each other, wherein the memory stores program data, and the processor calls the program data to execute the method of the first aspect.
To solve the above technical problem, a fourth aspect of the present application provides a computer-readable storage medium having stored thereon program data, which when executed by a processor, implements the method of the first aspect.
The beneficial effect of this application is: the task to be executed includes the path address of a video stream, so the video stream collected in the application scene is obtained directly. After being fetched via the path address, the video stream is sent to the current first model for analysis, yielding analyzed first image data; because this data is derived from the video stream of the application scene itself, using it as training data makes the resulting second model better adapted to the current scene. Model upgrade data is then generated from the second model and used to upgrade the current model, producing an updated first model. This improves the adaptability of models deployed in the video monitoring system to the actual environment and the accuracy of the first model's analysis results.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic flow chart diagram of an embodiment of a model upgrade method of the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a model upgrade method according to the present application;
FIG. 3 is a schematic diagram of an embodiment of a video surveillance system;
FIG. 4 is a schematic topology diagram of an embodiment of a video surveillance system of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the related objects before and after it are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a model upgrading method applied to a video surveillance system, including:
S101: Obtain the task to be executed, wherein the task to be executed comprises a path address of the video stream.
Specifically, a task to be executed is received, the task to be executed includes a path address corresponding to the video stream, and the path address is used for extracting the video stream for analysis.
In one application mode, the video monitoring system is deployed in a specific application scene and the front-end camera device collects video streams in real time. A user generates a task to be executed at the client, with the path address of the front-end camera device embedded in it; the client transmits the task to the scheduling service module, which issues it to the main control module.
In another application mode, the video monitoring system is deployed in a specific application scene and the front-end camera device collects video streams in real time. A user generates a task to be executed at the client, with the path address at which the video streams collected by the front-end camera device are stored embedded in it, and the client issues the task directly to the main control module.
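As a concrete illustration, the task structure described in S101 can be sketched as a small data class. All names here (`AnalysisTask`, `stream_path`, the RTSP URL) are illustrative assumptions; the patent only specifies that the task carries the path address of a video stream.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisTask:
    """A task to be executed, carrying the path address of a video stream.

    Field names are illustrative; the patent only specifies that the task
    contains a path address (camera URL or stored-stream location).
    """
    task_id: str
    stream_path: str                 # e.g. an RTSP URL of a front-end camera
    subtask_types: list = field(default_factory=list)

def dispatch(task: AnalysisTask) -> dict:
    """Mimic the client -> scheduling service -> main control hand-off."""
    return {"task_id": task.task_id, "stream_path": task.stream_path}

task = AnalysisTask("t-001", "rtsp://camera-01/stream", ["face", "body"])
handoff = dispatch(task)
```

The scheduling layer only needs the path address to route the task; everything else travels along for the analysis stage.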
S102: and acquiring a corresponding video stream based on the path address, and sending the video stream to the current first model for analysis so as to obtain the analyzed first image data.
Specifically, a video stream corresponding to the path address is acquired from the front-end camera device or the storage unit based on the path address, and the video stream is sent to the current first model, so that the first model analyzes the image frames in the video stream to obtain the analyzed first image data.
In an application mode, a video stream is obtained from a front-end camera device based on a path address, and then the video stream is sent to a first model, so that the first model extracts image frames from the video stream and analyzes the image frames to obtain analyzed first image data.
In another application mode, a video stream is acquired from a cloud server stored in the video stream based on the path address, and the video stream is sent to the first model, so that the first model extracts image frames from the video stream and analyzes the image frames to obtain analyzed first image data.
Further, there may be a plurality of first models, each first model being used to analyze one type of object.
In an application scene, all the first models are used for analyzing pedestrians, and the first models of different types are respectively used for analyzing faces, trunks and gaits so as to obtain analysis results aiming at multiple aspects of the pedestrians, so that the analysis results of the pedestrians are more accurate.
In another application scenario, different types of first models are respectively used for analyzing pedestrians, moving entities (such as vehicles and animals) and static entities (such as plants and buildings) in the image frames to obtain analysis results corresponding to different types of targets, so that the analysis results corresponding to the different types of targets are more accurate.
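The analysis step above, decoding the stream into image frames and running each typed first model over them, can be sketched as follows. The models here are stand-in functions keyed by target type, not the patent's actual networks.

```python
def decode_frames(video_stream):
    """Stand-in for decoding a video stream into image frames."""
    return list(video_stream)

def analyze(frames, first_models):
    """Run each typed first model over the frames (S102).

    `first_models` maps a target type (face, gait, vehicle, ...) to an
    analysis function; both sides are placeholders for illustration.
    """
    return {t: [model(f) for f in frames] for t, model in first_models.items()}

# Hypothetical models: each "analyzes" a frame for one type of object.
models = {
    "face": lambda f: f"face@{f}",
    "gait": lambda f: f"gait@{f}",
}
first_image_data = analyze(decode_frames(["frame0", "frame1"]), models)
```

Keeping one model per object type is what lets each analysis result stay focused, as the two scenarios above describe.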
S103: the base model is trained based on the first image data to obtain a second model, and model upgrade data is generated based on the second model.
Specifically, the basic model is trained by using the first image data as training data to obtain a second model, and the second model is verified and packaged into model upgrading data.
In one application mode, after the first image data is obtained, it is used as training data to train the base model, yielding a trained second model. After the second model is verified, the parameter differences between the second model and the first model are obtained and packaged into model upgrade data. When there are multiple types of first model, the second model matching each training type is used to generate the model upgrade data for that type.
In a specific application scene, the first model and the second model share the same base model. The first image data output by the current first model consists of images already analyzed by that model and adapted to the current application scene; training the base model on this data as new training material makes the resulting second model match the current scene more closely. Moreover, when the first and second models share a base model, the change from the first to the second model can amount to modifying only part of the base model's parameters, so the generated model upgrade data is small and the model upgrade correspondingly efficient.
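When the first and second model share a base model, the upgrade payload can be just the changed parameters. A minimal sketch, representing models as plain parameter dictionaries (an assumption made purely for illustration):

```python
def parameter_diff(first_model, second_model):
    """Package only the parameters that changed between the current first
    model and the newly trained second model. Shipping the diff keeps the
    upgrade payload small, as the description notes."""
    return {k: v for k, v in second_model.items() if first_model.get(k) != v}

def apply_upgrade(first_model, upgrade_data):
    """Apply model upgrade data to the current first model (S104)."""
    upgraded = dict(first_model)
    upgraded.update(upgrade_data)
    return upgraded

first = {"w1": 0.5, "w2": -1.2, "bias": 0.1}
second = {"w1": 0.5, "w2": -0.9, "bias": 0.1}   # only w2 was retrained
upgrade = parameter_diff(first, second)
```

Applying the diff to the first model reproduces the second model exactly, which is why the diff alone suffices as upgrade data.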
S104: and upgrading the current first model by using the model upgrading data to obtain an updated first model.
Specifically, the model upgrade data is transmitted to the current first model to upgrade the first model, so that the updated first model is obtained.
Optionally, after obtaining the updated first model, performing multiple iterative updates on the first model based on the above steps, so as to obtain the first model more matched with the current application scenario.
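The iterative update described above can be sketched as a loop over the train-then-upgrade cycle. `train_step` is a placeholder for training the base model on first image data; the toy step below merely nudges one parameter toward a scene-specific value.

```python
def iterate_upgrades(model, train_step, rounds=3):
    """Repeatedly retrain and upgrade the first model (S103/S104 looped).

    `train_step` stands in for training the base model on the first image
    data produced by the current model; models are plain dictionaries.
    """
    for _ in range(rounds):
        second = train_step(model)        # train base model -> second model
        upgrade = {k: v for k, v in second.items() if model.get(k) != v}
        model = {**model, **upgrade}      # upgrade the current first model
    return model

# Toy training step: move the parameter 0.1 closer to the scene optimum.
result = iterate_upgrades({"w": 0.0}, lambda m: {"w": round(m["w"] + 0.1, 1)})
```

Each pass uses the output of the previous first model, so the model converges toward the current application scene over successive rounds.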
According to the above scheme, the obtained task to be executed comprises the path address of a video stream, so the video stream collected in the application scene is obtained directly. After being fetched via the path address, the video stream is sent to the current first model for analysis to obtain analyzed first image data; since this data is derived from the scene's own video stream, using it as training data for the base model makes the resulting second model better adapted to the current application scene. Model upgrade data is then generated from the second model and used to upgrade the current model, yielding an updated first model. This improves the adaptability of the model deployed in the video monitoring system to the actual environment and the precision of the first model's analysis results.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of the model upgrading method applied to a video surveillance system, including:
S201: Obtain the task to be executed, wherein the task to be executed comprises a path address of the video stream.
Specifically, a task to be executed is received, wherein the task to be executed comprises a path address of a camera device for collecting video streams or a path address of the video streams stored in a cloud.
Further, the task to be executed comprises a plurality of types of subtasks, and each subtask corresponds to one first model. The subtasks include at least two types of a face recognition task, a human body recognition task and an entity recognition task.
Specifically, the client receives the task to be executed issued by the user. The task may include multiple types of subtasks, each executed by a corresponding first model, so the task can be processed at a finer granularity, with each first model executing the subtasks of its type. The face recognition task extracts a face frame for face recognition, the human body recognition task extracts a pedestrian's body frame for gait recognition, and the entity recognition task extracts non-pedestrian target frames for entity recognition.
S202: and extracting all types of subtasks in the task to be executed, and matching the first models corresponding to the types of the subtasks.
Specifically, after the task to be executed is obtained, subtasks in the task to be executed are extracted, subtasks of different types are extracted, and the subtasks are matched with the first model based on the types, so that the subtasks can be submitted to the matched first model for accurate analysis.
In one application mode, the video monitoring system comprises a client, a scheduling service module and a main control module. The client receives the task to be executed and issues it to the scheduling service module, which schedules it to at least one designated main control module. After obtaining the task, the main control module extracts its subtasks and analyzes their types so as to obtain the first model corresponding to each subtask. When the video monitoring system covers multiple areas, each area has a corresponding main control module that manages at least one first model for analyzing that area's video streams.
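The subtask-to-model matching of S202 can be sketched as a registry lookup. The registry keys and model names below are hypothetical; the patent only requires that each subtask type map to one first model.

```python
# Hypothetical mapping from subtask type to its first model.
MODEL_REGISTRY = {
    "face_recognition": "face-model-v1",
    "body_recognition": "body-model-v1",
    "entity_recognition": "entity-model-v1",
}

def match_models(subtask_types):
    """S202: extract subtask types and match each to its first model.

    Unknown types are reported back rather than silently dropped, a
    design choice added here for illustration.
    """
    matched, unknown = {}, []
    for sub in subtask_types:
        if sub in MODEL_REGISTRY:
            matched[sub] = MODEL_REGISTRY[sub]
        else:
            unknown.append(sub)
    return matched, unknown

matched, unknown = match_models(["face_recognition", "gait_recognition"])
```

A per-area main control module would hold one such registry for the first models it manages.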
S203: and acquiring a corresponding video stream based on the path address, and sending the video stream to the current first model for analysis so as to obtain the analyzed first image data.
Specifically, a corresponding video stream is obtained through a path address in the task to be executed, and then the video stream is issued to a first model matched with the type of the subtask for analysis, so as to obtain analyzed first image data.
In an application mode, extracting a corresponding video stream from a front-end camera device based on a path address; based on the type of the subtask in the task to be executed, issuing the video stream to a current first model corresponding to the type; image frames are extracted from the video stream and analyzed using the first model to obtain analyzed first image data.
Specifically, a video stream acquired by the front-end camera device is extracted from the front-end camera device according to the path address, a first model matched with the type of the subtask is obtained according to the type of the subtask in the task to be executed, and the video stream is sent to the current first model corresponding to the type.
Further, the video stream is decoded to obtain image frames in the video stream, and the first models of different types respectively execute the subtasks of corresponding types to obtain first image data obtained after the first models of different types are analyzed.
In one application scenario, the video monitoring system comprises multiple monitoring areas. After the main control module of each area obtains the task to be executed, it fetches, via the path address, the video streams collected by the front-end cameras of its monitoring area, so that the different types of first model in that area analyze the corresponding video streams. Because the first image data output by the different types of first model emphasizes different analysis objects, the overall analysis results are more accurate and richer.
S204: and performing labeling and classification operations on the first image data to obtain a plurality of groups of labeled training image data matched with different types.
Specifically, labeling and classifying operations are performed on first image data output by a first model, so that the first image data are classified based on the types of the subtasks, and labels needing to pay attention during training are labeled on the first image data of different types, so that a plurality of groups of labeled training image data more suitable for training are obtained.
In one application mode, the first image data collected in real time is labeled, and the labeled first image data is divided, by subtask type, into training image data suited to each type of subtask. Because this training image data is obtained, after layer-by-layer processing, from video streams collected in the application scene, it adapts well to that scene; matching it to the subtask types and labeling it makes subsequent training of the base model straightforward.
In one application scene, based on the subtask types of face recognition, human body recognition and entity recognition, the first image data is sorted into face images, human body images and physical-object images. Face images are labeled with age, gender, ethnicity, expression, whether glasses are worn, and face-frame position; human body images with height, pace, torso width and clothing; physical-object images with height, width, static state and outline. This yields multiple groups of labeled training image data matched to the different types, which are used to train more accurate models for each type.
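The labeling and classification of S204 can be sketched as follows. The attribute lists mirror the examples given in the description; the schema keys, type tags, and placeholder `None` values are illustrative assumptions.

```python
# Illustrative per-type label schemas, following the S204 examples.
LABEL_SCHEMA = {
    "face": ["age", "gender", "ethnicity", "expression", "glasses", "face_box"],
    "body": ["height", "pace", "torso_width", "clothing"],
    "entity": ["height", "width", "is_static", "outline"],
}

def label_and_classify(first_image_data):
    """Split first image data into per-type groups and attach the labels
    each group's training expects (label values are placeholders here)."""
    groups = {}
    for item_type, image in first_image_data:
        labels = {attr: None for attr in LABEL_SCHEMA[item_type]}
        groups.setdefault(item_type, []).append({"image": image, "labels": labels})
    return groups

groups = label_and_classify([("face", "img0"), ("body", "img1"), ("face", "img2")])
```

Each group then feeds the base-model training for its own subtask type in S205.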
S205: and respectively training the basic model by utilizing a plurality of groups of marked training image data to obtain a plurality of second models matched with the types.
Specifically, each set of labeled training image data is sent to a basic model for training, so as to obtain a second model corresponding to the type of each set of training image data.
In one application mode, the same neural network model is selected as the base model, and the groups of labeled training image data are each sent to their own copy of the base model for training, with each copy's parameters adjusted according to its subtask type. The trained second models therefore match their corresponding subtask types more closely; and since the second models' training data is collected from the specific application scene, their adaptation to that scene improves further.
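S205, training one second model per subtask type from copies of the same base model, can be sketched with a pluggable training function. The toy `train_fn` below only records how many labeled samples each copy saw; it is a stand-in for real fine-tuning.

```python
def train_second_models(base_model, grouped_training_data, train_fn):
    """Train one second model per subtask type from the same base model.

    `grouped_training_data` maps a subtask type to its labeled samples;
    `train_fn` stands in for the actual fine-tuning procedure. Each type
    gets its own copy of the base model's parameters.
    """
    return {t: train_fn(dict(base_model), data)
            for t, data in grouped_training_data.items()}

second_models = train_second_models(
    {"w": 0.0},
    {"face": ["s1", "s2"], "body": ["s3"]},
    lambda model, data: {**model, "seen": len(data)},  # toy "fine-tune"
)
```

Starting every type from the same base keeps the second models structurally aligned with the first models they will later upgrade.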
S206: model upgrade data is generated based on the second model.
Specifically, the second model is verified to generate model upgrade data for the upgrade based on the second model.
In an application mode, the difference between a second model and a first model corresponding to the same type is obtained, and model upgrading data are generated based on the difference. When the first model and the second model are based on the same basic model, model upgrading data for upgrading the first model is generated based on the difference of parameters in the second model and the first model of the same type, and then upgrading efficiency is improved in subsequent upgrading.
In another application mode, the different types of second model are each packaged whole into model upgrade data. After a second model is verified, packaging the entire model as upgrade data ensures the integrity of the subsequent upgrade and reduces the probability of errors during upgrading.
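The two packaging options in S206 (a parameter diff when the base models match, the full model otherwise) can be sketched in one function; models are again plain dictionaries used only for illustration.

```python
def make_upgrade_data(first_model, second_model, same_base: bool):
    """S206: when the first and second model share a base model, ship only
    the parameter diff; otherwise package the whole second model so the
    replacement is complete and self-contained."""
    if same_base:
        return {"mode": "diff",
                "payload": {k: v for k, v in second_model.items()
                            if first_model.get(k) != v}}
    return {"mode": "full", "payload": dict(second_model)}

pkg_diff = make_upgrade_data({"w": 1.0, "b": 0.0}, {"w": 1.5, "b": 0.0},
                             same_base=True)
pkg_full = make_upgrade_data({"w": 1.0, "b": 0.0}, {"w": 1.5, "b": 0.0},
                             same_base=False)
```

The diff mode trades upgrade-payload size for the assumption of a shared base; the full mode trades size for integrity, matching the two application modes above.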
S207: and upgrading the current first model by using the model upgrading data to obtain an updated first model.
Specifically, the model upgrading data is issued to the current first model according to the type to upgrade the first model, so that the updated first model is obtained.
In an application mode, model upgrading data is generated based on the difference between the second model and the first model, and the first model of the same type is upgraded by using the model upgrading data to obtain an updated first model.
In another application, the model upgrade data is generated based on the second model, and the same type of first model is upgraded by using the model upgrade data to replace the current first model, so as to obtain an updated first model.
Optionally, after the step of upgrading the current first model by using the model upgrading data to obtain the updated first model, the method further includes: acquiring a first time point for updating the first model, and determining a first difference value between the current time point and the first time point; judging whether the first difference value exceeds a first threshold value; if yes, returning to the step of training the basic model based on the first image data to obtain a second model; if not, analyzing the video stream based on the current first model to obtain the analyzed first image data.
Specifically, a first threshold is preset, and when the first model is updated, the first time point of the update is recorded. While the first difference between the current time point and the first time point does not exceed the first threshold, the current first model is used to analyze the video stream, and the resulting first image data can be used for target clustering or surveillance alarms. Once the first difference exceeds the first threshold, the method returns to the step of training the base model on the first image data to obtain a second model, using the first image data as training data, so that the current first model is updated again. Iterating this cycle over the preset period makes the first model match the specific application scene ever more closely.
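The time-threshold check after S207 reduces to a single comparison. The timestamps and the one-day threshold below are hypothetical values chosen for illustration.

```python
def should_retrain(first_time_point, current_time_point, first_threshold):
    """After S207: compute the first difference between the current time
    point and the time of the last upgrade and compare it with the first
    threshold. True -> return to training the base model (S103 path);
    False -> keep analyzing with the current first model."""
    first_difference = current_time_point - first_time_point
    return first_difference > first_threshold

# Hypothetical timestamps in seconds, with a one-day (86 400 s) threshold.
decision = should_retrain(first_time_point=0,
                          current_time_point=90_000,
                          first_threshold=86_400)
```

Recording the first time point at every upgrade makes the retraining cycle self-scheduling: no external trigger is needed beyond the clock.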
In this embodiment, the types of subtasks in the task to be executed are extracted, and the corresponding first models obtained by type are used to analyze the video stream collected in the specific application scene, producing first image data. The first image data is labeled and classified into training image data, the base model is trained to obtain second models corresponding to the types, and model upgrade data is generated. Upgrading the current first model by type with this upgrade data makes the updated first model better matched to the specific application scene, and the analysis of the video streams collected in that scene more accurate.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of a video monitoring system 30 according to the present application, including: a client 300, a dispatch service module 302, a master module 304 and analysis module 306, and a model training module 308. The client 300 is configured to obtain a task to be executed, where the task to be executed includes a path address of a video stream; the scheduling service module 302 is configured to allocate the tasks to be executed to the corresponding main control module 304; the main control module 304 is configured to obtain a corresponding video stream based on the path address, and send the video stream to the current first model in the analysis module 306 for analysis, so as to obtain analyzed first image data; the model training module 308 is configured to train the base model based on the first image data to obtain a second model, and generate model upgrade data based on the second model; the main control module 304 is further configured to upgrade the current first model in the analysis module 306 by using the model upgrade data to obtain an updated first model.
Specifically, the video monitoring system 30 includes at least one main control module 304, each main control module 304 is configured to manage at least one analysis module 306, and each analysis module 306 corresponds to one type of the first model. The scheduling service module 302 is configured to schedule the task to be executed, so that the task to be executed is issued to the corresponding main control module 304. The model training module 308 is configured to train, verify, and package the basic model into model upgrade data by using the first image data after the first model analysis.
In an application manner, please refer to fig. 4, where fig. 4 is a topology diagram of an embodiment of the video monitoring system of the present application, where a user generates a task to be executed at a client 300, and then issues an analysis task to be executed to a scheduling service module 302, the scheduling service module 302 then schedules the task to an assigned main control module 304, the main control module 304 pulls a video stream from a front-end camera device based on a path address, and issues the video stream to a current first model in an analysis module 306 for intelligent analysis, so as to obtain first image data.
Further, the analysis module 306 reports the analysis result to the main control module 304, which reports it to the scheduling service module 302, which in turn reports it to the client 300 for display. At the client 300, labeling and classification operations are performed on the first image data, and the labeled and classified first image data is sent as training image data to the model training module 308, which trains a second model and obtains model upgrade data. The scheduling service module 302 sends the model upgrade data to the corresponding main control module 304, and the main control module 304 sends it to the analysis module 306 to upgrade the current first model. The analysis module 306 then performs intelligent analysis based on the updated first model.
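The label, train, package, and upgrade loop above can be sketched in a few functions. The helpers below are hypothetical stand-ins (a real system would train an actual model); they only illustrate the data flow, using the full-replacement packaging variant.

```python
# Hypothetical sketch of the label -> train -> package -> upgrade loop.
# Function names and the dict-based "model" are illustrative assumptions.

def label_and_classify(first_image_data):
    """Group the analyzed image data by subtask type (labels assumed given)."""
    grouped = {}
    for item in first_image_data:
        grouped.setdefault(item["label"], []).append(item)
    return grouped


def train_second_model(base_model, training_data):
    """Stand-in for training: bump the version and record the sample count."""
    return {"version": base_model["version"] + 1,
            "trained_on": len(training_data)}


def package_upgrade(second_model):
    """Wrap the second model as upgrade data (full-replacement variant)."""
    return {"type": "full", "model": second_model}


def apply_upgrade(analysis_module, upgrade_data):
    """Replace the current first model with the packaged second model."""
    analysis_module["model"] = upgrade_data["model"]
    return analysis_module
```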
In this embodiment, the task to be executed obtained by the client 300 includes the path address of the video stream, so the video stream captured in the application scene can be acquired directly. The main control module 304 acquires the video stream based on the path address and sends it to the current first model in the analysis module 306 for analysis, obtaining the analyzed first image data. Because the first image data is obtained by analyzing the video stream from the actual application scene, the model training module 308 can use it as training data for the base model, so that the resulting second model is better adapted to the current application scene. Model upgrade data is then generated from the second model to upgrade the current model in the analysis module 306, yielding the updated first model. This improves the adaptability of the models used in the video monitoring system 30 to the actual environment, and improves the accuracy of the analysis results obtained with the first model.
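The model upgrade data can be generated either as a difference between the second model and the current first model, or as a full replacement package. A minimal sketch of the difference variant follows; representing a model as a flat dictionary of named weights is an assumption made purely for illustration.

```python
# Sketch of difference-based upgrade data generation and application.
# Models are flat {name: weight} dicts here, which is an illustrative
# simplification, not how the patent represents models.

def make_diff_upgrade(first_model: dict, second_model: dict) -> dict:
    """Keep only the entries that changed between the two models."""
    return {name: w for name, w in second_model.items()
            if first_model.get(name) != w}


def apply_diff_upgrade(first_model: dict, upgrade: dict) -> dict:
    """Patch a copy of the current model with the changed entries."""
    patched = dict(first_model)
    patched.update(upgrade)
    return patched
```

Shipping only the difference keeps the upgrade package small when most weights are unchanged, at the cost of the sender having to know which first model is currently deployed.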
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of an electronic device 50 of the present application, where the electronic device 50 includes a memory 501 and a processor 502 coupled to each other, where the memory 501 stores program data (not shown), and the processor 502 calls the program data to implement the method in any of the above embodiments.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a computer-readable storage medium 60 of the present application. The computer-readable storage medium 60 stores program data 600, and the program data 600, when executed by a processor, implements the method in any of the above embodiments; for details, refer to the above method embodiments, which are not repeated herein.
It should be noted that units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A model upgrading method applied to a video monitoring system, characterized by comprising:
the method comprises the steps of obtaining a task to be executed, wherein the task to be executed comprises a path address of a video stream;
acquiring a corresponding video stream based on the path address, and issuing the video stream to a current first model for analysis so as to obtain analyzed first image data;
training a base model based on the first image data to obtain a second model, and generating model upgrade data based on the second model;
and upgrading the current first model by using the model upgrading data to obtain the updated first model.
2. The model upgrade method according to claim 1,
the task to be executed comprises a plurality of types of subtasks, and each type of subtask corresponds to one first model; the subtasks comprise at least two types of a face recognition task, a human body recognition task and an entity recognition task;
after the step of obtaining the task to be executed, the method comprises the following steps:
and extracting all types of subtasks in the tasks to be executed, and matching the first models corresponding to the types of the subtasks.
3. The model upgrading method according to claim 2, wherein the step of obtaining the corresponding video stream based on the path address, and sending the video stream to the current first model for analysis to obtain the analyzed first image data includes:
extracting a corresponding video stream from the front-end camera device based on the path address;
based on the type of the subtask in the task to be executed, the video stream is issued to the current first model corresponding to the type;
and extracting image frames from the video stream, and analyzing the image frames by using the first model to obtain analyzed first image data.
4. The model upgrade method according to claim 3,
before the step of training a base model based on the first image data to obtain a second model, the method further includes:
labeling and classifying the first image data to obtain a plurality of groups of labeled training image data matched with different types;
the step of training a base model based on the first image data to obtain a second model comprises:
and respectively training the basic model by utilizing a plurality of groups of labeled training image data to obtain a plurality of second models matched with the types.
5. The model upgrade method according to claim 4,
the step of generating model upgrade data based on the second model comprises:
acquiring the difference between the second model and the first model corresponding to the same type, and generating the model upgrading data based on the difference;
the step of upgrading the current first model by using the model upgrading data to obtain the updated first model includes:
and upgrading the first model of the same type by using the model upgrading data to obtain the updated first model.
6. The model upgrade method according to claim 4,
the step of generating model upgrade data based on the second model comprises:
packing different types of second models into the model upgrading data respectively based on the second models;
the step of upgrading the current first model by using the model upgrading data to obtain the updated first model includes:
and upgrading the first model of the same type by using the model upgrading data so as to replace the current first model to obtain the updated first model.
7. The model upgrade method according to claim 1, wherein after the step of upgrading the current first model by using the model upgrade data to obtain the updated first model, the method further comprises:
acquiring a first time point for updating the first model, and determining a first difference value between the current time point and the first time point;
judging whether the first difference value exceeds a first threshold value;
if yes, returning to the step of training a basic model based on the first image data to obtain a second model;
if not, analyzing the video stream based on the current first model to obtain analyzed first image data.
8. A video surveillance system, comprising:
the client is used for obtaining a task to be executed, wherein the task to be executed comprises a path address of a video stream;
the scheduling service module is used for distributing the tasks to be executed to the corresponding main control modules;
the main control module is used for acquiring a corresponding video stream based on the path address, and sending the video stream to a current first model in the analysis module for analysis so as to obtain analyzed first image data;
the model training module is used for training a basic model based on the first image data to obtain a second model and generating model upgrading data based on the second model;
the main control module is further configured to upgrade a current first model in the analysis module by using the model upgrade data to obtain the updated first model.
9. An electronic device, comprising: a memory and a processor coupled to each other, wherein the memory stores program data that the processor calls to perform the method of any of claims 1-7.
10. A computer-readable storage medium, on which program data are stored, which program data, when being executed by a processor, carry out the method of any one of claims 1-7.
CN202110839616.8A 2021-07-23 2021-07-23 Model upgrading method, video monitoring system, electronic equipment and readable storage medium Pending CN113672252A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110839616.8A CN113672252A (en) 2021-07-23 2021-07-23 Model upgrading method, video monitoring system, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN113672252A true CN113672252A (en) 2021-11-19

Family

ID=78540026



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569911A (en) * 2019-09-11 2019-12-13 深圳绿米联创科技有限公司 Image recognition method, device, system, electronic equipment and storage medium
CN111062479A (en) * 2019-12-19 2020-04-24 北京迈格威科技有限公司 Model rapid upgrading method and device based on neural network
CN111753606A (en) * 2019-07-04 2020-10-09 杭州海康威视数字技术股份有限公司 Intelligent model upgrading method and device
CN112214639A (en) * 2020-10-29 2021-01-12 Oppo广东移动通信有限公司 Video screening method, video screening device and terminal equipment
CN113095434A (en) * 2021-04-27 2021-07-09 深圳市商汤科技有限公司 Target detection method and device, electronic equipment and storage medium
WO2021138855A1 (en) * 2020-01-08 2021-07-15 深圳市欢太科技有限公司 Model training method, video processing method and apparatus, storage medium and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination