CN112036236A - GhostNet-based detection model training method, device and medium - Google Patents


Info

Publication number
CN112036236A
Authority
CN
China
Prior art keywords
model
ghostnet
detection model
initial
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010707926.XA
Other languages
Chinese (zh)
Other versions
CN112036236B (en)
Inventor
冯落落
李锐
金长新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Science Research Institute Co Ltd
Original Assignee
Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Inspur Hi Tech Investment and Development Co Ltd filed Critical Jinan Inspur Hi Tech Investment and Development Co Ltd
Priority to CN202010707926.XA priority Critical patent/CN112036236B/en
Publication of CN112036236A publication Critical patent/CN112036236A/en
Application granted granted Critical
Publication of CN112036236B publication Critical patent/CN112036236B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/94 - Hardware or software architectures specially adapted for image or video understanding
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a GhostNet-based detection model training method, device, and medium, comprising: building an initial GhostNet model in advance, and training the initial GhostNet model according to a pre-constructed first data set to obtain a GhostNet model meeting the requirements; determining an initial detection model according to the qualified GhostNet model and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module and the GhostNet model comprises Ghost BottleNeck modules; and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements. In the embodiments of the application, the detection model is built from the GhostNet model and the SSD model and its size is optimized, so that the detection model can be better deployed in external devices to complete target detection tasks.

Description

GhostNet-based detection model training method, device and medium
Technical Field
The application relates to the technical field of computers, and in particular to a GhostNet-based detection model training method, device, and medium.
Background
Target detection is a popular direction in computer vision and digital image processing. It is widely applied in fields such as robot navigation, intelligent video surveillance, industrial inspection, and aerospace, and reducing the consumption of human capital through computer vision has important practical significance.
Although target detection in the prior art has made major breakthroughs, existing target detection models are large and are not easy to deploy in devices with a small operating memory.
Disclosure of Invention
In view of this, embodiments of the present application provide a GhostNet-based detection model training method, device, and medium, which are used to solve the problem that existing target detection models are large and not easy to deploy in devices with a small operating memory.
The embodiment of the application adopts the following technical scheme:
the embodiment of the application provides a GhostNet-based detection model training method, which is characterized by comprising the following steps:
pre-building an initial GhostNet model, and training the initial GhostNet model according to a pre-constructed first data set to obtain a GhostNet model meeting the requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
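The three steps above can be sketched as a minimal pipeline. This is a hedged illustration only: the `train` helper and the data-set labels are placeholders standing in for real training machinery, not anything specified by the patent.

```python
def train(model, dataset):
    # stand-in for a real training loop; it only records what was trained on
    return f"{model} trained on {dataset}"

# Step 1: pre-train the GhostNet backbone on the first data set.
backbone = train("initial GhostNet", "first data set")

# Step 2: build the initial detector by swapping SSD's VGG16 module
# for the trained GhostNet backbone.
detector = f"SSD(backbone={backbone})"

# Step 3: train the whole detector on the second data set.
detector = train(detector, "second data set")
print(detector)
# SSD(backbone=initial GhostNet trained on first data set) trained on second data set
```

The point of the sketch is the ordering: the backbone is trained before it replaces VGG16, and the assembled detector is then trained again end to end.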
It should be noted that, in the embodiments of the present application, the detection model is constructed from the GhostNet model and the SSD model, and the size of the detection model is optimized, so that the detection model can be better deployed in external devices to complete the target detection task.
Further, the GhostNet model comprises a convolutional layer, a pooling layer, and a fully-connected layer, wherein the convolutional layer comprises a plurality of Ghost BottleNeck modules.
The structure of the GhostNet model is specifically disclosed above.
Further, the determining an initial detection model according to the qualified GhostNet model and the pre-trained SSD model specifically includes:
removing preset parts of the convolutional layer, the pooling layer and the full-connection layer from the GhostNet model meeting the requirements;
replacing a VGG16 module in the SSD model with the GhostNet model with the preset parts of the convolutional layer, the pooling layer and the full-connection layer removed;
and replacing the convolution of a preset part in the SSD model with a preset value to determine an initial detection model.
It should be noted that the above specifically discloses the steps of determining the initial detection model from the GhostNet model and the SSD model. The model obtained by training VGG16 is very large; if such an SSD model is deployed directly in a device, it occupies too much of the device's storage space and computational resources. The GhostNet model introduces the DepthWise convolution operation, which greatly reduces the space and computing resources the model occupies. It is precisely for this effect that the GhostNet model, with the preset parts of the convolutional layer, the pooling layer, and the fully-connected layer removed, replaces the VGG16 module in the SSD model.
Further, the GhostNet model specifically comprises: Conv2d 3x3, a plurality of Ghost BottleNeck modules, Conv2d 1x1, AvgPool 7x7, Conv2d 1x1, and a fully connected layer.
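As a rough illustration of how this layer sequence processes an input image, the sketch below tracks the spatial resolution through the stride-2 stages. The stage list and the 224x224 input size are assumptions consistent with the GhostNet paper, not details stated in this text; what the sketch shows is why the resolution reaching 7x7 lets the AvgPool 7x7 act as global pooling.

```python
# Illustrative stage list: (name, stride). Only the strides matter here;
# per-stage channel widths are not given in this text.
stages = [
    ("Conv2d 3x3 stem", 2),
    ("Ghost BottleNeck stage 1", 2),
    ("Ghost BottleNeck stage 2", 2),
    ("Ghost BottleNeck stage 3", 2),
    ("Ghost BottleNeck stage 4", 2),
    ("Conv2d 1x1", 1),
]

def spatial_size(input_size, stages):
    size = input_size
    for _name, stride in stages:
        size //= stride  # each stride-2 stage halves the resolution
    return size

print(spatial_size(224, stages))  # 7, matching the AvgPool 7x7 window
```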
The specific structure of the GhostNet model is specifically disclosed above.
Further, the determining an initial detection model according to the qualified GhostNet model and the pre-trained SSD model specifically includes:
removing the Conv2d 1x1, the AvgPool 7x7, the Conv2d 1x1, and the fully-connected layer from the qualified GhostNet model;
replacing the VGG16 module in the SSD model with the GhostNet model from which the Conv2d 1x1, the AvgPool 7x7, the Conv2d 1x1, and the fully-connected layer have been removed;
the convolution with convolution value 3x3x (6x (classes +4)) in the SSD model is replaced with conv3x3x (4x (classes +4)) so that the convolution with convolution value conv3x3x (4x (classes +4)) performs convolution operation on the last layer of the feature map and determines the initial detection model.
It should be noted that the above specifically discloses a specific step of determining an initial detection model by using a GhostNet model and an SSD model.
Further, the first data set is an ImageNet data set, and the second data set is a COCO data set.
It should be noted that the ImageNet data set is a computer vision data set, and an initial GhostNet model can be well trained. The COCO data set is a data set that can be used for image detection. The initial detection model is trained through the data set, so that the training effect of the detection model is better.
Further, after training the initial detection model according to the pre-constructed second data set to obtain a detection model meeting the requirements, the method further includes:
inputting a target image into the detection model meeting the requirements;
and determining the detection result of the target image according to the detection model meeting the requirements.
Further, the method further comprises:
deploying a pre-trained detection model into AR glasses;
and obtaining a target image through a detection model in the AR glasses, and determining a detection result of the target image.
It should be noted that the embodiments of the present specification may deploy the detection model in a device in virtual reality, such as AR glasses. Therefore, various objects can be distinguished in the equipment, so that the AR glasses are more intelligent.
The embodiment of the present application further provides a training device for a detection model based on GhostNet, which is characterized in that the device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
pre-building an initial GhostNet model, and training the initial GhostNet model according to a pre-constructed first data set to obtain a GhostNet model meeting the requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
The embodiment of the application further provides a training medium for a detection model based on the GhostNet, which stores computer executable instructions, and is characterized in that the computer executable instructions are set as:
pre-building an initial GhostNet model, and training the initial GhostNet model according to a pre-constructed first data set to obtain a GhostNet model meeting the requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects: according to the embodiment of the application, the detection model is built through the GhostNet model and the SSD model, and the size of the detection model is optimized, so that the detection model can be better deployed in external equipment to complete a target detection task.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a method for training a detection model based on GhostNet according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a GhostNet model provided in an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a Ghost BottleNeck module provided in an embodiment of the present specification;
fig. 4 is a schematic structural diagram of an SSD model provided in an embodiment of the present specification;
FIG. 5 is a schematic diagram of a scenario provided by an embodiment of the present specification;
FIG. 6 is a block diagram of a Ghost module provided in an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a method for training a detection model based on GhostNet according to a second embodiment of the present specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a method for training a detection model based on GhostNet according to an embodiment of the present disclosure, where the method includes:
and S101, pre-building an initial GhostNet model, training the initial GhostNet model according to an initial first data set, and obtaining the GhostNet model meeting the requirements.
Referring to fig. 2, which shows a schematic structural diagram of the GhostNet model, G-bneck denotes a Ghost BottleNeck module. The GhostNet model includes a convolutional layer, a pooling layer, and a fully-connected layer, where the convolutional layer includes a plurality of Ghost BottleNeck modules. The GhostNet model specifically comprises: Conv2d 3x3, a plurality of Ghost BottleNeck modules, Conv2d 1x1, AvgPool 7x7, Conv2d 1x1, and a fully connected layer. Referring to fig. 3, which shows a schematic structural diagram of the Ghost BottleNeck module, the Ghost BottleNeck module with stride 2 both learns features and performs downsampling. The structure of the Ghost BottleNeck module is very similar to that of a ResNet block, except that the channels are first expanded and then reduced.
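The expand-then-reduce behaviour described above can be sketched as a shape calculation. The concrete channel and resolution numbers below are illustrative assumptions, not values from this text:

```python
def ghost_bottleneck_shape(h, w, mid_ch, out_ch, stride):
    # first Ghost module expands the channels (spatial size unchanged)
    shape = (h, w, mid_ch)
    # in the stride-2 variant, a depthwise conv downsamples the feature map
    if stride == 2:
        shape = (shape[0] // 2, shape[1] // 2, mid_ch)
    # second Ghost module reduces the channels back down
    return (shape[0], shape[1], out_ch)

# stride-2 block: learns features AND halves the resolution
print(ghost_bottleneck_shape(56, 56, mid_ch=48, out_ch=24, stride=2))  # (28, 28, 24)
# stride-1 block: resolution unchanged, ResNet-like residual shape
print(ghost_bottleneck_shape(56, 56, mid_ch=48, out_ch=16, stride=1))  # (56, 56, 16)
```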
Step S102: determine an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model.
In step S102 in the embodiment of this specification, this step may specifically include:
removing preset parts of the convolutional layer, the pooling layer and the full-connection layer from the GhostNet model meeting the requirements;
replacing a VGG16 module in the SSD model with the GhostNet model with the preset parts of the convolutional layer, the pooling layer and the full-connection layer removed;
and replacing the convolution of a preset part in the SSD model with a preset value to determine an initial detection model. The convolution of the preset part in the SSD model is replaced by the preset value so as to adapt to the replaced GhostNet model, and the detection task can be better completed by the detection model.
For the specific structure of the SSD model, refer to fig. 4. The SSD model uses different feature maps for classification and regression in order to better recognize objects of different sizes: large feature maps are better at identifying small objects, and small feature maps are better at identifying large objects. However, the model obtained by training VGG16 is very large, and if such an SSD model is deployed directly in a device, it occupies too much of the device's storage space and computational resources. The GhostNet model introduces the DepthWise convolution operation, namely the operation performed by the Ghost BottleNeck module. Specifically, redundancy exists between the feature maps of a trained VGG16 network: as shown in the scenario diagram of fig. 5, each pair of connected boxes exhibits similarity between feature maps. Such similar feature maps do not need to be produced by a conventional convolution operation; they can be obtained with a DepthWise convolution operation instead. As shown in the structural diagram of the Ghost module in fig. 6, half of the feature maps are first obtained with a conventional convolution, and the other half are then obtained by applying a DepthWise convolution to that first half. This greatly reduces the redundancy between feature maps and saves computation in the detection model.
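A minimal NumPy sketch of the Ghost module idea just described: half of the output maps come from a conventional convolution over all input channels, and the other half are derived from those maps with a cheap per-map (DepthWise) operation. All shapes and kernels here are illustrative assumptions, not the patent's actual configuration.

```python
import numpy as np

def conv2d_single(x, k):
    """Valid 2-D correlation of one feature map x with one kernel k."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def ghost_module(x, primary_kernels, cheap_kernels):
    # "primary" half: conventional convolution summing over all input channels
    primary = [sum(conv2d_single(x[c], pk[c]) for c in range(x.shape[0]))
               for pk in primary_kernels]
    # "cheap" half: one depthwise kernel applied to ONE primary map each
    # (padded so both halves keep the same spatial size)
    cheap = [conv2d_single(np.pad(p, 1), ck)
             for p, ck in zip(primary, cheap_kernels)]
    return np.stack(primary + cheap)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))                   # 3-channel input
primary_kernels = rng.normal(size=(2, 3, 3, 3))  # 2 conventional 3x3 kernels
cheap_kernels = rng.normal(size=(2, 3, 3))       # 2 depthwise 3x3 kernels
y = ghost_module(x, primary_kernels, cheap_kernels)
print(y.shape)  # (4, 6, 6): 2 primary + 2 cheap output maps
```

The cheap half costs one 2-D correlation per map instead of a full multi-channel convolution, which is where the savings in computation come from.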
Unlike a conventional convolution, one convolution kernel of a DepthWise convolution is responsible for exactly one channel, and each channel is convolved by only one kernel, whereas in a conventional convolution each kernel operates on every channel of the input picture simultaneously. For example, for a 5x5-pixel, three-channel color input picture, the DepthWise convolution operates entirely within each two-dimensional plane: the number of convolution kernels equals the number of channels of the previous layer (channels and kernels correspond one to one), so operating on one three-channel image produces 3 feature maps.
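The per-channel behaviour described in this paragraph can be made concrete with a small NumPy sketch, using the illustrative shapes from the text (a 5x5, three-channel input) plus assumed 3x3 kernels:

```python
import numpy as np

def depthwise_conv(x, kernels):
    """Each kernel convolves exactly one input channel (valid correlation)."""
    assert x.shape[0] == kernels.shape[0]  # one kernel per channel
    kh, kw = kernels.shape[1:]
    oh, ow = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    out = np.zeros((x.shape[0], oh, ow))
    for c in range(x.shape[0]):      # channels are processed independently,
        for i in range(oh):          # each entirely within its own 2-D plane
            for j in range(ow):
                out[c, i, j] = (x[c, i:i + kh, j:j + kw] * kernels[c]).sum()
    return out

x = np.ones((3, 5, 5))        # 5x5-pixel, three-channel input picture
kernels = np.ones((3, 3, 3))  # number of kernels equals number of channels
maps = depthwise_conv(x, kernels)
print(maps.shape)  # (3, 3, 3): one three-channel image yields 3 feature maps
```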
Further, according to the GhostNet model meeting the requirements and the pre-trained SSD model, determining an initial detection model, specifically comprising:
removing the Conv2d1x1, the AvgPool 7x7, the Conv2d1x1 and the full connection layer from the qualified GhostNet model;
replacing the VGG16 module in the SSD model with the GhostNet model with the Conv2d1x1, the AvgPool 7x7, the Conv2d1x1 and the full connectivity layer removed;
the convolution with convolution value 3x3x (6x (classes +4)) in the SSD model is replaced with conv3x3x (4x (classes +4)) so that the convolution with convolution value conv3x3x (4x (classes +4)) performs convolution operation on the last layer of the feature map and determines the initial detection model.
It should be noted that the first data set may be an ImageNet data set. The ImageNet data set is a computer vision data set that includes a large number of pictures and a number of Synset indices, where a Synset is a node in the WordNet hierarchy. The pictures in the ImageNet data set cover most categories of pictures seen in everyday life.
Step S103: train the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
Corresponding to the first embodiment, fig. 7 is a schematic flowchart of a GhostNet-based detection model training method according to a second embodiment of the present specification. The execution unit of the target detection system in this embodiment may perform the following steps:
step S201, an initial GhostNet model is set up in advance, and the initial GhostNet model is trained according to an initial first data set to obtain the GhostNet model meeting the requirements.
Referring to fig. 2, which shows a schematic structural diagram of the GhostNet model, G-bneck denotes a Ghost BottleNeck module. The GhostNet model includes a convolutional layer, a pooling layer, and a fully-connected layer, where the convolutional layer includes a plurality of Ghost BottleNeck modules. The GhostNet model specifically comprises: Conv2d 3x3, a plurality of Ghost BottleNeck modules, Conv2d 1x1, AvgPool 7x7, Conv2d 1x1, and a fully connected layer. Referring to fig. 3, which shows a schematic structural diagram of the Ghost BottleNeck module, the Ghost BottleNeck module with stride 2 both learns features and performs downsampling. The structure of the Ghost BottleNeck module is very similar to that of a ResNet block, except that the channels are first expanded and then reduced.
Step S202: determine an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model.
In step S202 in the embodiment of this specification, this step may specifically include:
removing preset parts of the convolutional layer, the pooling layer and the full-connection layer from the GhostNet model meeting the requirements;
replacing a VGG16 module in the SSD model with the GhostNet model with the preset parts of the convolutional layer, the pooling layer and the full-connection layer removed;
and replacing the convolution of a preset part in the SSD model with a preset value to determine an initial detection model. The convolution of the preset part in the SSD model is replaced by the preset value so as to adapt to the replaced GhostNet model, and the detection task can be better completed by the detection model.
For the specific structure of the SSD model, refer to fig. 4. The SSD model uses different feature maps for classification and regression in order to better recognize objects of different sizes: large feature maps are better at identifying small objects, and small feature maps are better at identifying large objects. However, the model obtained by training VGG16 is very large, and if such an SSD model is deployed directly in a device, it occupies too much of the device's storage space and computational resources. The GhostNet model introduces the DepthWise convolution operation, namely the operation performed by the Ghost BottleNeck module. Specifically, redundancy exists between the feature maps of a trained VGG16 network: as shown in fig. 5, each pair of connected boxes exhibits similarity between feature maps. Such similar feature maps do not need to be produced by a conventional convolution operation; they can be obtained with a DepthWise convolution operation instead. As shown in the structural diagram of the Ghost module in fig. 6, half of the feature maps are first obtained with a conventional convolution, and the other half are then obtained by applying a DepthWise convolution to that first half. This greatly reduces the redundancy between feature maps and saves computation in the detection model.
Unlike a conventional convolution, one convolution kernel of a DepthWise convolution is responsible for exactly one channel, and each channel is convolved by only one kernel, whereas in a conventional convolution each kernel operates on every channel of the input picture simultaneously. For example, for a 5x5-pixel, three-channel color input picture, the DepthWise convolution operates entirely within each two-dimensional plane: the number of convolution kernels equals the number of channels of the previous layer (channels and kernels correspond one to one), so operating on one three-channel image produces 3 feature maps.
Further, according to the GhostNet model meeting the requirements and the pre-trained SSD model, determining an initial detection model, specifically comprising:
removing the Conv2d 1x1, the AvgPool 7x7, the Conv2d 1x1, and the fully-connected layer from the qualified GhostNet model;
replacing the VGG16 module in the SSD model with the GhostNet model from which the Conv2d 1x1, the AvgPool 7x7, the Conv2d 1x1, and the fully-connected layer have been removed;
the convolution with convolution value 3x3x (6x (classes +4)) in the SSD model is replaced with conv3x3x (4x (classes +4)) so that the convolution with convolution value conv3x3x (4x (classes +4)) performs convolution operation on the last layer of the feature map and determines the initial detection model.
It should be noted that the first data set may be an ImageNet data set. The ImageNet data set is a computer vision data set that includes a large number of pictures and a number of Synset indices, where a Synset is a node in the WordNet hierarchy. The pictures in the ImageNet data set cover most categories of pictures seen in everyday life.
Step S203: train the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
In step S203 of the embodiment of the present specification, the second data set may be a COCO data set. COCO, short for Common Objects in Context, is a data set that can be used for image detection. Training the initial detection model with this data set yields a better training effect.
Step S204: input the target image into the detection model meeting the requirements.
Step S205: determine the detection result of the target image according to the detection model.
Further, the executing step of the embodiment of the present specification may further include:
deploying a pre-trained detection model into AR glasses;
and obtaining a target image through the detection model in the AR glasses and determining the detection result of the target image, so that the detection model can be better deployed in the AR glasses.
Embodiments of the present description may deploy the detection model in a device in virtual reality, such as AR glasses. Therefore, various objects can be distinguished in the equipment, so that the AR glasses are more intelligent.
The embodiment of the present application further provides a training device for a detection model based on GhostNet, which is characterized in that the device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
pre-building an initial GhostNet model, and training the initial GhostNet model according to a pre-constructed first data set to obtain a GhostNet model meeting the requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
The embodiment of the application further provides a training medium for a detection model based on the GhostNet, which stores computer executable instructions, and is characterized in that the computer executable instructions are set as:
pre-building an initial GhostNet model, and training the initial GhostNet model according to a pre-constructed first data set to obtain a GhostNet model meeting the requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
In the 1990s, an improvement of a technology could be clearly distinguished as an improvement in hardware (for example, an improvement of a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement of a method flow). However, as technology develops, many of today's improvements of method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by a user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this kind of programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
It will also be apparent to those skilled in the art that hardware circuitry that implements the logical method flows can be readily obtained by merely slightly programming the method flows into an integrated circuit using the hardware description languages described above.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; the memory controller may also be implemented as part of the control logic for the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming the method steps such that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered as structures within the hardware component. Or even the means for performing the functions may be regarded as being both a software module for performing the method and a structure within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, each described separately. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described briefly because it is substantially similar to the method embodiment; for relevant points, reference may be made to the corresponding description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A GhostNet-based detection model training method is characterized by comprising the following steps:
pre-building an initial GhostNet model, and training the initial GhostNet model according to the initial first data set to obtain a GhostNet model meeting the requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
2. The method for training a GhostNet-based detection model according to claim 1, wherein the GhostNet model comprises a convolutional layer, a pooling layer and a fully-connected layer, wherein the convolutional layer comprises a plurality of GhostBottleNeck modules.
3. The method for training a detection model based on GhostNet according to claim 2, wherein the determining an initial detection model according to the qualified GhostNet model and a pre-trained SSD model specifically comprises:
removing preset parts of the convolutional layer, the pooling layer and the full-connection layer from the GhostNet model meeting the requirements;
replacing a VGG16 module in the SSD model with the GhostNet model with the preset parts of the convolutional layer, the pooling layer and the full-connection layer removed;
and replacing the convolution of a preset part in the SSD model with a preset value to determine an initial detection model.
4. The GhostNet-based detection model training method of claim 3, wherein the GhostNet model specifically comprises: Conv2d 3x3, a plurality of Ghost BottleNeck modules, Conv2d 1x1, AvgPool 7x7, Conv2d 1x1, and a fully-connected layer.
5. The method for training a detection model based on GhostNet according to claim 4, wherein the determining an initial detection model according to the qualified GhostNet model and a pre-trained SSD model specifically comprises:
removing the Conv2d 1x1, the AvgPool 7x7, the Conv2d 1x1 and the fully-connected layer from the GhostNet model meeting the requirements;
replacing the VGG16 module in the SSD model with the GhostNet model from which the Conv2d 1x1, the AvgPool 7x7, the Conv2d 1x1 and the fully-connected layer have been removed;
and replacing the convolution of size 3x3x(6x(classes+4)) in the SSD model with a convolution of size 3x3x(4x(classes+4)), so that the 3x3x(4x(classes+4)) convolution performs the convolution operation on the last layer of the feature map, thereby determining the initial detection model.
6. The method for training a GhostNet-based detection model according to claim 1, wherein the first data set is an ImageNet data set and the second data set is a COCO data set.
7. The method for training a GhostNet-based detection model according to claim 1, wherein after training the initial detection model according to the pre-constructed second data set to obtain a detection model meeting the requirements, the method further comprises:
inputting a target image into the detection model meeting the requirements;
and determining a detection result of the target image according to the detection model meeting the requirements.
8. The method for training a GhostNet-based detection model according to claim 1, further comprising:
deploying a pre-trained detection model into AR glasses;
and obtaining a target image through a detection model in the AR glasses, and determining a detection result of the target image.
9. A GhostNet-based training device for a detection model, the device comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
pre-building an initial GhostNet model, and training the initial GhostNet model according to an initial first data set to obtain a GhostNet model meeting requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
10. A training medium for a GhostNet-based detection model, having stored thereon computer-executable instructions configured to:
pre-building an initial GhostNet model, and training the initial GhostNet model according to an initial first data set to obtain a GhostNet model meeting requirements;
determining an initial detection model according to the GhostNet model meeting the requirements and a pre-trained SSD model, wherein the SSD model comprises a VGG16 module, and the GhostNet model comprises a GhostBottleNeck module;
and training the initial detection model according to a pre-constructed second data set to obtain a detection model meeting the requirements.
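Claims 4 and 5 pin down the backbone swap concretely: the GhostNet tail (Conv2d 1x1, AvgPool 7x7, Conv2d 1x1, fully-connected layer) is removed, the remainder replaces SSD's VGG16 backbone, and the 6-anchor prediction convolution of size 3x3x(6x(classes+4)) becomes a 4-anchor 3x3x(4x(classes+4)) convolution. The parameter arithmetic behind these choices can be sketched as follows — a hedged illustration, not the patent's implementation; the depthwise kernel size `d`, ratio `s`, and the 21-class count are assumed values following the cited GhostNet and SSD papers:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, d=3, s=2):
    """Parameter count of a Ghost module producing c_out channels:
    a primary k x k convolution generates c_out / s intrinsic feature
    maps, then (s - 1) cheap d x d depthwise convolutions generate the
    remaining "ghost" maps (one d x d filter per intrinsic map)."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * (s - 1) * d * d
    return primary + cheap

def ssd_head_channels(num_anchors, num_classes):
    """Output channels of an SSD 3x3 prediction convolution:
    per anchor box, num_classes confidence scores plus 4 box offsets."""
    return num_anchors * (num_classes + 4)

# A 256 -> 256 3x3 layer: the Ghost module needs roughly half the parameters.
standard = conv_params(256, 256, 3)   # 589824
ghost = ghost_params(256, 256, 3)     # 296064, about a 2x reduction
print(standard, ghost)

# Claim 5's head swap, with a hypothetical 21-class (20 + background) setting:
print(ssd_head_channels(6, 21))  # 150 channels for 3x3x(6x(classes+4))
print(ssd_head_channels(4, 21))  # 100 channels for 3x3x(4x(classes+4))
```

Shrinking the per-location anchor count from 6 to 4 reduces both the head's channel count and its inference cost, which is consistent with the lightweight AR-glasses deployment contemplated in claim 8.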
CN202010707926.XA 2020-07-22 2020-07-22 Image detection method, device and medium based on GhostNet Active CN112036236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010707926.XA CN112036236B (en) 2020-07-22 2020-07-22 Image detection method, device and medium based on GhostNet


Publications (2)

Publication Number Publication Date
CN112036236A true CN112036236A (en) 2020-12-04
CN112036236B CN112036236B (en) 2023-07-14

Family

ID=73581906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010707926.XA Active CN112036236B (en) 2020-07-22 2020-07-22 Image detection method, device and medium based on GhostNet

Country Status (1)

Country Link
CN (1) CN112036236B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381076A (en) * 2021-01-18 2021-02-19 西南石油大学 Method for preprocessing picture in video significance detection task
CN113011365A (en) * 2021-03-31 2021-06-22 中国科学院光电技术研究所 Target detection method combined with lightweight network
CN113421222A (en) * 2021-05-21 2021-09-21 西安科技大学 Lightweight coal gangue target detection method
CN113421230A (en) * 2021-06-08 2021-09-21 浙江理工大学 Vehicle-mounted liquid crystal display light guide plate defect visual detection method based on target detection network
CN114120046A (en) * 2022-01-25 2022-03-01 武汉理工大学 Lightweight engineering structure crack identification method and system based on phantom convolution
CN114202731A (en) * 2022-02-15 2022-03-18 南京天创电子技术有限公司 Multi-state knob switch identification method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170053019A1 (en) * 2015-08-17 2017-02-23 Critical Informatics, Inc. System to organize search and display unstructured data
CN106971230A (en) * 2017-05-10 2017-07-21 中国石油大学(北京) First break pickup method and device based on deep learning
CN108985169A (en) * 2018-06-15 2018-12-11 浙江工业大学 Across the door operation detection method in shop based on deep learning target detection and dynamic background modeling
CN110705338A (en) * 2018-07-10 2020-01-17 浙江宇视科技有限公司 Vehicle detection method and device and monitoring equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAI HAN ET AL.: "GhostNet: More Features from Cheap Operations", 《ARXIV:1911.11907V2》 *
WEI LIU ET AL.: "SSD: Single Shot MultiBox Detector", 《ARXIV:1512.02325V5》 *


Also Published As

Publication number Publication date
CN112036236B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN112036236B (en) Image detection method, device and medium based on GhostNet
JP6793838B2 (en) Blockchain-based data processing methods and equipment
CN109034183B (en) Target detection method, device and equipment
CN111540035B (en) Particle rendering method, device and equipment
CN109947643B (en) A/B test-based experimental scheme configuration method, device and equipment
CN111652351A (en) Deployment method, device and medium of neural network model
CN112308113A (en) Target identification method, device and medium based on semi-supervision
CN110806847A (en) Distributed multi-screen display method, device, equipment and system
CN113011483A (en) Method and device for model training and business processing
CN115981870A (en) Data processing method and device, storage medium and electronic equipment
CN111191090B (en) Method, device, equipment and storage medium for determining service data presentation graph type
CN112528614A (en) Table editing method and device and electronic equipment
CN108563412B (en) Numerical value change display method and device
CN112949642B (en) Character generation method and device, storage medium and electronic equipment
CN111898615A (en) Feature extraction method, device, equipment and medium of object detection model
CN112036434A (en) Training method of target detection model
CN114283268A (en) Three-dimensional model processing method, device, equipment and medium
CN113360154A (en) Page construction method, device, equipment and readable medium
CN111596946A (en) Recommendation method, device and medium for intelligent contracts of block chains
CN107645541B (en) Data storage method and device and server
CN111984247A (en) Service processing method and device and electronic equipment
CN115098271B (en) Multithreading data processing method, device, equipment and medium
CN115017915B (en) Model training and task execution method and device
CN116167437B (en) Chip management system, method, device and storage medium
CN107015792B (en) Method and equipment for realizing chart unified animation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230613

Address after: 250101 building S02, 1036 Chaochao Road, high tech Zone, Jinan City, Shandong Province

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: Floor 6, Chaochao Road, Shandong Province

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

GR01 Patent grant