CN109543697A - RGB-D image target recognition method based on deep learning - Google Patents
RGB-D image target recognition method based on deep learning
- Publication number
- CN109543697A (application CN201811372149.7A)
- Authority
- CN
- China
- Prior art keywords
- feature
- image
- network
- rgb
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses an RGB-D image target recognition method based on deep learning, comprising the following steps: 1. Obtain low-dimensional features of the RGB image and the depth image using sparse autoencoders. 2. Further extract high-dimensional features of the RGB image and the depth image using convolutional networks, and merge the obtained high-dimensional features to obtain a fusion feature. 3. Feed the fused feature into a classifier for training, tune the parameters until classification performance is optimal, transplant the tuned network into a target detection framework, train the entire detection framework, and use the trained model for target recognition. The invention makes full use of both the RGB features and the depth features of an RGB-D image; the network model is small and convenient to port to embedded platforms.
Description
Technical field
The invention belongs to the field of image processing, and in particular relates to an RGB-D image target recognition method based on deep learning.
Background art
Target recognition refers to identifying and classifying specific objects in images using computer vision and image processing techniques: a computer is first trained, then used to judge unknown images. Driven by deep learning, the field has made enormous progress and is widely applied in the military, public security, agriculture, and industrial production. In the military, unmanned aerial vehicles use target recognition; in public security, so does airport screening; in agriculture, crop species are identified automatically; in industry, intelligent part-assembly systems rely on it. Daily life is also inseparable from target recognition: smart homes, a current research hotspot, employ face recognition, fingerprint recognition, speech recognition, and odor recognition. Existing target recognition technology mostly relies on RGB images, but because RGB and grayscale images carry limited information, it increasingly fails to meet the accuracy demands of contemporary industrial applications. With the wide adoption of Kinect cameras, captured pictures now contain depth information: such cameras record high-quality synchronized video comprising an RGB image and a depth image (an RGB-D image), which can reflect object shape. Extracting and exploiting the depth information can greatly improve recognition accuracy. RGB images and depth images are effective complements to each other, so object recognition based on RGB-D images can markedly improve recognition accuracy.
Current deep-learning approaches to target recognition on RGB-D images mainly either treat the RGB-D image as a four-channel image fed into a neural network, or first extract features from the RGB and depth information separately with sparse autoencoders and then extract and fuse high-dimensional features with identical RNNs. These methods ignore the difference in content between RGB information and depth information: processing both kinds of information with the same network structure prevents the extracted RGB and depth features from fully reflecting their respective characteristics. Moreover, the network structures used are relatively complex, so the resulting models occupy considerable storage, which hinders porting them to embedded platforms.
Summary of the invention
The purpose of the present invention is to provide an RGB-D image target recognition method with higher recognition accuracy and a relatively simple model.
To achieve the above object, the invention adopts the following technical scheme:
An RGB-D image target recognition method based on deep learning, comprising the following steps:
Step 1: obtain low-dimensional features of the RGB image and the depth image using sparse autoencoders;
Step 2: further extract high-dimensional features of the RGB image and the depth image using convolutional networks, and merge the obtained high-dimensional features to obtain a fusion feature;
Step 3: feed the fused feature into a classifier for training, tune the parameters until classification performance is optimal, transplant the tuned network into a target detection framework, train the entire detection framework, and use the trained model for target recognition.
In step 1, the sparse autoencoder comprises an input layer, a middle layer, and an output layer. It learns the mapping between input and output; the weights between the input layer and the middle layer are saved and converted into convolution kernels, which extract the low-dimensional features of an image by convolution.
In step 2, the features of the RGB image and the depth image are each assigned a random weight, which is obtained adaptively through training. Each feature is multiplied by its weight, a pair of random biases is added, and the results are summed to obtain the fusion feature.
Compared with the background art, the technical scheme above has the following advantages:
1. The classification accuracy of the invention improves further on current RGB-D image classification methods, and the method can identify the intended target even in complex scenes where object colors are similar.
2. The invention solves the problem of making full use of the RGB information and depth information of an image within a relatively small network framework, improving both the classification accuracy on RGB-D images and the accuracy of target detection. At the same time, the trained network model is relatively small and occupies little storage when ported to an embedded platform, which facilitates the popularization and application of the recognition method on embedded platforms.
Brief description of the drawings
Fig. 1 is a schematic diagram of the sparse autoencoder design;
Fig. 2 is a schematic diagram of the classification network structure;
Fig. 3 is a schematic diagram of the fusion of the RGB image and the depth image;
Fig. 4 is a schematic diagram of the target detection network framework.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein serve only to illustrate the invention and are not intended to limit it.
Embodiment
The invention discloses an RGB-D image target recognition method based on deep learning, comprising the following steps:
Step 1: extract data features. A sparse autoencoder learns the mapping from the input to itself, and the learned weights are converted into convolution kernels that extract the low-dimensional features of the RGB image and the depth image respectively. Sparse autoencoders extract low-dimensional features efficiently, which benefits the subsequent convolutional extraction of important information.
Referring to the schematic diagram of the sparse autoencoder in Fig. 1, the sparse autoencoder consists of an input layer, a middle layer, and an output layer. Its purpose is to learn the identity mapping, i.e., to make the output of the output layer equal the input of the input layer as closely as possible. The middle layer is designed with 100 neurons, more than the 64 neurons of the input and output layers, which guarantees that the features obtained by the middle layer are sparse and better suited to extracting low-dimensional features. After learning this identity mapping over thousands of pictures, the parameters w from the input layer to the middle layer are retained and converted into convolution kernels; the number of kernel channels after conversion equals the number of middle-layer neurons. Convolving the original image with these kernels is better suited to extracting the image's low-dimensional features.
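The weight-to-kernel conversion described above can be sketched in numpy. This is a minimal illustration, not the patent's implementation: the sparsity penalty is omitted for brevity, the training patches and the test image are random stand-ins, and everything except the 64-unit input/output layers and 100-unit middle layer is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the description: 8x8 input patches (64 units),
# 100 middle-layer units so the hidden code is overcomplete.
PATCH, HIDDEN = 8, 100
N_IN = PATCH * PATCH  # 64

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training data: random 8x8 patches standing in for image patches.
X = rng.random((500, N_IN))

# Tied-weight autoencoder parameters.
W = rng.normal(0, 0.1, (N_IN, HIDDEN))
b1 = np.zeros(HIDDEN)
b2 = np.zeros(N_IN)

# A few plain gradient steps on reconstruction error
# (the sparsity term of a true sparse autoencoder is omitted).
lr = 0.5
for _ in range(200):
    H = sigmoid(X @ W + b1)           # middle-layer code
    Xhat = sigmoid(H @ W.T + b2)      # reconstruction of the input
    d_out = (Xhat - X) * Xhat * (1 - Xhat)
    d_hid = (d_out @ W) * H * (1 - H)
    gW = X.T @ d_hid + d_out.T @ H    # gradient through both tied paths
    W -= lr * gW / len(X)
    b1 -= lr * d_hid.mean(0)
    b2 -= lr * d_out.mean(0)

# Convert the learned input->middle weights into 100 conv kernels
# of size 8x8, as the patent describes.
kernels = W.T.reshape(HIDDEN, PATCH, PATCH)

def conv2d_valid(img, k):
    """Plain valid-mode 2-D correlation."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

image = rng.random((32, 32))          # stand-in grayscale image
feature_maps = np.stack([conv2d_valid(image, k) for k in kernels])
print(feature_maps.shape)             # one 25x25 map per kernel
```

The resulting 100-channel feature stack is the "low-dimensional feature" that the later convolutional layers would consume.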
Step 2: build the fusion network. Using the low-dimensional features extracted in step 1, the convolution and pooling layer parameters of the RGB classification network and the depth classification network are tuned separately to maximize their individual classification accuracy and obtain the respective high-dimensional features. The two features are then fused: each is multiplied by a coefficient (a and b respectively) and the results are added to obtain the fusion feature F. In this way the network learns adaptive weights for the two features and fully exploits their respective strengths.
Referring to the structural schematic of the classification network in Fig. 2, it comprises an RGB image feature extraction part, a depth image feature extraction part, and a feature fusion and classification part.
The RGB image feature extraction part comprises one sparse autoencoder and three convolutional layers. The sparse autoencoder extracts the low-dimensional features of the image, and the three convolutional layers extract the high-dimensional features of the RGB image layer by layer. After the three convolutional layers there are also a regularization layer and an activation function layer, which effectively avoids the risk of overfitting.
The depth image feature extraction part comprises one sparse autoencoder and one convolutional layer. The raw depth map pixels are first stretched to the range 0 to 255, which increases the contrast of the depth map and makes its features clearer. A sparse autoencoder then extracts the low-dimensional features of the depth map, and one convolutional layer further extracts its high-dimensional features; an activation function and regularization are likewise added here to avoid overfitting.
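The contrast stretch described above amounts to a linear rescaling of the raw depth values to the full 0-255 range. A minimal sketch (the function name and the sample depth values are hypothetical):

```python
import numpy as np

def stretch_depth(depth):
    """Linearly stretch a raw depth map to the full 0-255 range,
    increasing contrast before feature extraction."""
    d = depth.astype(np.float64)
    lo, hi = d.min(), d.max()
    if hi == lo:                       # flat map: nothing to stretch
        return np.zeros_like(d, dtype=np.uint8)
    return np.rint((d - lo) / (hi - lo) * 255).astype(np.uint8)

raw = np.array([[500, 800], [650, 950]])   # hypothetical raw depths in mm
print(stretch_depth(raw))
```

The nearest depth maps to 0 and the farthest to 255, so the relative ordering of depths is preserved while the dynamic range is maximized.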
The fusion and classification part first fuses the RGB high-dimensional features and the depth high-dimensional features obtained above, then classifies. In the final training, all three parts are trained jointly: the predicted classes are compared with the ground-truth labels to obtain the classification accuracy, and the network parameters are tuned until classification performance is optimal.
As shown in the fusion schematic of Fig. 3, the RGB image features and the depth image features are first assigned random initial weights a and b, together with a pair of random biases c and d. These initial weights and biases are continuously updated by training the whole network. The weighted RGB features and depth features are then added together with the learned biases c and d to obtain the fusion feature. The assigned weights let the network autonomously adjust, during training, the proportions of the RGB features and the depth features within the fusion feature, guaranteeing that the fusion feature balances the color attributes and shape attributes of the RGB-D image. Finally, the fused image features are fed into a convolutional layer to obtain the higher-dimensional fusion feature F.
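The fusion rule above can be sketched as follows. In the patent the weights a, b and biases c, d are learned jointly with the rest of the network; here they are simply random stand-ins, and the feature vectors and their length are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in high-dimensional features from the two branches.
f_rgb = rng.random(128)
f_depth = rng.random(128)

# Randomly initialised scalar weights and biases; in the patent these
# are updated by backpropagation through the whole network.
a, b = rng.random(2)
c, d = rng.random(2)

def fuse(f_rgb, f_depth, a, b, c, d):
    """Weighted fusion: each branch is scaled by its weight, shifted
    by its bias, and the two results are summed."""
    return (a * f_rgb + c) + (b * f_depth + d)

F = fuse(f_rgb, f_depth, a, b, c, d)
print(F.shape)
```

Because the operation is differentiable in a, b, c, d, gradient descent can shift the balance toward whichever modality is more informative for the task.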
Step 3: feed the fusion feature F into a classifier and tune the classifier's parameters until the classification accuracy is best. The tuned fusion network then serves as the backbone of a target detection framework, which is trained on a labeled data set; the trained model can be used for target recognition. By transplanting a classification network already tuned for classification accuracy into the target detection framework in this way, the accuracy of target detection is effectively improved.
As shown in the structural schematic of the target detection network framework in Fig. 4, the framework consists of a region proposal network and a backbone network. The region proposal network determines the approximate region containing the target; this part follows the Faster R-CNN definition and needs no change. The backbone network determines the class of the object within the target region; here it is the fusion network of RGB image and depth image described above. The predicted class labels are compared with the prepared ground-truth labels to obtain the classification loss, which is trained together with the region proposal loss defined by the framework. The model obtained after training can be used directly to detect the regions and classes of the corresponding targets in an RGB-D image. Because the backbone network has few parameters and a simple structure, the final model occupies little memory and is convenient to port to embedded platforms.
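The backbone transplant described above can be sketched structurally. This is only an illustrative skeleton under assumed names: the real region-proposal stage is Faster R-CNN's and the real backbone is the trained fusion classifier, both replaced here by toy stubs:

```python
import numpy as np

class FusionBackbone:
    """Stand-in for the trained RGB/depth fusion classifier that is
    transplanted into the detection framework as its backbone."""
    def classify(self, rgb_roi, depth_roi):
        # Real version: sparse-AE conv features -> weighted fusion ->
        # classifier. Here: a toy intensity threshold.
        score = float(rgb_roi.mean() + depth_roi.mean())
        return "target" if score > 1.0 else "background"

class RegionProposer:
    """Stub for the Faster R-CNN region-proposal stage, which the
    patent reuses unchanged."""
    def propose(self, image):
        h, w = image.shape[:2]
        return [(0, 0, h // 2, w // 2), (h // 4, w // 4, h, w)]

def detect(rgb, depth, proposer, backbone):
    """Propose regions, then classify each region with the backbone."""
    results = []
    for (y0, x0, y1, x1) in proposer.propose(rgb):
        label = backbone.classify(rgb[y0:y1, x0:x1], depth[y0:y1, x0:x1])
        results.append(((y0, x0, y1, x1), label))
    return results

rgb = np.full((16, 16), 0.8)     # stand-in RGB intensity plane
depth = np.full((16, 16), 0.8)   # stand-in stretched depth map
dets = detect(rgb, depth, RegionProposer(), FusionBackbone())
print(dets)
```

The design point is that the proposer and backbone are independent components: a backbone already optimized for classification accuracy can be dropped in without touching the proposal stage.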
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any changes or substitutions that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.
Claims (3)
1. An RGB-D image target recognition method based on deep learning, characterized by comprising the following steps:
Step 1: obtain low-dimensional features of the RGB image and the depth image using sparse autoencoders;
Step 2: further extract high-dimensional features of the RGB image and the depth image using convolutional networks, and merge the obtained high-dimensional features to obtain a fusion feature;
Step 3: feed the fused feature into a classifier for training, tune the parameters until classification performance is optimal, transplant the tuned network into a target detection framework, train the entire detection framework, and use the trained model for target recognition.
2. The RGB-D image target recognition method based on deep learning according to claim 1, characterized in that: in step 1, the sparse autoencoder comprises an input layer, a middle layer, and an output layer; it learns the mapping between input and output, the weights between the input layer and the middle layer are saved and converted into convolution kernels, and the kernels are used to extract the low-dimensional features of an image by convolution.
3. The RGB-D image target recognition method based on deep learning according to claim 2, characterized in that: in step 2, the features of the RGB image and the depth image are each assigned a random weight, which is obtained adaptively through training; each feature is multiplied by its weight, a pair of random biases is added, and the results are summed to obtain the fusion feature.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811372149.7A CN109543697A (en) | 2018-11-16 | 2018-11-16 | RGB-D image target recognition method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811372149.7A CN109543697A (en) | 2018-11-16 | 2018-11-16 | RGB-D image target recognition method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109543697A true CN109543697A (en) | 2019-03-29 |
Family
ID=65847915
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811372149.7A Pending CN109543697A (en) | 2018-11-16 | 2018-11-16 | RGB-D image target recognition method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109543697A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106650655A (en) * | 2016-12-16 | 2017-05-10 | 北京工业大学 | Action detection model based on convolutional neural network |
CN106778810A (en) * | 2016-11-23 | 2017-05-31 | 北京联合大学 | Original image layer fusion method and system based on RGB feature Yu depth characteristic |
CN106874879A (en) * | 2017-02-21 | 2017-06-20 | 华南师范大学 | Handwritten Digit Recognition method based on multiple features fusion and deep learning network extraction |
CN106952220A (en) * | 2017-03-14 | 2017-07-14 | 长沙全度影像科技有限公司 | A kind of panoramic picture fusion method based on deep learning |
CN107369147A (en) * | 2017-07-06 | 2017-11-21 | 江苏师范大学 | Image interfusion method based on self-supervision study |
CN107730553A (en) * | 2017-11-02 | 2018-02-23 | 哈尔滨工业大学 | A kind of Weakly supervised object detecting method based on pseudo- true value search method |
US20180068429A1 (en) * | 2015-04-15 | 2018-03-08 | Institute Of Automation Chinese Academy Of Sciences | Image Steganalysis Based on Deep Learning |
Non-Patent Citations (4)
Title |
---|
LIEFENG BO et al.: "Unsupervised Feature Learning for RGB-D Based Object Recognition", Experimental Robotics *
LIU Fan et al.: "RGB-D Image Joint Detection Based on a Two-Stream Convolutional Neural Network", Laser & Optoelectronics Progress *
LU Liangfeng et al.: "Object Recognition Algorithm Based on the Fusion of RGB Features and Depth Features", Computer Engineering *
LU Liangfeng: "Research on Deep Learning Algorithms for RGB-D Object Recognition", China Master's Theses Full-Text Database, Information Science and Technology *
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110084809A (en) * | 2019-05-06 | 2019-08-02 | 成都医云科技有限公司 | Diabetic retinopathy data processing method, device and electronic equipment |
CN110084809B (en) * | 2019-05-06 | 2021-03-16 | 成都医云科技有限公司 | Diabetic retinopathy data processing method and device and electronic equipment |
CN110111351A (en) * | 2019-05-10 | 2019-08-09 | 电子科技大学 | Merge the pedestrian contour tracking of RGBD multi-modal information |
CN110111351B (en) * | 2019-05-10 | 2022-03-25 | 电子科技大学 | Pedestrian contour tracking method fusing RGBD multi-modal information |
CN110286415A (en) * | 2019-07-12 | 2019-09-27 | 广东工业大学 | Safety check contraband detecting method, apparatus, equipment and computer readable storage medium |
CN112204567A (en) * | 2019-09-17 | 2021-01-08 | 深圳市大疆创新科技有限公司 | Tree species identification method and device based on machine vision |
WO2021051268A1 (en) * | 2019-09-17 | 2021-03-25 | 深圳市大疆创新科技有限公司 | Machine vision-based tree type identification method and apparatus |
WO2021088300A1 (en) * | 2019-11-09 | 2021-05-14 | 北京工业大学 | Rgb-d multi-mode fusion personnel detection method based on asymmetric double-stream network |
CN113642466A (en) * | 2019-11-27 | 2021-11-12 | 马上消费金融股份有限公司 | Living body detection and model training method, apparatus and medium |
CN113642466B (en) * | 2019-11-27 | 2022-11-01 | 马上消费金融股份有限公司 | Living body detection and model training method, apparatus and medium |
CN110929696A (en) * | 2019-12-16 | 2020-03-27 | 中国矿业大学 | Remote sensing image semantic segmentation method based on multi-mode attention and self-adaptive fusion |
CN111401442A (en) * | 2020-03-16 | 2020-07-10 | 中科立业(北京)科技有限公司 | Fruit identification method based on deep learning |
CN111526286B (en) * | 2020-04-20 | 2021-11-02 | 苏州智感电子科技有限公司 | Method and system for controlling motor motion and terminal equipment |
CN111486798B (en) * | 2020-04-20 | 2022-08-26 | 苏州智感电子科技有限公司 | Image ranging method, image ranging system and terminal equipment |
CN111486798A (en) * | 2020-04-20 | 2020-08-04 | 苏州智感电子科技有限公司 | Image ranging method, image ranging system and terminal equipment |
CN111526286A (en) * | 2020-04-20 | 2020-08-11 | 苏州智感电子科技有限公司 | Method and system for controlling motor motion and terminal equipment |
CN111753658A (en) * | 2020-05-20 | 2020-10-09 | 高新兴科技集团股份有限公司 | Post sleep warning method and device and computer equipment |
CN111898671A (en) * | 2020-07-27 | 2020-11-06 | 中国船舶工业综合技术经济研究院 | Target identification method and system based on fusion of laser imager and color camera codes |
CN112016595A (en) * | 2020-08-05 | 2020-12-01 | 清华大学 | Image classification method and device, electronic equipment and readable storage medium |
CN112380780A (en) * | 2020-11-27 | 2021-02-19 | 中国运载火箭技术研究院 | Symmetric scene grafting method for asymmetric confrontation scene self-game training |
CN113340266A (en) * | 2021-06-02 | 2021-09-03 | 江苏豪杰测绘科技有限公司 | Indoor space surveying and mapping system and method |
CN113592812A (en) * | 2021-07-29 | 2021-11-02 | 华南师范大学 | Sketch picture evaluation method and device |
CN113705578A (en) * | 2021-09-10 | 2021-11-26 | 北京航空航天大学 | Bile duct form identification method and device |
CN114494594A (en) * | 2022-01-18 | 2022-05-13 | 中国人民解放军63919部队 | Astronaut operating equipment state identification method based on deep learning |
CN114494594B (en) * | 2022-01-18 | 2023-11-28 | 中国人民解放军63919部队 | Deep learning-based astronaut operation equipment state identification method |
CN115909182A (en) * | 2022-08-09 | 2023-04-04 | 哈尔滨市科佳通用机电股份有限公司 | Method for identifying wear fault image of brake pad of motor train unit |
CN115909182B (en) * | 2022-08-09 | 2023-08-08 | 哈尔滨市科佳通用机电股份有限公司 | Method for identifying abrasion fault image of brake pad of motor train unit |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109543697A (en) | RGB-D image target recognition method based on deep learning | |
Shao et al. | Performance evaluation of deep feature learning for RGB-D image/video classification | |
Dias et al. | Apple flower detection using deep convolutional networks | |
CN110148120B (en) | Intelligent disease identification method and system based on CNN and transfer learning | |
CN112784763B (en) | Expression recognition method and system based on local and overall feature adaptive fusion | |
CN109784280A (en) | Human bodys' response method based on Bi-LSTM-Attention model | |
CN106897673B (en) | Retinex algorithm and convolutional neural network-based pedestrian re-identification method | |
CN110555465B (en) | Weather image identification method based on CNN and multi-feature fusion | |
CN112101175A (en) | Expressway vehicle detection and multi-attribute feature extraction method based on local images | |
CN109002755B (en) | Age estimation model construction method and estimation method based on face image | |
Liu et al. | The method of insulator recognition based on deep learning | |
CN108154102A (en) | A kind of traffic sign recognition method | |
CN106960176B (en) | Pedestrian gender identification method based on transfinite learning machine and color feature fusion | |
CN110991349B (en) | Lightweight vehicle attribute identification method based on metric learning | |
CN109033994A (en) | A kind of facial expression recognizing method based on convolutional neural networks | |
CN108596256B (en) | Object recognition classifier construction method based on RGB-D | |
CN113470076B (en) | Multi-target tracking method for yellow feather chickens in flat raising chicken house | |
CN112164002A (en) | Training method and device for face correction model, electronic equipment and storage medium | |
CN110705379A (en) | Expression recognition method of convolutional neural network based on multi-label learning | |
CN109360179A (en) | A kind of image interfusion method, device and readable storage medium storing program for executing | |
CN109508640A (en) | A kind of crowd's sentiment analysis method, apparatus and storage medium | |
CN111160327B (en) | Expression recognition method based on lightweight convolutional neural network | |
CN117496567A (en) | Facial expression recognition method and system based on feature enhancement | |
CN112598013A (en) | Computer vision processing method based on neural network | |
CN116543338A (en) | Student classroom behavior detection method based on gaze target estimation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
20190329 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190329 |