CN107506736A - Online education video fineness picture intercept method based on deep learning - Google Patents

Online education video fineness picture intercept method based on deep learning Download PDF

Info

Publication number
CN107506736A
CN107506736A CN201710756202.2A
Authority
CN
China
Prior art keywords
picture
layer
deep learning
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710756202.2A
Other languages
Chinese (zh)
Inventor
熊利
陈靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dasheng On-Line Technology Co Ltd
Original Assignee
Beijing Dasheng On-Line Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dasheng On-Line Technology Co Ltd filed Critical Beijing Dasheng On-Line Technology Co Ltd
Priority to CN201710756202.2A priority Critical patent/CN107506736A/en
Publication of CN107506736A publication Critical patent/CN107506736A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47 Detecting features for summarising video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/192 Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194 References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a deep learning-based method for capturing exquisite pictures from online education videos. An online education video is input together with the number N of pictures to be captured; pictures are captured one at a time in chronological order; each captured picture is sent to a deep learning system; the deep learning system scores the picture; and according to its score the picture is judged to be in the top N or not. By means of deep learning, the method captures the exquisite pictures present in an online education video and improves the efficiency of finding and screening them. Users can more easily obtain the exquisite pictures in an online education video, capturing the finest moments of their online classes, which improves user satisfaction, enhances the user experience, increases the spread of the product, and makes the pictures convenient for users to share and view.

Description

Online education video fineness picture intercept method based on deep learning
Technical field
The present invention relates to a picture capture method, and in particular to a deep learning-based method for capturing exquisite pictures from online education videos.
Background technology
At present, pictures are mainly captured from videos in two ways. First, a picture is captured through video player software: the user must play the video to the corresponding position and then capture the frame, which takes a great deal of time and is pure manual labor. Second, pictures are captured from the video at a specified position or a specified time, in which case the captured pictures are simply frames at arbitrary points.
Capturing the exquisite pictures in a video requires watching the video repeatedly or capturing a large number of frames at different points and screening them, which consumes a great deal of manpower and material resources and is impractical when pictures must be captured from a large number of videos.
Summary of the invention
In view of the above deficiencies in the prior art, the present invention provides a deep learning-based method for capturing exquisite pictures from online education videos.
The technical solution used in the present invention is:
A deep learning-based method for capturing exquisite pictures from online education videos comprises the following steps:
Step 1, start;
Step 2, input an online education video and the number N of pictures to be captured;
Step 3, capture one picture from the video in chronological order;
Step 4, send the captured picture to the deep learning system;
Step 5, the deep learning system scores the picture;
Step 6, according to the picture's score, judge whether the picture ranks in the top N; if it does, retain the picture; if it does not, delete the picture;
Step 7, judge whether the video has been fully captured;
If the video has been fully captured, save the generated top N pictures; otherwise, perform step 2 and capture the next picture in chronological order;
Step 8, obtain the top N exquisite pictures;
Step 9, end.
The deep learning system establishes its model as follows:
Establish the input layer;
Convert the input layer data to convolutional layer data;
Convert the convolutional layer data to pooling layer data;
Convert the pooling layer data to fully connected layer data;
Convert the fully connected layer data to Softmax layer data.
The input layer is the input of the whole neural network. In a convolutional neural network that processes images, the input layer represents the pixel matrix of a picture. The pixel matrix is a three-dimensional matrix: its length and width represent the size of the image, and its depth represents the color channels of the image.
The node matrix processed by the convolutional layer becomes deeper, so it can be seen in Fig. 1 that the depth of the node matrix increases after the convolutional layer; the convolutional layer is also referred to as a filter;
The filter converts a sub-node matrix of the current layer of the neural network into a unit node matrix of the next layer; a unit node matrix is a node matrix whose length and width are both 1 but whose depth is unlimited;
The convolutional layer performs forward propagation with a filter, i.e., the process of computing the nodes of the right-hand unit matrix from the nodes of the left-hand sub-matrix;
Assume that w^i_{x,y,z} denotes the weight of the filter input node (x, y, z) for the i-th node of the output unit node matrix, and that b^i denotes the bias term of the i-th output node; then the value g(i) of the i-th node in the unit matrix is:
g(i) = f\left(\sum_{x=1}^{2}\sum_{y=1}^{2}\sum_{z=1}^{3} a_{x,y,z} \times w_{x,y,z}^{i} + b^{i}\right)
where a_{x,y,z} is the value of the filter input node (x, y, z) and f is the activation function;
According to the formula, the forward propagation of the convolutional layer structure moves a filter from the top-left corner of the current layer of the neural network to the bottom-right corner, computing each corresponding unit matrix along the way.
The pooling layer does not change the depth of the three-dimensional matrix, but reduces the size of the matrix;
The pooling operation converts a higher-resolution picture into a lower-resolution picture;
The pooling layer further reduces the number of nodes in the final fully connected layer;
The forward propagation of the pooling layer is also completed by moving a filter structure;
The filter of the pooling layer uses a maximum or average operation;
A pooling layer that uses the maximum operation is called a max pooling layer, and a pooling layer that uses the average operation is called an average pooling layer.
Fully connected layers: after the processing of the convolutional layers and pooling layers, the final classification result in a convolutional neural network is given by one or two fully connected layers;
After the processing of the convolutional layers and pooling layers, the information in the image has been abstracted into features with higher information content; the convolutional layers and pooling layers constitute a process of automatic image feature extraction.
The Softmax layer is used for classification; through the Softmax layer, the probability distribution of the current sample over the different classes can be obtained;
Assume the original neural network outputs are y_1, y_2, ..., y_n; then the output after Softmax regression is:
\mathrm{Softmax}(y)_i = y_i' = \frac{e^{y_i}}{\sum_{j=1}^{n} e^{y_j}}
The beneficial effects of the present invention compared with the prior art:
The deep learning-based method of the present invention for capturing exquisite pictures from online education videos captures, by means of deep learning, the exquisite pictures present in an online education video; it improves the efficiency of finding and screening exquisite pictures and frees people from the manual work of finding and screening them. Users can more easily obtain the exquisite pictures in an online education video, capturing the finest moments of their online classes, which improves user satisfaction, enhances the user experience, and increases the spread of the product. By capturing exquisite pictures from the video, the best moments of an entire class can be viewed without watching the whole video file, which is also very convenient for users. Compared with video, pictures occupy less space, so where wireless network traffic is scarce and expensive they are more convenient for users to share and view.
After the deep learning model is trained, pictures are captured in order, and the score output by the deep learning system for each picture decides whether it is kept, so that the most exquisite pictures in the whole video are obtained.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the deep learning-based method of the present invention for capturing exquisite pictures from online education videos;
Fig. 2 is a schematic flow chart of establishing the deep learning model of the deep learning-based method of the present invention for capturing exquisite pictures from online education videos.
Embodiment
The present invention is described in detail below with reference to the drawings and embodiments:
As can be seen from Figs. 1 and 2, a deep learning-based method for capturing exquisite pictures from online education videos comprises the following steps:
Step 1, start;
Step 2, input an online education video and the number N of pictures to be captured;
Step 3, capture one picture from the video in chronological order;
Step 4, send the captured picture to the deep learning system;
Step 5, the deep learning system scores the picture; the full score is 100 points;
Step 6, according to the picture's score, judge whether the picture ranks in the top N; if it does, retain the picture; if it does not, delete the picture;
Step 7, judge whether the video has been fully captured;
that is, judge whether the picture is the last one captured from the online education video;
If the video has been fully captured, save the generated top N pictures; otherwise, perform step 2 and capture the next picture in chronological order;
Step 8, obtain the top N exquisite pictures;
Step 9, end.
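The flow of steps 1 to 9 can be written as a short loop. The following Python sketch is a minimal illustration only: the use of OpenCV (cv2) for frame capture, the frame_step sampling interval, and the score_picture function standing in for the deep learning system are assumptions for illustration and are not specified above.

import cv2
import heapq

def top_n_pictures(video_path, n, score_picture, frame_step=30):
    """Capture frames in chronological order, score each one, and keep the top N."""
    cap = cv2.VideoCapture(video_path)
    best = []          # min-heap of (score, frame_index, frame)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # step 7: the video has been fully captured
            break
        if index % frame_step == 0:      # step 3: capture one picture at a time
            score = score_picture(frame)             # steps 4-5: send to the scorer
            item = (score, index, frame)
            if len(best) < n:
                heapq.heappush(best, item)           # step 6: keep if in the top N
            elif score > best[0][0]:
                heapq.heapreplace(best, item)        # replace the current worst picture
        index += 1
    cap.release()
    return [frame for _, _, frame in sorted(best, reverse=True)]  # step 8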
The deep learning system establishes its model as follows:
Establish the input layer;
Convert the input layer data to convolutional layer data;
Convert the convolutional layer data to pooling layer data;
Convert the pooling layer data to fully connected layer data;
Convert the fully connected layer data to Softmax layer data.
The input layer is the input of the whole neural network. In a convolutional neural network that processes images, the input layer represents the pixel matrix of a picture. The pixel matrix is a three-dimensional matrix: its length and width represent the size of the image, and its depth represents the color channels of the image.
In Fig. 1, for example, the depth of a black-and-white picture is 1, while under the RGB color model the depth of the image is 3. Starting from the input layer, the convolutional neural network converts the three-dimensional matrix of each layer into the three-dimensional matrix of the next layer through different neural network structures, up to the last fully connected layer.
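As a small illustration of the pixel matrix described above, the following sketch (NumPy is an assumption; no specific library is named here) shows the three-dimensional matrix shapes for a black-and-white and an RGB picture of size 32 × 32:

import numpy as np

gray = np.zeros((32, 32, 1), dtype=np.uint8)   # black-and-white picture: depth 1
rgb = np.zeros((32, 32, 3), dtype=np.uint8)    # RGB picture: depth 3

print(gray.shape)   # (32, 32, 1) -> length, width, color-channel depth
print(rgb.shape)    # (32, 32, 3)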
The convolutional layer is the most important part of a convolutional neural network. Unlike a traditional fully connected layer, the input of each node in the convolutional layer is a small block of the previous layer of the neural network, typically of size 3 × 3 or 5 × 5. The convolutional layer attempts to analyze each small block more deeply to obtain features with a higher level of abstraction.
The node matrix processed by the convolutional layer becomes deeper, so it can be seen in Fig. 1 that the depth of the node matrix increases after the convolutional layer; the convolutional layer is also referred to as a filter.
The filter converts a sub-node matrix of the current layer of the neural network into a unit node matrix of the next layer; a unit node matrix is a node matrix whose length and width are both 1 but whose depth is unlimited.
Convolutional layer 1 and convolutional layer 2 both perform forward propagation with a filter, i.e., the process of computing the nodes of the right-hand unit matrix from the nodes of the left-hand sub-matrix, as in Fig. 1;
Assume that w^i_{x,y,z} denotes the weight of the filter input node (x, y, z) for the i-th node of the output unit node matrix, and that b^i denotes the bias term of the i-th output node; then the value g(i) of the i-th node in the unit matrix is:
g(i) = f\left(\sum_{x=1}^{2}\sum_{y=1}^{2}\sum_{z=1}^{3} a_{x,y,z} \times w_{x,y,z}^{i} + b^{i}\right)
where a_{x,y,z} is the value of the filter input node (x, y, z) and f is the activation function;
According to the formula, the forward propagation of the convolutional layer structure moves a filter from the top-left corner of the current layer of the neural network to the bottom-right corner, computing each corresponding unit matrix along the way.
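The forward-propagation formula for g(i) can be illustrated with a short NumPy sketch. The 2 × 2 × 3 filter window matches the bounds of the formula above; the concrete values of a, w and b, and the choice of tanh as the activation function f, are illustrative assumptions.

import numpy as np

def g(a, w, b, f=np.tanh):
    """Value of one output node: f(sum over x, y, z of a[x,y,z] * w[x,y,z] + b)."""
    return f(np.sum(a * w) + b)

a = np.random.rand(2, 2, 3)   # values a_{x,y,z} of the filter input nodes
w = np.random.rand(2, 2, 3)   # weights w^i_{x,y,z} for the i-th output node
b = 0.1                       # bias term b^i
print(g(a, w, b))             # g(i) for this output node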
The pooling layer does not change the depth of the three-dimensional matrix, but reduces the size of the matrix;
The pooling operation converts a higher-resolution picture into a lower-resolution picture;
The pooling layer further reduces the number of nodes in the final fully connected layer, thereby reducing the number of parameters in the whole neural network;
Using a pooling layer both speeds up computation and helps prevent overfitting.
The forward propagation of the pooling layer is also completed by moving a filter structure;
The filter of the pooling layer uses a maximum or average operation;
A pooling layer that uses the maximum operation is called a max pooling layer (max pooling), and a pooling layer that uses the average operation is called an average pooling layer (average pooling).
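A minimal NumPy sketch of max pooling and average pooling with a 2 × 2 filter and a stride of 2, matching the pooling layers described above (the helper name pool and the sample matrix are illustrative):

import numpy as np

def pool(matrix, size=2, stride=2, op=np.max):
    """Reduce the length and width of a 2-D matrix; the depth is handled per channel."""
    h, w = matrix.shape
    out = np.empty((h // stride, w // stride))
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            out[i // stride, j // stride] = op(matrix[i:i + size, j:j + size])
    return out

x = np.arange(16).reshape(4, 4).astype(float)
print(pool(x, op=np.max))    # max pooling layer
print(pool(x, op=np.mean))   # average pooling layer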
As shown in Fig. 1, for the fully connected layers: after several rounds of processing by convolutional and pooling layers, the final classification result in a convolutional neural network is given by one or two fully connected layers;
After several rounds of processing by convolutional and pooling layers, the information in the image has been abstracted into features with higher information content; the convolutional layers and pooling layers constitute a process of automatic image feature extraction.
After feature extraction is completed, fully connected layers are still needed to complete the classification task.
The Softmax layer is used for classification; through the Softmax layer, the probability distribution of the current sample over the different classes can be obtained;
Assume the original neural network outputs are y_1, y_2, ..., y_n; then the output after Softmax regression is:
\mathrm{Softmax}(y)_i = y_i' = \frac{e^{y_i}}{\sum_{j=1}^{n} e^{y_j}}
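The Softmax transform above can be illustrated with a few lines of NumPy (the input values are arbitrary; subtracting the maximum is a standard numerical-stability step not mentioned in the text):

import numpy as np

def softmax(y):
    e = np.exp(y - np.max(y))   # subtract the max for numerical stability
    return e / np.sum(e)        # y'_i = e^{y_i} / sum_j e^{y_j}

y = np.array([2.0, 1.0, 0.1])
print(softmax(y))          # a probability distribution
print(softmax(y).sum())    # sums to 1.0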
A detailed example of the model is as follows:
For an original image of size 32 × 32 × 1, the convolutional neural network framework for image classification executes as follows:
First layer, convolutional layer 1:
Assume the first convolutional layer has a filter of size 5 × 5, a depth of 6, no zero padding, and a stride of 1. The output size is 32 - 5 + 1 = 28, with a depth of 6. This convolutional layer has a total of 5 × 5 × 1 × 6 + 6 = 156 parameters, of which 6 are bias terms. Because the next layer's node matrix has 28 × 28 × 6 = 4704 nodes and each node is connected to 5 × 5 = 25 nodes of the current layer, this convolutional layer has 4704 × (25 + 1) = 122304 connections.
Second layer, pooling layer 1:
The input of this layer is the output of the first layer, a node matrix of size 28 × 28 × 6. The filter used by this layer has a size of 2 × 2, with a stride of 2 in both length and width, so the output matrix size of this layer is 14 × 14 × 6.
Third layer, convolutional layer 2:
The input matrix size of this layer is 14 × 14 × 6, and the filter used has a size of 5 × 5 and a depth of 16. This layer does not use zero padding and has a stride of 1, so the output matrix size of this layer is 10 × 10 × 16. Following the standard convolutional layer, this layer has 5 × 5 × 6 × 16 + 16 = 2416 parameters and 10 × 10 × 16 × (25 + 1) = 41600 connections.
Fourth layer, pooling layer 2:
The input of this layer is the output of the third layer, a node matrix of size 10 × 10 × 16. The filter used by this layer has a size of 2 × 2, with a stride of 2 in both length and width, so the output matrix size of this layer is 5 × 5 × 16.
Fifth layer, fully connected layer 1:
The input matrix size of this layer is 5 × 5 × 16 and the number of output nodes is 120, giving a total of 5 × 5 × 16 × 120 + 120 = 48120 parameters.
Sixth layer, fully connected layer 2:
The number of input nodes of this layer is 120 and the number of output nodes is 84, giving a total of 120 × 84 + 84 = 10164 parameters.
Seventh layer, Softmax layer:
The input consists of 84 nodes; after Softmax processing, the output becomes a probability distribution. Cross entropy is then used to compute the distance between the predicted probability distribution and the true answer, so as to find the optimal result.
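The seven-layer architecture walked through above can be expressed, for illustration, with the Keras API of TensorFlow. The patent names TensorFlow but not Keras, and the number of output classes (2: common vs. exquisite, following the embodiment below) and the ReLU activations are assumptions; the layer output shapes match the sizes computed above.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),                      # input layer: 32 x 32 x 1
    layers.Conv2D(6, 5, strides=1, activation="relu"),    # conv layer 1 -> 28 x 28 x 6
    layers.MaxPooling2D(pool_size=2, strides=2),          # pooling layer 1 -> 14 x 14 x 6
    layers.Conv2D(16, 5, strides=1, activation="relu"),   # conv layer 2 -> 10 x 10 x 16
    layers.MaxPooling2D(pool_size=2, strides=2),          # pooling layer 2 -> 5 x 5 x 16
    layers.Flatten(),
    layers.Dense(120, activation="relu"),                 # fully connected layer 1
    layers.Dense(84, activation="relu"),                  # fully connected layer 2
    layers.Dense(2, activation="softmax"),                # Softmax layer (2 classes assumed)
])
model.summary()   # the printed output shapes match the sizes computed above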
Specific embodiment:
1. Implementation platform:
Windows, Linux, iOS, and Android operating systems.
2. Integrated software conditions:
The TensorFlow artificial intelligence framework.
3. Function realization:
110,000 hand-picked pictures are input into TensorFlow for learning; TFRecord files are generated to create a custom data source, and the model is obtained.
Specific implementation process:
Classify the pictures, i.e., 100,000 common pictures and 10,000 exquisite pictures. Set the two TensorFlow classes accordingly and specify the output TFRecord files. Select the Inception V4 pretrained model (Inception V4 has better performance and faster convergence), write a Python script to convert the pictures into binary format, package the labels and image data, and serialize them into strings. Execute the Python script to start training. Monitor Loss, Global_step and Batch through TensorBoard to check the training process. After training ends, the training set and test set are output to different subdirectories; observe the trend on the test set, and once the test results meet expectations, formally export the model in the form of a TFRecord file.
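The step of converting pictures to binary, packaging labels with image data, and serializing to strings can be sketched with TensorFlow's TFRecord utilities. The file names, the 0/1 label convention (0 = common, 1 = exquisite), and the helper function name are illustrative assumptions, not taken from the text above.

import tensorflow as tf

def write_tfrecord(image_paths, labels, out_path):
    """Serialize (image bytes, label) pairs into a TFRecord file."""
    with tf.io.TFRecordWriter(out_path) as writer:
        for path, label in zip(image_paths, labels):
            image_bytes = tf.io.read_file(path).numpy()   # raw encoded image data
            example = tf.train.Example(features=tf.train.Features(feature={
                "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())     # serialize to a string

# e.g. write_tfrecord(["exquisite_0001.jpg"], [1], "exquisite.tfrecord")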
Input an online education video together with the number N of pictures to capture. Capture one picture at a time in order, input the picture into TensorFlow, and use the model built in the previous step to recognize it and obtain the picture's score (the full score is 100 points); decide whether to keep the picture according to the score. If the picture's score is lower than those of the current top N pictures, discard the picture; if the score ranks in the top N, delete the last picture of the original top N and retain this picture. Continue with the next picture until every picture of the whole video has been analyzed. Finally, the N most exquisite pictures of the online education video are obtained.
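A sketch of scoring one captured picture with the trained model, which could serve as the score_picture function in the earlier loop sketch. Loading a Keras SavedModel from the hypothetical path "exquisite_model", resizing to 32 × 32 grayscale, and mapping the predicted probability of the "exquisite" class to a 0-100 score are all assumptions; the text above only states that the model outputs a score with a full mark of 100.

import tensorflow as tf

model = tf.keras.models.load_model("exquisite_model")   # hypothetical model path

def score_picture(frame):
    """Return a 0-100 score for one captured picture (frame: H x W x 3 RGB array;
    an OpenCV BGR frame would need cv2.cvtColor first)."""
    gray = tf.image.rgb_to_grayscale(frame)              # model expects 32 x 32 x 1 input
    small = tf.image.resize(gray, (32, 32)) / 255.0
    probs = model.predict(small[tf.newaxis, ...], verbose=0)[0]
    return float(probs[1] * 100.0)                       # class 1 = "exquisite" (assumed)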
The above is only a preferred embodiment of the present invention and does not limit the structure of the present invention in any form. Any simple modification, equivalent change, or adaptation made to the above embodiment in accordance with the technical spirit of the present invention falls within the scope of the technical solution of the present invention.

Claims (7)

1. A deep learning-based method for capturing exquisite pictures from online education videos, characterized in that it comprises the following steps:
Step 1, start;
Step 2, input an online education video and the number N of pictures to be captured;
Step 3, capture one picture from the video in chronological order;
Step 4, send the captured picture to the deep learning system;
Step 5, the deep learning system scores the picture;
Step 6, according to the picture's score, judge whether the picture ranks in the top N; if it does, retain the picture; if it does not, delete the picture;
Step 7, judge whether the video has been fully captured;
If the video has been fully captured, save the generated top N pictures; otherwise, perform step 2 and capture the next picture in chronological order;
Step 8, obtain the top N exquisite pictures;
Step 9, end.
2. The deep learning-based method for capturing exquisite pictures from online education videos according to claim 1, characterized in that the deep learning system establishes its model as follows:
Establish the input layer;
Convert the input layer data to convolutional layer data;
Convert the convolutional layer data to pooling layer data;
Convert the pooling layer data to fully connected layer data;
Convert the fully connected layer data to Softmax layer data.
3. The deep learning-based method for capturing exquisite pictures from online education videos according to claim 2, characterized in that:
The input layer is the input of the whole neural network; in a convolutional neural network that processes images, the input layer represents the pixel matrix of a picture; the pixel matrix is a three-dimensional matrix, the length and width of which represent the size of the image and the depth of which represents the color channels of the image.
4. The deep learning-based method for capturing exquisite pictures from online education videos according to claim 2, characterized in that:
The node matrix processed by the convolutional layer becomes deeper, so it can be seen in Fig. 1 that the depth of the node matrix increases after the convolutional layer; the convolutional layer is also referred to as a filter;
The filter converts a sub-node matrix of the current layer of the neural network into a unit node matrix of the next layer; a unit node matrix is a node matrix whose length and width are both 1 but whose depth is unlimited;
The convolutional layer performs forward propagation with a filter, i.e., the process of computing the nodes of the right-hand unit matrix from the nodes of the left-hand sub-matrix;
Assume that w^i_{x,y,z} denotes the weight of the filter input node (x, y, z) for the i-th node of the output unit node matrix, and that b^i denotes the bias term of the i-th output node; then the value g(i) of the i-th node in the unit matrix is:
g(i) = f\left(\sum_{x=1}^{2}\sum_{y=1}^{2}\sum_{z=1}^{3} a_{x,y,z} \times w_{x,y,z}^{i} + b^{i}\right)
where a_{x,y,z} is the value of the filter input node (x, y, z) and f is the activation function;
According to the formula, the forward propagation of the convolutional layer structure moves a filter from the top-left corner of the current layer of the neural network to the bottom-right corner, computing each corresponding unit matrix along the way.
5. The deep learning-based method for capturing exquisite pictures from online education videos according to claim 2, characterized in that:
The pooling layer does not change the depth of the three-dimensional matrix, but reduces the size of the matrix;
The pooling operation converts a higher-resolution picture into a lower-resolution picture;
The pooling layer further reduces the number of nodes in the final fully connected layer;
The forward propagation of the pooling layer is also completed by moving a filter structure;
The filter of the pooling layer uses a maximum or average operation;
A pooling layer that uses the maximum operation is called a max pooling layer, and a pooling layer that uses the average operation is called an average pooling layer.
6. The deep learning-based method for capturing exquisite pictures from online education videos according to claim 2, characterized in that:
After the processing of the convolutional layers and pooling layers, the final classification result in the convolutional neural network is given by one or two fully connected layers;
After the processing of the convolutional layers and pooling layers, the information in the image has been abstracted into features with higher information content; the convolutional layers and pooling layers constitute a process of automatic image feature extraction.
7. The deep learning-based method for capturing exquisite pictures from online education videos according to claim 2, characterized in that:
The Softmax layer is used for classification; through the Softmax layer, the probability distribution of the current sample over the different classes can be obtained;
Assume the original neural network outputs are y_1, y_2, ..., y_n; then the output after Softmax regression is:
\mathrm{Softmax}(y)_i = y_i' = \frac{e^{y_i}}{\sum_{j=1}^{n} e^{y_j}}.
CN201710756202.2A 2017-08-29 2017-08-29 Online education video fineness picture intercept method based on deep learning Pending CN107506736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710756202.2A CN107506736A (en) 2017-08-29 2017-08-29 Online education video fineness picture intercept method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710756202.2A CN107506736A (en) 2017-08-29 2017-08-29 Online education video fineness picture intercept method based on deep learning

Publications (1)

Publication Number Publication Date
CN107506736A true CN107506736A (en) 2017-12-22

Family

ID=60694122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710756202.2A Pending CN107506736A (en) 2017-08-29 2017-08-29 Online education video fineness picture intercept method based on deep learning

Country Status (1)

Country Link
CN (1) CN107506736A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108170811A (en) * 2017-12-29 2018-06-15 北京大生在线科技有限公司 Deep learning sample mask method based on online education big data
CN109285111A (en) * 2018-09-20 2019-01-29 广东工业大学 A kind of method, apparatus, equipment and the computer readable storage medium of font conversion
CN110363245A (en) * 2019-07-17 2019-10-22 上海掌学教育科技有限公司 Excellent picture screening technique, the apparatus and system of Online class

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096605A (en) * 2016-06-02 2016-11-09 史方 A kind of image obscuring area detection method based on degree of depth study and device
CN106503693A (en) * 2016-11-28 2017-03-15 北京字节跳动科技有限公司 The offer method and device of video front cover
CN106897673A (en) * 2017-01-20 2017-06-27 南京邮电大学 A kind of recognition methods again of the pedestrian based on retinex algorithms and convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096605A (en) * 2016-06-02 2016-11-09 史方 A kind of image obscuring area detection method based on degree of depth study and device
CN106503693A (en) * 2016-11-28 2017-03-15 北京字节跳动科技有限公司 The offer method and device of video front cover
CN106897673A (en) * 2017-01-20 2017-06-27 南京邮电大学 A kind of recognition methods again of the pedestrian based on retinex algorithms and convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王华利等 (Wang Huali et al.): "基于深度卷积神经网络的快速图像分类算法" [Fast image classification algorithm based on deep convolutional neural networks], 《计算机工程与应用》 [Computer Engineering and Applications] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108170811A (en) * 2017-12-29 2018-06-15 北京大生在线科技有限公司 Deep learning sample mask method based on online education big data
CN108170811B (en) * 2017-12-29 2022-07-15 北京大生在线科技有限公司 Deep learning sample labeling method based on online education big data
CN109285111A (en) * 2018-09-20 2019-01-29 广东工业大学 A kind of method, apparatus, equipment and the computer readable storage medium of font conversion
CN110363245A (en) * 2019-07-17 2019-10-22 上海掌学教育科技有限公司 Excellent picture screening technique, the apparatus and system of Online class

Similar Documents

Publication Publication Date Title
US20210390319A1 (en) Scene change method and system combining instance segmentation and cycle generative adversarial networks
CN110232696A (en) A kind of method of image region segmentation, the method and device of model training
CN109754017A (en) Based on separable three-dimensional residual error network and transfer learning hyperspectral image classification method
CN107066995A (en) A kind of remote sensing images Bridges Detection based on convolutional neural networks
CN104462494B (en) A kind of remote sensing image retrieval method and system based on unsupervised feature learning
CN110852227A (en) Hyperspectral image deep learning classification method, device, equipment and storage medium
CN107506722A (en) One kind is based on depth sparse convolution neutral net face emotion identification method
CN107229904A (en) A kind of object detection and recognition method based on deep learning
CN109784293A (en) Multi-class targets method for checking object, device, electronic equipment, storage medium
CN107016406A (en) The pest and disease damage image generating method of network is resisted based on production
CN107844795A (en) Convolutional neural networks feature extracting method based on principal component analysis
CN106372648A (en) Multi-feature-fusion-convolutional-neural-network-based plankton image classification method
CN107392925A (en) Remote sensing image terrain classification method based on super-pixel coding and convolutional neural networks
CN108256424A (en) A kind of high-resolution remote sensing image method for extracting roads based on deep learning
CN110222773A (en) Based on the asymmetric high spectrum image small sample classification method for decomposing convolutional network
CN107506793A (en) Clothes recognition methods and system based on weak mark image
CN106503729A (en) A kind of generation method of the image convolution feature based on top layer weights
CN107506736A (en) Online education video fineness picture intercept method based on deep learning
CN108009629A (en) A kind of station symbol dividing method based on full convolution station symbol segmentation network
CN107944459A (en) A kind of RGB D object identification methods
CN108010034A (en) Commodity image dividing method and device
CN106910188A (en) The detection method of airfield runway in remote sensing image based on deep learning
CN111597861A (en) System and method for automatically interpreting ground object of remote sensing image
CN108460399A (en) A kind of child building block builds householder method and system
CN109636764A (en) A kind of image style transfer method based on deep learning and conspicuousness detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171222
