CN108431823B - System and method for processing content using convolutional neural network - Google Patents

Info

Publication number
CN108431823B
Authority
CN
China
Prior art keywords
dimensional
video frames
layers
neural network
convolutional neural
Prior art date
Legal status
Active
Application number
CN201580085375.5A
Other languages
Chinese (zh)
Other versions
CN108431823A
Inventor
Balamanohar Paluri
Du Le Hong Tran
Lubomir Bourdev
Robert D. Fergus
Current Assignee
Meta Platforms Inc
Original Assignee
Meta Platforms Inc
Priority date
Filing date
Publication date
Application filed by Meta Platforms Inc
Publication of CN108431823A
Application granted
Publication of CN108431823B

Classifications

    • G06V10/82: Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/44: Event detection in video content
    • G06F18/24137: Classification techniques based on distances to cluster centroïds
    • G06N3/04: Neural network architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06T3/4046: Scaling of whole images or parts thereof using neural networks
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/269: Analysis of motion using gradient-based methods
    • G06T7/55: Depth or shape recovery from multiple images
    • G06V10/454: Integrating biologically inspired filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/7715: Feature extraction, e.g. by transforming the feature space
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06N3/084: Learning methods using backpropagation, e.g. gradient descent
    • G06T2207/10016: Video; image sequence
    • G06T2207/20084: Artificial neural networks [ANN]

Abstract

Systems, methods, and non-transitory computer-readable media may obtain a set of video frames at a first resolution. The set of video frames is processed using a convolutional neural network to output one or more signals, the convolutional neural network comprising (i) a set of two-dimensional convolutional layers and (ii) a set of three-dimensional convolutional layers, wherein the processing causes the set of video frames to be reduced to a second resolution. The one or more signals are processed using a set of three-dimensional deconvolution layers of the convolutional neural network. One or more outputs corresponding to the set of video frames are obtained from the convolutional neural network.

Description

System and method for processing content using convolutional neural network
Technical Field
The present technology relates to the field of media processing. More particularly, the present technology relates to techniques for processing video content using convolutional neural networks.
Background
People now often utilize computing devices or systems for a variety of purposes. For example, users may use their computing devices (or systems) to interact with each other, create content, share information, and access information. In some cases, a user of a computing device may capture or record media content, such as video content, with a camera or other image sensor of the computing device.
Disclosure of Invention
Various embodiments of the present disclosure may include systems, methods, and non-transitory computer-readable media configured to obtain a set of video frames at a first resolution. The set of video frames is processed using a convolutional neural network to output one or more signals, the convolutional neural network comprising (i) a set of two-dimensional convolutional layers and (ii) a set of three-dimensional convolutional layers, wherein the processing causes the set of video frames to be reduced to a second resolution. The one or more signals are processed using a set of three-dimensional deconvolution layers of the convolutional neural network. One or more outputs corresponding to the set of video frames are obtained from the convolutional neural network.
In one embodiment, the system, method, and non-transitory computer-readable medium are configured to obtain one or more respective feature descriptors for one or more voxels in the set of video frames, wherein each feature descriptor is indicative of an identified scene, object, or action.
In one embodiment, systems, methods, and non-transitory computer-readable media are configured to obtain respective optical flows for one or more voxels in the set of video frames, wherein the optical flow of a voxel describes at least a predicted direction and magnitude for the voxel.
In one embodiment, a system, method, and non-transitory computer readable medium are configured to obtain respective depth measurements for one or more voxels in the set of video frames.
In one embodiment, the system, method, and non-transitory computer readable medium are configured to input at least a portion of a signal generated by the set of three-dimensional convolutional layers to the set of three-dimensional deconvolution layers, the three-dimensional deconvolution layers trained to apply at least one three-dimensional deconvolution operation to a portion of the signal.
In one embodiment, the at least one three-dimensional deconvolution operation deconvolves the portion of the signal based at least on one or more three-dimensional filters, and the three-dimensional deconvolution operation causes the representation of the video content to increase in signal size.
In one embodiment, the system, method, and non-transitory computer-readable medium are configured to input a representation of the set of video frames to the set of two-dimensional convolutional layers trained to apply at least one two-dimensional convolution operation to a representation of video content to output a set of first signals, and to input at least a portion of the set of first signals to the set of three-dimensional convolutional layers trained to apply at least one three-dimensional convolution operation to the set of first signals to output a set of second signals.
In one embodiment, the at least one two-dimensional convolution operation convolves the representation of the video content based on at least one or more two-dimensional filters, and wherein the two-dimensional convolution operation causes the representation of the video content to be reduced in signal size.
In one embodiment, the at least one three-dimensional convolution operation convolves the set of first signals based on at least one or more three-dimensional filters, and wherein the three-dimensional convolution operation causes a reduction in signal size of the representation of the video content.
In one embodiment, the set of video frames includes more than two video frames.
It is to be understood that many other features, applications, embodiments and/or variations of the disclosed technology will be apparent from the drawings and the detailed description that follows. Additional and/or alternative implementations of the structures, systems, non-transitory computer-readable media, and methods described herein may be employed without departing from the principles of the disclosed technology.
Embodiments according to the invention are disclosed in particular in the accompanying claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, such as a method, may be claimed in another claim category, such as a system. Dependencies or references in the appended claims are selected for formal reasons only. However, any subject matter resulting from a specific reference to any preceding claim(s) (particularly multiple references) may also be claimed, such that any combination of a claim and its features is disclosed and may be claimed regardless of the dependency selected in the appended claims. The claimed subject matter comprises not only the combination of features as set forth in the appended claims, but also any other combination of features in the claims, wherein each feature mentioned in the claims may be combined with any other feature or combination of features in the claims. Furthermore, any embodiments and features described or depicted herein may be protected in the claims alone and/or in any combination with any embodiments or features described or depicted herein, or with any features of the claims that follow.
Drawings
Fig. 1 illustrates an example system including an example content analyzer module configured to analyze video content using one or more convolutional neural networks, according to an embodiment of the present disclosure;
fig. 2 illustrates an example neural network module configured to analyze video content, in accordance with an embodiment of the present disclosure;
FIG. 3 shows an exemplary diagram of a convolutional neural network, according to an embodiment of the present disclosure;
FIG. 4 shows another example diagram of a convolutional neural network, according to an embodiment of the present disclosure;
FIG. 5 illustrates an example method for processing a set of video frames using a convolutional neural network, in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates another example method for processing a set of video frames using a convolutional neural network, in accordance with embodiments of the present disclosure;
FIG. 7 illustrates a network diagram of an example system including an example social-networking system that may be used in various situations, according to embodiments of the present disclosure;
fig. 8 illustrates an example of a computer system or computing device that may be used in various situations according to embodiments of the present disclosure.
The figures depict various embodiments of the disclosed technology for purposes of illustration only, where like reference numerals are used to identify like elements. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated in the accompanying drawings may be employed without departing from the principles of the disclosed technology described herein.
Detailed Description
Processing content with convolutional neural networks
Computing devices or systems are used for a variety of purposes. Computing devices may provide different types of functionality. Users may utilize their computing devices to generate information, access information, and share information. In some cases, the computing device may include or correspond to a camera capable of capturing or recording media content (such as video). A video may generally comprise a set of frames that may capture or represent various scenes, items, subjects, or other objects. A set of frames taken continuously or otherwise may also capture various movements, motions, or other appearance changes over time.
Conventional methods of processing video content for various purposes such as semantic segmentation (e.g., feature recognition, such as recognizing scenes, objects, movement, etc.), optical flow, and depth recognition may be inefficient and/or unreliable. In one example, a conventional convolutional neural network may not be able to reliably identify features in a set of video frames. Furthermore, the size or resolution of the video content is typically reduced as it is processed by each different layer in the convolutional neural network, which leads to such unreliability. Conventional approaches may also be limited in that a different configuration of convolutional neural networks may be required so that the convolutional neural networks can accurately perform certain tasks, such as feature recognition, optical flow, depth recognition, etc., to name a few examples. Thus, such conventional approaches may provide sub-optimal processing of video content and may not effectively address these and other problems arising in computer technology.
The improved approach, rooted in computer technology, overcomes the foregoing and other disadvantages associated with conventional approaches specifically arising in the realm of computer technology. In various embodiments, a convolutional neural network having a set of two-dimensional convolutional layers, a set of three-dimensional convolutional layers, and a set of three-dimensional deconvolution layers may be trained and used to perform various types of tasks, including, for example, feature recognition, optical flow, and/or depth recognition. In some implementations, the convolutional neural network can be trained to predict, for any voxel in a set of video frames, a corresponding set of feature descriptors, an optical flow (e.g., a predicted direction and magnitude for the voxel), and/or a depth. In such embodiments, the set of video frames may be processed by the two-dimensional convolutional layers to output a set of first signals. The set of first signals may be processed by the three-dimensional convolutional layers to output a set of second signals. The set of second signals may be processed by the three-dimensional deconvolution layers, which are trained to upsample the size or resolution of the video content that was reduced by the two-dimensional and three-dimensional convolutional layers. The upsampled signal may be input to a softmax layer of the convolutional neural network to produce one or more outputs. The outputs may vary depending on the training of the convolutional neural network and the loss function used by the softmax layer. For example, the outputs may be feature descriptors indicating recognized scenes, objects, or actions corresponding to voxels in the set of video frames, respective optical flows for voxels in the set of video frames, or respective depths recognized for voxels in the set of video frames.
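To make the shape of these per-voxel outputs concrete, the following is a minimal illustrative sketch in PyTorch. The tensor layout (batch, channels, time, height, width) and the channel counts (e.g., two flow channels and one depth channel) are assumptions chosen for illustration and are not specified by this disclosure.

import torch

N, T, H, W = 1, 16, 100, 100                 # batch size, frames, height, width
K = 50                                       # assumed number of semantic concepts

segmentation = torch.zeros(N, K, T, H, W)    # per-voxel concept scores (feature descriptors)
optical_flow = torch.zeros(N, 2, T, H, W)    # per-voxel predicted direction/magnitude (dx, dy)
depth = torch.zeros(N, 1, T, H, W)           # per-voxel depth estimate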
Fig. 1 illustrates an example system 100 including an example content analyzer module 102 configured to analyze video content using one or more Convolutional Neural Networks (CNNs), according to an embodiment of the present disclosure. As shown in the example of fig. 1, the example content analyzer module 102 may include a content module 104 and a convolutional neural network module 106. In some cases, the example system 100 may include at least one data store 108. The components (e.g., modules, elements, etc.) shown in this and all of the figures herein are merely exemplary, and other embodiments may include additional, fewer, integrated, or different components. Some elements may not be shown so as not to obscure the relevant details.
In some implementations, the content analyzer module 102 can be implemented partially or entirely as software, hardware, or any combination thereof. In general, the modules discussed herein may be associated with software, hardware, or any combination thereof. In some implementations, one or more functions, tasks, and/or operations of a module may be implemented or performed by software routines, software processes, hardware, and/or any combination thereof. In some cases, the content analyzer module 102 may be implemented, in part or in whole, as software running on one or more computing devices or systems, such as on a user or client computing device. In one example, the content analyzer module 102, or at least a portion thereof, may be implemented as or within an application (e.g., app), program or applet, or the like, running on a user computing device or client computing system, such as the user device 710 of fig. 7. In another example, the content analyzer module 102, or at least a portion thereof, may be implemented using one or more computing devices or a system comprising one or more servers (such as a web server or cloud server). In some cases, the content analyzer module 102 may be implemented partially or entirely within a social networking system (or service), such as social networking system 730 of fig. 7, or configured to operate in conjunction with a social networking system (or service).
As shown in the example system 100, the content analyzer module 102 may be configured to communicate and/or operate with at least one data store 108. The at least one data store 108 may be configured to store and maintain various types of data. In some implementations, the at least one data store 108 can store information associated with a social networking system (e.g., social networking system 730 of fig. 7). Information associated with a social networking system may include data about users, social connections, social interactions, locations, geo-fenced areas, maps, places, events, pages, groups, posts, communications, content, feeds, account settings, privacy settings, social graphs, and various other types of data. In some implementations, the at least one data store 108 can store information associated with the user, such as user identifiers, user information, profile information, user-specified settings, content generated or published by the user, and various other types of user data. In some implementations, the at least one data store 108 can store media content, including video content, which can be obtained by the content module 104. In some cases, the at least one data store 108 may also store training data for training one or more convolutional neural networks. In one example, the training data may include video content and any tags, labels, attributes, and/or descriptions of the video content. For example, such training data may be used to train a convolutional neural network for performing semantic segmentation (e.g., predicting feature descriptors) of a set of frames. In another example, the training data may include one or more sets of ground-truth optical flow data that may be used to train a convolutional neural network for predicting optical flow for a set of frames, such as the corresponding directions and magnitudes of voxels or pixels corresponding to the set of frames. In another example, the training data may include one or more sets of ground-truth depth data that may be used to train a convolutional neural network for predicting the depth of a set of frames. It should be recognized that many variations are possible.
The content module 104 may be configured to obtain and/or receive video content to be analyzed. For example, the video content may be a set of images or video frames or a video file. In various implementations, video content may be provided (e.g., uploaded) by users and/or content providers of the social networking system. In some implementations, such video content can be stored in the data store 108, and the content module 104 can be configured to obtain the video content from the data store 108.
The convolutional neural network module 106 may be configured to analyze video content, such as the video content provided by the content module 104. In various implementations, the convolutional neural network module 106 may evaluate video content using one or more convolutional neural networks that are each configured to perform various tasks. These tasks may involve predicting, for each voxel (i.e., a pixel over time) or at least a portion of the voxels corresponding to the video content, a respective semantic segmentation (e.g., a video feature descriptor), optical flow (e.g., a direction of movement and/or a magnitude of the movement), and/or depth recognition, to name a few examples. For example, a set of video frames may capture a cat jumping over a bicycle parked in a forest. Here, the convolutional neural network module 106 may use a trained convolutional neural network to semantically segment the voxels in the set of video frames. When semantically segmenting voxels, the convolutional neural network module 106 may determine a set of feature descriptors corresponding to concepts identified in the set of video frames and corresponding probabilities that the concepts are present in the set of video frames. In this example, the convolutional neural network module 106 may determine that a probability of a first voxel in the set of video frames representing a portion of a forest scene is 84%, a probability of a second voxel in the set of video frames representing a portion of a cat object is 96%, and a probability of a third voxel in the set of video frames representing a portion of a bicycle object is 88%, to provide some examples. As described above, in some embodiments, the convolutional neural network module 106 may determine such a prediction for each voxel in the set of video frames. More details regarding the convolutional neural network module 106 will be provided below with reference to fig. 2.
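As a minimal sketch of how such per-voxel concept probabilities could be read out, the snippet below applies a softmax to made-up scores for a single voxel; the concept list, the score values, and the use of PyTorch are illustrative assumptions only.

import torch
import torch.nn.functional as F

concepts = ["forest", "cat", "bicycle"]        # hypothetical concept list
scores = torch.tensor([[1.2, 3.5, 0.4]])       # raw per-voxel scores (made up)
probs = F.softmax(scores, dim=1)               # normalized probabilities per concept
best = probs.argmax(dim=1).item()              # index of the most likely concept
print(concepts[best], probs[0, best].item())   # e.g. "cat" with probability ~0.87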
Fig. 2 illustrates an example convolutional neural network module 202 configured to analyze video content in accordance with an embodiment of the present disclosure. In some implementations, the convolutional neural network module 106 of fig. 1 can be implemented as an example convolutional neural network module 202. As shown in fig. 2, the example convolutional neural network module 202 may include a training module 204, a two-dimensional convolution module 206, a three-dimensional convolution module 208, and a three-dimensional deconvolution module 210.
In various implementations, the convolutional neural network module 202 may evaluate video content using one or more convolutional neural networks that each have been trained to perform a particular task. For example, such tasks may involve predicting, for each voxel or at least a portion of the voxels corresponding to the video content, a respective semantic segmentation, optical flow, and/or depth recognition. In some embodiments, a convolutional neural network may include one or more two-dimensional convolutional layers, one or more three-dimensional convolutional layers, one or more three-dimensional deconvolution layers, and one or more pooling layers.
The training module 204 may be used to train the convolutional neural network by training one or more two-dimensional convolutional layers that may be utilized by the two-dimensional convolution module 206, one or more three-dimensional convolutional layers that may be utilized by the three-dimensional convolution module 208, and one or more three-dimensional deconvolution layers that may be utilized by the three-dimensional deconvolution module 210. The convolutional neural network may be trained, for example, using the training module 204 to provide some type of output or prediction. In various implementations, the output may be a prediction corresponding to one or more voxels in the video content. The prediction provided for the voxels may vary according to the training of the convolutional neural network. The training module 204 may train the convolutional neural network to perform a particular task (e.g., semantic segmentation, optical flow, and/or depth recognition of voxels) using, for example, ground-truth training data available from a data store (e.g., the data store 108 of fig. 1). In some implementations, when training the convolutional neural network to predict semantic segmentations, the training module 204 can utilize training data that includes video content that has been tagged (e.g., using social tags, descriptive tags, topic tags, etc.) to identify at least scenes, objects, and/or actions. In some implementations, the training module 204 can utilize training data that includes ground-truth optical flow outputs (e.g., the direction and magnitude for each voxel) for various video content when training the convolutional neural network to predict optical flow. In some implementations, the training module 204 can utilize training data that includes ground-truth depth outputs for various video content when training the convolutional neural network to predict depth recognition.
To perform training of the layers (e.g., the two-dimensional convolutional layer(s), the three-dimensional convolutional layer(s), and/or the three-dimensional deconvolution layer(s)), the training module 204 may process any training data set through a convolutional neural network. The convolutional neural network may produce a set of corresponding outputs for the training data. These outputs may be compared to the ground-truth outputs contained in the training data to measure any inaccuracies in the outputs produced by the convolutional neural network. In various embodiments, this inaccuracy may be reduced by performing back propagation through the convolutional neural network. During back propagation, the training module 204 may adjust one or more weight values of one or more filters associated with various layers in the convolutional neural network to minimize inaccuracies. By performing back propagation over multiple training iterations, optimal or other suitable weight values may be determined for the filters of the convolutional neural network. In some cases, each weight value of a filter may correspond to a pixel value of the filter (e.g., an RGB value, a HEX code, etc.).
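A minimal training-loop sketch of this back-propagation procedure is shown below, using PyTorch and a stand-in single-layer model with dummy data; the layer, loss, optimizer, and tensor sizes are assumptions for illustration and do not reproduce the network described in this disclosure.

import torch
import torch.nn as nn

K = 5                                              # assumed number of concepts
model = nn.Conv3d(3, K, kernel_size=3, padding=1)  # stand-in for the full network
criterion = nn.CrossEntropyLoss()                  # per-voxel classification loss (assumed)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

frames = torch.randn(2, 3, 16, 32, 32)             # dummy clips (batch, channels, time, H, W)
targets = torch.randint(0, K, (2, 16, 32, 32))     # dummy ground-truth label per voxel

for _ in range(10):                                # a few training iterations
    scores = model(frames)                         # per-voxel outputs (batch, K, time, H, W)
    loss = criterion(scores, targets)              # measure inaccuracy against ground truth
    optimizer.zero_grad()
    loss.backward()                                # back propagation of the error
    optimizer.step()                               # adjust filter weight values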
As described above, the convolutional neural network module 202 may process video content using a convolutional neural network that has been trained to perform a particular task. For example, video content may be propagated forward through a convolutional neural network in an inference process to generate one or more outputs. In some embodiments, the convolutional neural network is trained to output a feature descriptor for each voxel in the video content. The feature descriptors may provide a respective confidence percentage for each concept in a list of predefined concepts (e.g., scenes, objects, actions, etc.). The confidence percentage of a concept may indicate the likelihood that the concept is identified in the video content. In some implementations, the convolutional neural network is trained to output an optical flow (e.g., direction and magnitude) for each voxel in the video content. In some embodiments, the convolutional neural network is trained to output the depth of each voxel in the video content. When processing video content, the two-dimensional convolution module 206 may be configured to apply at least one two-dimensional convolution operation to the video content using one or more two-dimensional convolutional layers. The two-dimensional convolution operation may utilize at least one two-dimensional filter to convolve the representation of the video content, which may result in a reduction in signal size of the representation of the video content. Each two-dimensional convolutional layer may apply a respective two-dimensional convolution operation to its received input signal and may generate a respective output signal, which may be input into a subsequent layer during forward propagation. In some embodiments, signals output from a two-dimensional convolutional layer are input to one or more three-dimensional convolutional layers.
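One way to picture the hand-off from the two-dimensional convolutional layers to the three-dimensional convolutional layers is sketched below: the 2D layer is assumed to run on each frame independently (time folded into the batch dimension), and its outputs are re-stacked into a five-dimensional tensor for the 3D layer. The specific reshaping and channel counts are assumptions, not details taken from this disclosure.

import torch
import torch.nn as nn

N, C, T, H, W = 1, 3, 16, 100, 100
clip = torch.randn(N, C, T, H, W)                                  # a set of video frames

conv2d = nn.Conv2d(C, 64, kernel_size=3, stride=2, padding=1)      # reduces H and W
conv3d = nn.Conv3d(64, 128, kernel_size=3, padding=1)              # convolves space and time

frames = clip.permute(0, 2, 1, 3, 4).reshape(N * T, C, H, W)       # treat frames as a batch
feat2d = conv2d(frames)                                            # (N*T, 64, 50, 50)
feat2d = feat2d.reshape(N, T, 64, 50, 50).permute(0, 2, 1, 3, 4)   # (N, 64, T, 50, 50)
feat3d = conv3d(feat2d)                                            # (N, 128, T, 50, 50)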
The three-dimensional convolution module 208 may be configured to apply at least one three-dimensional convolution operation to video content using one or more three-dimensional convolution layers. The three-dimensional convolution operation may utilize at least one three-dimensional filter to convolve the representation of the video content, which may result in a reduction in signal size of the representation of the video content. Each three-dimensional convolutional layer may apply a respective three-dimensional convolution operation to its received input signal and may generate a respective output signal, which may be input into a subsequent layer during forward propagation. In some embodiments, signals output from the three-dimensional convolutional layers are input to one or more three-dimensional deconvolution layers.
The three-dimensional deconvolution module 210 may be configured to apply at least one three-dimensional deconvolution operation to the video content using one or more three-dimensional deconvolution layers. The three-dimensional deconvolution operation may deconvolve a representation of the video content with at least one three-dimensional filter. Through training, the filter can learn the appropriate weights needed to upsample the video content. Each three-dimensional deconvolution layer may apply a respective three-dimensional deconvolution operation to its received input signal, and may generate a respective output signal that may be input into a subsequent layer during forward propagation. As described above, each convolution operation may reduce the size of the generated output signal relative to the received input signal. Thus, the output received from the three-dimensional convolution module 208 may be reduced relative to the input signal initially received by the two-dimensional convolution module 206. In various embodiments, the three-dimensional deconvolution module 210 may upsample the signal received from the three-dimensional convolution module 208 to match the input signal originally provided to the convolutional neural network module 202. In other words, the three-dimensional deconvolution layers used by the three-dimensional deconvolution module 210 may be trained to increase the graphics resolution (e.g., pixel width and height) of the signal received from the three-dimensional convolution module 208 to match the graphics resolution of the video content originally input to the convolutional neural network module 202. Such upsampling may be accomplished by training a three-dimensional deconvolution layer to evaluate the local structure of the video content, and then increasing the resolution to correspond to the resolution of the originally submitted video content.
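The upsampling role of a three-dimensional deconvolution layer can be sketched with a transposed convolution that doubles the spatial resolution of a reduced signal while leaving the temporal length unchanged; the channel counts and kernel configuration below are illustrative assumptions.

import torch
import torch.nn as nn

reduced = torch.randn(1, 256, 16, 25, 25)     # downsampled signal (batch, channels, time, H, W)

deconv = nn.ConvTranspose3d(256, 64, kernel_size=(1, 2, 2), stride=(1, 2, 2))
upsampled = deconv(reduced)                   # (1, 64, 16, 50, 50): spatial size doubled
print(upsampled.shape)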
Fig. 3 illustrates an example diagram 300 of a convolutional neural network, according to an embodiment of the present disclosure. Example diagram 300 illustrates the structure of a convolutional neural network 302 that has been trained to perform a particular task. As described above, the convolutional neural network 302 may be trained to predict the feature descriptors, optical flow, or depth of some or all voxels in the video content. As shown in fig. 3, the example convolutional neural network 302 may include a set of two-dimensional convolutional layers 310, a set of three-dimensional convolutional layers 320, and a set of three-dimensional deconvolution layers 330. The number of individual layers included in the set of two-dimensional convolutional layers 310, the set of three-dimensional convolutional layers 320, and the set of three-dimensional deconvolution layers 330 may vary depending on the implementation. In some embodiments, the set of two-dimensional convolutional layers 310 may include at least five two-dimensional convolutional layers, the set of three-dimensional convolutional layers 320 may include at least three three-dimensional convolutional layers, and the set of three-dimensional deconvolution layers 330 may include at least three three-dimensional deconvolution layers. In some embodiments, the convolutional neural network 302 may include a set of fully connected layers 340. Further, in some embodiments, the convolutional neural network 302 may include a softmax layer 350. At least a portion of each layer may be connected to at least a portion of another layer, and the transmission of information may occur through the layers.
In some cases, during forward propagation through the convolutional neural network, data describing video content 360 may be input to a first two-dimensional convolutional layer of the set of two-dimensional convolutional layers 310 to produce an output 380. Two-dimensional convolution layer 310 may apply one or more two-dimensional convolution operations to a representation of video content 360. Each two-dimensional convolution operation may utilize at least one two-dimensional filter to convolve the representation of the video content 360. Two-dimensional convolutional layer 310 may produce an output signal when performing a two-dimensional convolution operation. These output signals may be input to the next layer in the convolutional neural network 302 during forward propagation. In this example, the output signal is input to three-dimensional convolutional layer 320. Three-dimensional convolutional layer 320 may apply one or more three-dimensional convolution operations to the input signal received from two-dimensional convolutional layer 310. Each three-dimensional convolution operation may utilize at least one three-dimensional filter to convolve a representation of the input signal. Three-dimensional convolutional layer 320 may generate an output signal when performing a three-dimensional convolution operation. These output signals may be input to the next layer in the convolutional neural network 302 during forward propagation. In this example, the output signal is input to the three-dimensional deconvolution layer 330. Three-dimensional deconvolution layer 330 may apply one or more three-dimensional deconvolution operations to the input signal received from three-dimensional convolution layer 320. Each three-dimensional deconvolution operation may deconvolve a representation of the input signal with at least one three-dimensional filter. Three-dimensional deconvolution layer 330 may produce an output signal when performing a three-dimensional deconvolution operation. These output signals may be input to the next layer in the convolutional neural network 302 during forward propagation. In some embodiments, some or all of the signals output from the three-dimensional deconvolution layer are input to the set of fully connected layers 340. One or more outputs 380 from the convolutional neural network 302 may be generated based on the signals output from the fully-connected layer 340. In some implementations, the output(s) 380 can be normalized or adjusted by the softmax layer 350.
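A compact sketch of a network with this overall structure is given below. It follows the layer counts mentioned above (at least five 2D convolutional layers, three 3D convolutional layers, and three 3D deconvolution layers), but the channel counts, the pooling positions, and the modeling of the fully connected stage 340 as a per-voxel 1x1x1 convolution are assumptions made for illustration rather than details of the disclosed network.

import torch
import torch.nn as nn

class VoxelPredictionNet(nn.Module):
    def __init__(self, num_outputs=50):
        super().__init__()
        # (i) set of two-dimensional convolutional layers (five layers shown)
        self.conv2d = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
        )
        # (ii) set of three-dimensional convolutional layers (three layers shown)
        self.conv3d = nn.Sequential(
            nn.Conv3d(256, 256, 3, padding=1), nn.ReLU(),
            nn.Conv3d(256, 256, 3, padding=1), nn.ReLU(),
            nn.Conv3d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool3d((1, 2, 2)),
        )
        # (iii) set of three-dimensional deconvolution layers (three layers shown)
        self.deconv3d = nn.Sequential(
            nn.ConvTranspose3d(256, 128, (1, 2, 2), stride=(1, 2, 2)), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, (1, 2, 2), stride=(1, 2, 2)), nn.ReLU(),
            nn.ConvTranspose3d(64, 64, (1, 2, 2), stride=(1, 2, 2)), nn.ReLU(),
        )
        # "fully connected" stage modeled as a per-voxel 1x1x1 convolution (assumption)
        self.head = nn.Conv3d(64, num_outputs, 1)

    def forward(self, clip):                       # clip: (N, 3, T, H, W)
        n, c, t, h, w = clip.shape
        x = clip.permute(0, 2, 1, 3, 4).reshape(n * t, c, h, w)
        x = self.conv2d(x)                         # 2D convolutions on individual frames
        _, c2, h2, w2 = x.shape
        x = x.reshape(n, t, c2, h2, w2).permute(0, 2, 1, 3, 4)
        x = self.conv3d(x)                         # 3D convolutions across space and time
        x = self.deconv3d(x)                       # upsample back toward the input size
        return self.head(x)                        # per-voxel scores (N, K, T, H, W)

scores = VoxelPredictionNet()(torch.randn(1, 3, 16, 96, 96))
print(scores.shape)                                # torch.Size([1, 50, 16, 96, 96])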
As described above, the output 380 may describe feature descriptors, optical flow, or depth for some or all of the voxels in the video content. In some implementations, the video content 360 input to the convolutional neural network 302 is a set of frames of a certain length (e.g., 16 frames corresponding to a portion of the video content). In such an embodiment, the output 380 produced by the convolutional neural network 302 is the same length as the set of frames that are input. Thus, for example, if the set of frames input to the convolutional neural network 302 is 16 frames in length, the output 380 may provide information (e.g., semantic segmentation, optical flow, or depth) that is 16 frames in length. In some embodiments, the convolutional neural network 302 automatically provides a smooth trajectory over multiple frames without additional smoothing. The loss function utilized by, for example, the softmax layer 350 may vary depending on the task (e.g., semantic segmentation, optical flow, depth recognition) that the convolutional neural network 302 is trained to perform. In some embodiments, the resolution of the output 380 is the same as the resolution of the input video content 360.
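How the loss function could vary with the task is sketched below; the specific pairing of tasks with a cross-entropy loss for segmentation and an L1 loss for optical flow and depth is an assumption chosen for illustration.

import torch
import torch.nn as nn

def make_loss(task):
    if task == "semantic_segmentation":
        return nn.CrossEntropyLoss()           # per-voxel classification over concepts
    if task in ("optical_flow", "depth"):
        return nn.L1Loss()                     # per-voxel regression of flow or depth values
    raise ValueError(task)

criterion = make_loss("optical_flow")
pred = torch.randn(1, 2, 16, 32, 32)           # predicted (dx, dy) for each voxel
target = torch.randn(1, 2, 16, 32, 32)         # ground-truth flow for each voxel
print(criterion(pred, target))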
Fig. 4 illustrates another example diagram of a convolutional neural network 400, according to an embodiment of the present disclosure. In this example, the convolutional neural network 400 includes a first component 402 and a second component 404. The first component 402 may include a set of two-dimensional convolutional layers and a set of three-dimensional convolutional layers, as described with reference to fig. 3. The second component 404 may include a set of three-dimensional deconvolution layers, as described with reference to fig. 3. The first component 402 of the convolutional neural network 400 may receive as input a set of frames having a particular size (i.e., resolution). In this example, the set of frames has a height of 100 pixels and a width of 100 pixels (i.e., 100x100 pixels). As described above, the frame size may be reduced as convolution operations are performed by different layers of the convolutional neural network 400. Size reduction typically occurs at the pooling layers, and the amount of reduction may vary depending on the pooling parameters. In this example, after propagating the frame forward through the first layer 406 and the corresponding pooling layer, the size of the frame may be reduced from 100x100 pixels to 50x50 pixels. Similarly, after the frame is propagated forward through the second layer 408 and the corresponding pooling layer, the size of the frame may be reduced from 50x50 pixels to 25x25 pixels. Next, after propagating the frame forward through the third layer 410 and the corresponding pooling layer, the size of the frame may be reduced from 25x25 pixels to 12x12 pixels. Finally, after forward propagation of the frame through the fourth layer 412 and the corresponding pooling layer, the size of the frame may be reduced from 12x12 pixels to 6x6 pixels.
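The stepwise size reduction just described (100x100 to 50x50 to 25x25 to 12x12 to 6x6 pixels) can be reproduced with 2x2 pooling after each convolutional stage, as in the sketch below; the channel counts and the use of plain 2D stages on a single frame are illustrative assumptions.

import torch
import torch.nn as nn

x = torch.randn(1, 3, 100, 100)                                    # one 100x100 frame
stages = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 100x100 -> 50x50
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 50x50 -> 25x25
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 25x25 -> 12x12
    nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12x12 -> 6x6
)
print(stages(x).shape)                                             # torch.Size([1, 256, 6, 6])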
Next, the second component 404 can be used to perform deconvolution operations on the various outputs from the first component 402, thereby causing the various outputs to be upsampled (e.g., by doubling the size of the respective outputs). In the example of fig. 4, the output from the fifth layer 414 of the first component 402 of the convolutional neural network 400 is input to a first deconvolution layer 416, which performs one or more deconvolution operations to upsample the output from the fifth layer 414. In this example, the first deconvolution layer 416 upsamples the output from the fifth layer 414 (which is 6x6 pixels in size) into 64 channels each of 12x12 pixels in size. In some embodiments, the output from the fourth layer 412 may be input to another layer 418 that performs one or more convolution operations to produce 64 channels each of 12x12 pixels in size. Next, the output from layer 418, which is 64 channels each of 12x12 pixels in size, may be concatenated (420) with the output from the first deconvolution layer 416, which has been upsampled to 64 channels each of 12x12 pixels in size. In this example, after concatenation, there will be 128 channels each of 12x12 pixels in size. This concatenated output of 128 channels, each 12x12 pixels in size, may be input to a second deconvolution layer 422, which performs one or more deconvolution operations to upsample the concatenated output (420) from 128 channels of 12x12 pixels in size to 64 channels of 25x25 pixels in size.
In some embodiments, the output from the third layer 410 may be input to another layer 424 that performs one or more convolution operations to produce 64 channels each of 25x25 pixels in size. Next, the outputs from layer 424, which are 64 channels each of 25x25 pixels in size, may be concatenated (426) with the outputs from the second deconvolution layer 422, which have been upsampled into 64 channels each of 25x25 pixels in size. In this example, after concatenation, there will be 128 channels, each of which is 25x25 pixels in size. The concatenated output of 128 channels, each 25x25 pixels in size, may be input to a third deconvolution layer 428, which performs one or more deconvolution operations to upsample the concatenated output (426) from 128 channels of 25x25 pixels in size to 64 channels of 50x50 pixels in size.
In some embodiments, the output from the second layer 408 may be input to another layer 430 that performs one or more convolution operations to produce 64 channels each of 50x50 pixels in size. Next, the outputs from layer 430, which are 64 channels each of 50x50 pixels in size, may be concatenated (432) with the outputs from the third deconvolution layer 428, which have been upsampled to 64 channels each of 50x50 pixels in size. In this example, after concatenation, there will be 128 channels, each of which is 50x50 pixels in size. The concatenated output of 128 channels, each 50x50 pixels in size, may be input to a fourth deconvolution layer 434, which performs one or more deconvolution operations to upsample the concatenated output (432) from 128 channels, each 50x50 pixels in size, into 64 channels, each 100x100 pixels in size. The upsampled output of size 100x100 pixels may be input to the softmax layer 436 to be normalized. As described above, the loss function used by the softmax layer 436 may vary depending on the task that the convolutional neural network 400 is trained to perform. Depending on the implementation, these tasks may involve predicting, for each voxel or at least a portion of the voxels corresponding to the video content 405, a respective semantic segmentation (e.g., a video feature descriptor), optical flow (e.g., a direction of motion and/or a magnitude of the direction), or depth recognition, to name a few examples. The number of filters, pooling operations, and/or convolution operations may vary and be specified depending on the implementation. Also, the frame size, filter size, pooling size, and stride values may vary and be specified depending on the implementation. In addition, the number of descriptors may vary and be specified depending on the implementation.
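A single upsample-and-concatenate step of the kind described for fig. 4 is sketched below: a deconvolved signal (64 channels) is concatenated with a convolved skip signal (64 channels) from the downsampling path to give 128 channels, which are then deconvolved again to the next resolution. The input channel count of 256, the kernel configuration, and the use of output_padding to reach 25x25 are assumptions for illustration.

import torch
import torch.nn as nn

deep = torch.randn(1, 256, 16, 6, 6)      # e.g. output of the fifth layer 414 (6x6)
skip = torch.randn(1, 256, 16, 12, 12)    # e.g. output of the fourth layer 412 (12x12)

deconv1 = nn.ConvTranspose3d(256, 64, kernel_size=(1, 2, 2), stride=(1, 2, 2))   # 6x6 -> 12x12
lateral = nn.Conv3d(256, 64, kernel_size=3, padding=1)                           # 64 channels at 12x12
merged = torch.cat([deconv1(deep), lateral(skip)], dim=1)                        # 128 channels at 12x12

deconv2 = nn.ConvTranspose3d(128, 64, kernel_size=(1, 2, 2), stride=(1, 2, 2),
                             output_padding=(0, 1, 1))                           # 12x12 -> 25x25
print(deconv2(merged).shape)                                                     # (1, 64, 16, 25, 25)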
Fig. 5 illustrates an example method 500 for processing a set of video frames using a convolutional neural network, in accordance with an embodiment of the present disclosure. It should be understood that, unless otherwise specified, there may be additional, fewer, or alternative steps performed in a similar or alternative order or in parallel within the scope of various embodiments. At block 502, the example method 500 may obtain a set of video frames at a first resolution. At block 504, the example method 500 may process the set of video frames using a convolutional neural network to output one or more signals, the convolutional neural network including (i) a set of two-dimensional convolutional layers and (ii) a set of three-dimensional convolutional layers. The processing may cause the set of video frames to be reduced to a second resolution. At block 506, the example method 500 may process the one or more signals using a set of three-dimensional deconvolution layers of the convolutional neural network. At block 508, the example method 500 may obtain one or more outputs corresponding to the set of video frames from the convolutional neural network.
Fig. 6 illustrates another example method 600 for processing a set of video frames using a convolutional neural network, in accordance with an embodiment of the present disclosure. Again, it should be understood that there may be additional, fewer, or alternative steps performed in a similar or alternative order or in parallel within the scope of the various embodiments, unless otherwise specified. At block 602, the example method 600 may input a representation of a set of video frames to a set of two-dimensional convolutional layers to output a set of first signals. The two-dimensional convolutional layer may be trained to apply at least one two-dimensional convolution operation to a representation of video content. At block 604, method 600 may input at least a portion of the set of first signals to a set of three-dimensional convolutional layers to output a set of second signals. The three-dimensional convolution layer may be trained to apply at least one three-dimensional convolution operation to the set of first signals. At block 606, method 600 may input at least a portion of a signal produced by the set of three-dimensional convolution layers to a set of three-dimensional deconvolution layers trained to apply at least one three-dimensional deconvolution operation to a portion of the signal. At block 608, the method 600 may input at least a portion of the signal generated by the set of three-dimensional convolutional layers to a softmax layer of a convolutional neural network to generate one or more outputs.
It is contemplated that there may be many other uses, applications, and/or variations associated with various embodiments of the present disclosure. For example, in some cases, a user may select whether to choose to use the disclosed techniques. The disclosed techniques may also ensure that various privacy settings and preferences are maintained and may prevent private information from being compromised. In another example, various embodiments of the present disclosure may learn, improve, and/or be refined over time.
Social networking System-exemplary embodiments
Fig. 7 illustrates a network diagram of an example system 700 that can be utilized in various scenarios, according to embodiments of the present disclosure. The system 700 includes one or more user devices 710, one or more external systems 720, a social networking system (or service) 730, and a network 750. In one embodiment, the social networking services, providers, and/or systems associated with the embodiments described above may be implemented as social networking system 730. For illustrative purposes, the embodiment of the system 700 shown in fig. 7 includes a single external system 720 and a single user device 710. However, in other embodiments, the system 700 may include more user devices 710 and/or more external systems 720. In some embodiments, the social networking system 730 is operated by a social networking provider, while the external systems 720 are separate from the social networking system 730, as they may be operated by different entities. However, in various embodiments, the social networking system 730 and the external system 720 operate cooperatively to provide social networking services to users (or members) of the social networking system 730. In this sense, the social networking system 730 provides a platform or backbone network, and other systems (e.g., external system 720) may be used to provide social networking services and functionality to users on the Internet.
The user device 710 includes one or more computing devices (or systems) that can receive input from a user and send and receive data via the network 750. In one embodiment, the user device 710 is a conventional computer system executing, for example, a Microsoft Windows-compatible Operating System (OS), Apple OS X, and/or a Linux distribution. In another embodiment, the user device 710 may be a computing device or a device with computer functionality, such as a smart phone, a tablet, a Personal Digital Assistant (PDA), a mobile phone, a laptop, a wearable device (e.g., a pair of glasses, a watch, a bracelet, etc.), a camera, an appliance, and so forth. The user device 710 is configured to communicate via the network 750. The user device 710 may execute an application, such as a browser application that allows a user of the user device 710 to interact with the social networking system 730. In another embodiment, the user device 710 interacts with the social networking system 730 through an Application Programming Interface (API) provided by the native operating system of the user device 710, such as iOS and ANDROID. The user device 710 is configured to communicate with external systems 720 and the social networking system 730 via the network 750 using a wired and/or wireless communication system, which network 750 may include any combination of local and/or wide-area networks.
In one embodiment, network 750 uses standard communication technologies and protocols. Thus, network 750 may include links using technologies such as Ethernet, 702.11, Worldwide Interoperability for Microwave Access (WiMAX), 3G, 4G, CDMA, GSM, LTE, Digital Subscriber Line (DSL), and so forth. Similarly, networking protocols used on network 750 may include multiprotocol label switching (MPLS), transmission control protocol/internet protocol (TCP/IP), User Datagram Protocol (UDP), hypertext transfer protocol (HTTP), Simple Mail Transfer Protocol (SMTP), File Transfer Protocol (FTP), and so forth. Data exchanged over network 750 may be represented using technologies and/or formats including hypertext markup language (HTML) and extensible markup language (XML). In addition, all or some of the links may be encrypted using conventional encryption techniques, such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Internet protocol security (IPsec).
In one embodiment, the user device 710 may display content from the external system 720 and/or the social networking system 730 by processing markup language documents 714 received from the external system 720 and the social networking system 730 using the browser application 712. The markup language document 714 identifies content and one or more instructions describing the formatting or presentation of the content. By executing the instructions included in the markup language document 714, the browser application 712 displays the identified content using the format or presentation described by the markup language document 714. For example, the markup language document 714 includes instructions for generating and displaying a web page having a plurality of frames including text and/or image data retrieved from the external system 720 and the social networking system 730. In various embodiments, the markup language document 714 includes a data file containing extensible markup language (XML) data, extensible hypertext markup language (XHTML) data, or other markup language data. Additionally, the markup language document 714 can include JavaScript object notation (JSON) data, JSON with padding (JSONP) data, and JavaScript data to facilitate data exchange between the external system 720 and the user device 710. The browser application 712 on the user device 710 may use a JavaScript compiler to decode the markup language document 714.
The markup language document 714 can also include or link to an application or application framework such as a FLASH or Unity application, a SilverLight application framework, or the like.
In one embodiment, the user device 710 also includes one or more plug-ins 716 that include data indicating whether the user of the user device 710 is logged into the social networking system 730, which may enable modification of data transmitted from the social networking system 730 to the user device 710.
The external system 720 includes one or more web servers including one or more web pages 722a, 722b transmitted to the user device 710 using the network 750. The external system 720 is separate from the social networking system 730. For example, external system 720 is associated with a first domain, while social-networking system 730 is associated with a separate social-networking domain. The web pages 722a, 722b included in the external system 720 include a markup language document 714, the markup language document 714 identifying content and including instructions specifying formatting or presentation of the identified content.
Social-networking system 730 includes one or more computing devices for a social network that includes multiple users, and provides users of the social network with the ability to communicate and interact with other users of the social network. In some cases, the social network may be represented by a graph, i.e., a data structure that includes edges and nodes. Other data structures may also be used to represent a social network, including but not limited to databases, objects, classes, elements, files, or any other data structure. The social networking system 730 may be managed, hosted, or controlled by the operator. The operator of the social networking system 730 may be a human, an automated application, or a series of applications for managing content, adjusting policies, and collecting usage metrics within the social networking system 730. Any type of operator may be used.
A user may join the social networking system 730 and then add connections to any number of other users of the social networking system 730 that they wish to connect to. As used herein, the term "friend" refers to any other user of the social networking system 730 to whom the user has established a connection, association, or relationship through the social networking system 730. For example, in one embodiment, if the users in the social networking system 730 are represented as nodes in a social graph, the term "friend" may refer to an edge formed between and directly connecting two user nodes.
The connections may be added explicitly by the user or may be automatically created by the social networking system 730 based on common characteristics of the users (e.g., users who are alumni of the same educational institution). For example, a first user explicitly selects a particular other user as a friend. Connections in the social networking system 730 are usually in both directions, but need not be, so the terms "user" and "friend" depend on the frame of reference. The connections between users of the social networking system 730 are typically bilateral ("two-way") or "mutual," but connections may also be unilateral or "one-way." For example, if Bob and Joe are both users of the social networking system 730 and connected to each other, Bob and Joe are each other's connections. On the other hand, if Bob wishes to connect to Joe to view data transmitted by Joe to the social networking system 730, but Joe does not wish to form a mutual connection, a one-way connection may be established. The connection between users may be a direct connection; however, some embodiments of the social networking system 730 allow the connection to be indirect via one or more levels of connection or degrees of separation.
In addition to establishing and maintaining connections between users and allowing interactions between users, the social networking system 730 also provides users with the ability to take actions on various types of items supported by the social networking system 730. These items may include groups or networks to which users of the social networking system 730 may belong (i.e., social networks of people, entities, and concepts), events or calendar entries that may be of interest to a user, computer-based applications that a user may use via the social networking system 730, transactions that allow users to buy or sell items via or through services provided by the social networking system 730, and interactions with advertisements that a user may perform on or off the social networking system 730. These are just a few examples of the items upon which a user may act on the social networking system 730, and many others are possible. A user may interact with anything that can be presented in the social networking system 730 or in the external system 720, separate from the social networking system 730, or coupled to the social networking system 730 via the network 750.
Social-networking system 730 is also capable of linking various entities. For example, the social networking system 730 enables users to interact with each other and with external systems 720 or other entities through APIs, web services, or other communication channels. Social-networking system 730 generates and maintains a "social graph" that includes a plurality of nodes interconnected by a plurality of edges. Each node in the social graph may represent an entity that may act on and/or be acted upon by another node. The social graph may include various types of nodes. Examples of node types include users, non-human entities, content items, web pages, groups, activities, messages, concepts, and any other things that may be represented by objects in social-networking system 730. An edge of two nodes in a social graph may represent a particular type of connection or association between the two nodes, which may result from a node relationship or from an action performed by one of the nodes on the other node. In some cases, the edges between nodes may be weighted. The weight of an edge may represent an attribute associated with the edge, such as the strength of the connection or the association between nodes. Different types of edges may provide different weights. For example, edges created when one user "likes" another user may be given one weight, while edges created when a user is a friend with another user may be given a different weight.
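As an illustration of the node-and-edge structure described above, the following is a minimal sketch of a social graph with typed, weighted edges and both mutual and one-way connections. It is written in Python for clarity; the node types, edge types, and weight values are hypothetical examples and are not taken from the patent.

```python
# Minimal sketch of a social graph with typed, weighted edges (hypothetical illustration).
from collections import defaultdict

class SocialGraph:
    def __init__(self):
        self.nodes = {}                 # node_id -> {"type": ..., "data": ...}
        self.edges = defaultdict(dict)  # src_id -> {dst_id: {"type": ..., "weight": ...}}

    def add_node(self, node_id, node_type, data=None):
        self.nodes[node_id] = {"type": node_type, "data": data or {}}

    def add_edge(self, src, dst, edge_type, weight=1.0, mutual=True):
        # A mutual ("two-way") connection is stored in both directions;
        # a one-way connection (e.g., a "like") is stored only once.
        self.edges[src][dst] = {"type": edge_type, "weight": weight}
        if mutual:
            self.edges[dst][src] = {"type": edge_type, "weight": weight}

graph = SocialGraph()
graph.add_node("bob", "user")
graph.add_node("joe", "user")
graph.add_node("page_1", "page")
graph.add_edge("bob", "joe", "friend", weight=0.8)                  # friendship edge
graph.add_edge("bob", "page_1", "like", weight=0.3, mutual=False)   # "like" edge with a different weight
```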
As an example, when a first user identifies a second user as a friend, an edge in a social graph is generated connecting a node representing the first user and a second node representing the second user. When various nodes are associated or interact with each other, the social networking system 730 modifies the edges connecting the various nodes to reflect the relationships and interactions.
The social networking system 730 also includes user-generated content that enhances user interaction with the social networking system 730. User-generated content may include anything a user may add, upload, send, or "post" to social-networking system 730. For example, the user transmits a post from the user device 710 to the social networking system 730. Posts may include data such as status updates or other textual data, location information, images such as photos, videos, links, music, or other similar data and/or media. Content may also be added to the social networking system 730 by a third party. The content "item" is represented as an object in the social networking system 730. In this manner, users of the social networking system 730 are encouraged to publish text and content items of various types of media to communicate with each other through various communication channels. Such communication increases the interaction of users with each other and increases the frequency with which users interact with social-networking system 730.
Social-networking system 730 includes web server 732, API request server 734, user profile store 736, connection store 738, action recorder 740, activity log 742, and authorization server 744. In one embodiment of the invention, social-networking system 730 may include additional, fewer, or different components for various applications. Other components such as network interfaces, security mechanisms, load balancers, failover servers, management and network operations consoles, etc., are not shown so as not to obscure the details of the system.
The user profile store 736 maintains information about user accounts, including biographic, demographic, and other types of descriptive information, such as work experience, educational history, hobbies or preferences, location, and the like, that has been declared by the user or inferred by the social networking system 730. This information is stored in the user profile store 736 such that each user is uniquely identified. The social networking system 730 also stores data describing one or more connections between different users in the connection store 738. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, or educational history. In addition, the social networking system 730 includes user-defined connections between different users, allowing users to specify their relationships with other users. For example, user-defined connections allow users to generate relationships with other users that parallel the users' real-life relationships, such as friends, co-workers, partners, and so forth. Users may select from predefined types of connections, or define their own connection types as needed. Connections with other nodes in the social networking system 730, such as non-human entities, buckets, cluster centers, images, interests, pages, external systems, concepts, and the like, are also stored in the connection store 738.
The social networking system 730 maintains data about objects with which a user may interact. To maintain this data, the user profile store 736 and the connection store 738 store instances of the corresponding type of objects maintained by the social networking system 730. Each object type has information fields that are suitable for storing information appropriate to the type of object. For example, the user profile store 736 contains data structures with fields suitable for describing a user's account and information related to that account. When a new object of a particular type is created, the social networking system 730 initializes a new data structure of the corresponding type, assigns a unique object identifier to it, and begins to add data to the object as needed. This may occur, for example, when a user becomes a user of the social networking system 730: the social networking system 730 generates a new instance of a user profile in the user profile store 736, assigns a unique identifier to the user account, and begins to populate the fields of the user account with information provided by the user.
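The object-initialization flow just described can be sketched roughly as follows. The store layout, object types, and field names below are hypothetical illustrations rather than the system's actual data model.

```python
import itertools

# Hypothetical sketch: create a typed object, assign a unique identifier, populate its fields.
_id_counter = itertools.count(1)

OBJECT_FIELDS = {
    "user_profile": ["name", "work_experience", "education", "location"],
    "image": ["uploader", "caption", "tagged_users"],
}

def create_object(store, object_type, **provided):
    object_id = next(_id_counter)                        # assign a unique object identifier
    fields = {f: provided.get(f) for f in OBJECT_FIELDS[object_type]}
    store[object_id] = {"type": object_type, **fields}   # begin adding data to the object as needed
    return object_id

user_profile_store = {}
uid = create_object(user_profile_store, "user_profile", name="Bob", location="California")
```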
The connection store 738 comprises data structures suitable for describing a user's connections to other users, connections to external systems 720, or connections to other entities. The connection store 738 may also associate a connection type with a user's connection, which may be used in conjunction with the user's privacy settings to manage access to information about the user. In an embodiment of the present invention, the user profile store 736 and the connection store 738 may be implemented as a federated database.
The data stored in the connection store 738, the user profile store 736, and the activity log 742 enables the social networking system 730 to generate a social graph that uses nodes to identify various objects and edges connecting the nodes to identify relationships between different objects. For example, if a first user establishes a connection with a second user in the social networking system 730, the user accounts of the first user and the second user from the user profile store 736 may act as nodes in the social graph. The connection between the first user and the second user stored by the connection store 738 is an edge between the nodes associated with the first user and the second user. Continuing this example, the second user may then send the first user a message within the social networking system 730. The action of sending the message, which may be stored, is another edge between the two nodes in the social graph representing the first user and the second user. Additionally, the message itself may be identified and included in the social graph as another node connected to the nodes representing the first user and the second user.
In another example, a first user may tag a second user in an image maintained by the social-networking system 730 (or alternatively, in an image maintained by another system external to the social-networking system 730). The images themselves may be represented as nodes in the social networking system 730. The tagging action may create an edge between the first user and the second user and an edge between each user and an image, which is also a node in the social graph. In yet another example, if the user confirms attending the event, the user and event are nodes obtained from the user profile store 736, where the attendance of the event is an edge between the nodes that can be retrieved from the activity log 742. By generating and maintaining a social graph, the social networking system 730 includes data that describes many different types of objects and the interactions and connections between these objects, thereby providing a rich source of socially relevant information.
The web server 732 links the social networking system 730 to one or more user devices 710 and/or one or more external systems 720 via the network 750. Web server 732 provides services for Web pages and other web-related content (such as Java, JavaScript, Flash, XML, etc.). The web server 732 may include a mail server or other messaging functionality for receiving and routing messages between the social networking system 730 and one or more user devices 710. The messages may be instant messages, queued messages (e.g., email), text and SMS messages, or any other suitable message format.
The API request server 734 allows one or more external systems 720 and user devices 710 to access information from the social networking system 730 by calling one or more API functions. The API request server 734 may also allow external systems 720 to send information to the social networking system 730 by calling APIs. In one embodiment, the external system 720 sends an API request to the social networking system 730 via the network 750, and the API request server 734 receives the API request. The API request server 734 processes the request by calling the API associated with the API request to generate an appropriate response, which the API request server 734 communicates to the external system 720 via the network 750. For example, responsive to an API request, the API request server 734 collects data associated with a user, such as the user's connections that have logged into the external system 720, and communicates the collected data to the external system 720. In another embodiment, the user device 710 communicates with the social networking system 730 via APIs in the same manner as external systems 720.
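A rough sketch of this request flow is shown below; the function names, request format, and registry are hypothetical and are only meant to illustrate how an API request could be dispatched and answered.

```python
# Hypothetical sketch of an API request server dispatching calls from external systems.
API_FUNCTIONS = {}

def api(name):
    def register(fn):
        API_FUNCTIONS[name] = fn
        return fn
    return register

@api("get_user_connections")
def get_user_connections(params, connection_store):
    # Collect data associated with a user, e.g., the user's connections.
    user_id = params["user_id"]
    return {"user": user_id, "connections": connection_store.get(user_id, [])}

def handle_api_request(request, connection_store):
    fn = API_FUNCTIONS.get(request["function"])
    if fn is None:
        return {"error": "unknown API function"}
    return fn(request.get("params", {}), connection_store)  # response is sent back over the network

connection_store = {"bob": ["joe"]}
print(handle_api_request({"function": "get_user_connections", "params": {"user_id": "bob"}}, connection_store))
```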
Action recorder 740 can receive communications from the web server 732 about user actions on and/or off the social networking system 730. The action logger 740 populates the activity log 742 with information about user actions, enabling the social networking system 730 to discover various actions taken by its users within the social networking system 730 and outside of the social networking system 730. Any action that a particular user takes with respect to another node on the social networking system 730 may be associated with each user's account, through information maintained in the activity log 742 or in a similar database or other data repository. Examples of actions taken by a user within the social networking system 730 that are identified and stored may include, for example, adding a connection to another user, sending a message to another user, reading a message from another user, viewing content associated with another user, attending an event posted by another user, posting an image, attempting to post an image, or other actions interacting with another user or another object. When a user takes an action within the social networking system 730, the action is recorded in the activity log 742. In one embodiment, the social networking system 730 maintains the activity log 742 as a database of entries. When an action is taken within the social networking system 730, an entry for the action is added to the activity log 742. The activity log 742 may be referred to as an action log.
Additionally, user actions may be associated with concepts and actions that occur within entities external to the social-networking system 730 (such as an external system 720 that is separate from the social-networking system 730). For example, action recorder 740 may receive data from web server 732 describing user interactions with external system 720. In this example, the external system 720 reports the user's interactions according to structured actions and objects in the social graph.
Other examples of actions that a user interacts with external system 720 include the user expressing an interest in external system 720 or another entity, the user posting a comment to social-networking system 730 discussing external system 720 or web page 722a within external system 720, the user posting a Uniform Resource Locator (URL) or other identifier associated with external system 720 to social-networking system 730, the user participating in an event associated with external system 720, or any other action taken by a user related to external system 720. Thus, the activity log 742 may include actions that describe interactions between users of the social networking system 730 and the external systems 720 that are separate from the social networking system 730.
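The action-logging flow described in the preceding paragraphs might look roughly like the sketch below; the entry fields and action names are hypothetical examples.

```python
import time

activity_log = []   # in one embodiment this is maintained as a database of entries

def log_action(actor_id, action_type, target_id=None, source="internal"):
    # Record an action taken by a user, either within the social networking system
    # or reported by an external system.
    entry = {
        "actor": actor_id,
        "action": action_type,        # e.g., "add_connection", "send_message", "post_image"
        "target": target_id,
        "source": source,             # "internal" or an external system identifier
        "timestamp": time.time(),
    }
    activity_log.append(entry)
    return entry

log_action("bob", "send_message", target_id="joe")
log_action("bob", "express_interest", target_id="web_page_722a", source="external_system_720")
```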
The authorization server 744 enforces one or more privacy settings for the users of the social networking system 730. The privacy settings of a user determine how particular information associated with the user may be shared. The privacy settings include a specification of the particular information related to the user, as well as a specification of one or more entities with which the information may be shared. Examples of entities with which information may be shared may include other users, applications, external systems 720, or any entity that may potentially access the information. The information that may be shared by the user includes user account information such as profile photos, phone numbers associated with the user, connections of the user, actions taken by the user (such as adding a connection, changing user profile information), and so forth.
The privacy settings specification may be provided at different levels of granularity. For example, the privacy settings may identify particular information to be shared with other users; the privacy settings may identify a work phone number or a specific set of related information, such as personal information including a profile photo, a home phone number, and status. Alternatively, the privacy settings may apply to all information related to the user. The specification of the set of entities that may access particular information may also be specified at various levels of granularity. The various groups of entities with which information may be shared may include, for example, all friends of the user, all friends of friends, all applications, or all external systems 720. One embodiment allows the specification of the set of entities to include an enumeration of the entities. For example, the user may provide a list of external systems 720 that are allowed to access particular information. Another embodiment allows the specification to include a set of entities together with exceptions that are not allowed to access the information. For example, the user may allow all external systems 720 to access the user's work information, but specify a list of external systems 720 that are not allowed to access the work information. Some embodiments refer to the list of exceptions that are not allowed to access specific information as a "blocklist". External systems 720 belonging to the blocklist specified by the user are blocked from accessing information specified in the privacy settings. Various combinations of the granularity of specifying information and the granularity of specifying the entities with which information is shared are possible. For example, all personal information may be shared with friends, while all work information may be shared with friends of friends.
The authorization server 744 contains logic for determining whether certain information associated with the user may be accessed by the user's friends, external systems 720, and/or other applications and entities. The external system 720 may require authorization from the authorization server 744 to access the user's more private and sensitive information, such as the user's work phone number. Based on the privacy settings of the user, the authorization server 744 determines whether to allow another user, the external system 720, an application, or another entity to access information associated with the user, including information about actions taken by the user.
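The authorization decision described above can be illustrated with the following sketch; the setting structure, group names, and blocklist handling are hypothetical and stand in for whatever representation an actual implementation would use.

```python
# Hypothetical sketch of a privacy check with entity groups, enumerated entities, and a blocklist.
def may_access(privacy_setting, requester, friends_of_user):
    allowed_groups = privacy_setting.get("allowed_groups", set())    # e.g., {"friends", "all_external_systems"}
    allowed_entities = privacy_setting.get("allowed_entities", set())
    blocklist = privacy_setting.get("blocklist", set())

    if requester in blocklist:                # exceptions are always denied
        return False
    if requester in allowed_entities:         # explicitly enumerated entities
        return True
    if "friends" in allowed_groups and requester in friends_of_user:
        return True
    if "all_external_systems" in allowed_groups and requester.startswith("external_"):
        return True
    return False

work_phone_setting = {
    "allowed_groups": {"all_external_systems"},
    "blocklist": {"external_blocked_system"},
}
print(may_access(work_phone_setting, "external_720", friends_of_user={"joe"}))           # True
print(may_access(work_phone_setting, "external_blocked_system", friends_of_user=set()))  # False
```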
In some implementations, the social networking system 730 may include a content analyzer module 746. The content analyzer module 746 may be implemented, for example, as the content analyzer module 102 of FIG. 1. As previously mentioned, it should be understood that there can be many variations or other possibilities. Other features of the content analyzer module 746 are discussed herein with reference to the content analyzer module 102.
Hardware implementation
The processes and features described above may be implemented by a wide variety of machine and computer system architectures and in a wide variety of network and computing environments. FIG. 8 illustrates an example of a computer system 800 that may be used to implement one or more of the embodiments described herein, according to an embodiment of the invention. The computer system 800 includes sets of instructions for causing the computer system 800 to perform the processes and features discussed herein. The computer system 800 may be connected (e.g., networked) to other machines. In a networked deployment, the computer system 800 may operate in the capacity of a server machine or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. In an embodiment of the invention, the computer system 800 may be, or may be a component of, the social networking system 730, the user device 710, or the external system 720. In an embodiment of the invention, the computer system 800 may be one server among many that constitute all or part of the social networking system 730.
The computer system 800 includes a processor 802, a cache 804, and one or more executable modules and drivers, stored on a computer-readable medium, for the processes and features described herein. In addition, the computer system 800 includes a high performance input/output (I/O) bus 806 and a standard I/O bus 808. A host bridge 810 couples the processor 802 to the high performance I/O bus 806, while an I/O bus bridge 812 couples the two buses 806 and 808 to each other. A system memory 814 and one or more network interfaces 816 are coupled to the high performance I/O bus 806. The computer system 800 may further include video memory and a display device coupled to the video memory (not shown). Mass storage 818 and I/O ports 820 are coupled to the standard I/O bus 808. The computer system 800 may optionally include a keyboard and pointing device, a display device, or other input/output devices (not shown) coupled to the standard I/O bus 808. Collectively, these elements are intended to represent a broad category of computer hardware systems, including but not limited to computer systems based on the x86-compatible processors manufactured by Intel Corporation of Santa Clara, California, the x86-compatible processors manufactured by Advanced Micro Devices (AMD), Inc. of Sunnyvale, California, and any other suitable processor.
The operating system manages and controls the operation of the computer system 800, including the input and output of data to and from software applications (not shown). The operating system provides an interface between the software applications executing on the system and the hardware components of the system. Any suitable operating system may be used, such as the LINUX operating system, the Apple Macintosh operating system available from Apple Computer, Inc. of Cupertino, California, the UNIX operating system, the iPhone operating system, a BSD operating system, and the like. Other implementations are also possible.
The elements of the computer system 800 are described in greater detail below. In particular, the network interface 816 provides communication between the computer system 800 and any of a wide range of networks, such as an Ethernet (e.g., IEEE 802.3) network, a backplane, and so forth. The mass storage 818 provides permanent storage for the data and programming instructions to perform the processes and features described above as implemented by the respective computing systems described above, whereas the system memory 814 (e.g., DRAM) provides temporary storage for the data and programming instructions when executed by the processor 802. The I/O ports 820 may be one or more serial and/or parallel communication ports that provide communication with additional peripheral devices, which may be coupled to the computer system 800.
The computer system 800 may include a variety of system architectures, and various components of the computer system 800 may be rearranged. For example, the cache 804 may be on-chip with the processor 802. Alternatively, the cache 804 and the processor 802 may be packaged together as a "processor module," with the processor 802 being referred to as the "processor core." Furthermore, certain embodiments of the invention may neither require nor include all of the above components. For example, peripheral devices coupled to the standard I/O bus 808 may be coupled to the high performance I/O bus 806. In addition, in some embodiments, only a single bus may exist, with the components of the computer system 800 coupled to that single bus. Furthermore, the computer system 800 may include additional components, such as additional processors, storage devices, or memories.
In general, the processes and features described herein may be implemented as part of an operating system or a specific application, component, program, object, module, or series of instructions referred to as "programs." For example, one or more programs may be used to execute specific processes described herein. The programs typically comprise one or more instructions in various memory and storage devices in the computer system 800 that, when read and executed by one or more processors, cause the computer system 800 to perform operations to execute the processes and features described herein. The processes and features described herein may be implemented in software, firmware, hardware (e.g., an application specific integrated circuit), or any combination thereof.
In one implementation, the processes and features described herein are implemented as a series of executable modules executed separately by the computer system 800 or collectively in a distributed computing environment. The aforementioned modules may be implemented by hardware, executable modules stored on a computer-readable medium (or machine-readable medium), or a combination of both. For example, the modules may comprise a plurality or series of instructions to be executed by a processor in a hardware system, such as processor 802. Initially, the series of instructions may be stored on a storage device, such as mass storage 818. However, the series of instructions may be stored on any suitable computer readable storage medium. Further, the series of instructions need not be stored locally, and may be received from a remote storage device (such as a server on a network) via network interface 816. The instructions are copied from the storage device (such as mass storage device 818) into system memory 814 and then accessed and executed by processor 802. In various implementations, one or more modules may be executed by a processor or multiple processors in one or more locations, such as multiple servers in a parallel processing environment.
Examples of computer-readable media include, but are not limited to, recordable-type media, such as volatile and non-volatile memory devices; solid state memories; floppy and other removable disks; hard disk drives; magnetic media; optical disks (e.g., compact disk read-only memory (CD-ROM), digital versatile disks (DVD)); other similar non-transitory (or transitory), tangible (or non-tangible) storage media; or any type of medium suitable for storing, encoding, or carrying a series of instructions for execution by the computer system 800 to perform any one or more of the processes and features described herein.
For purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the description. It will be apparent, however, to one skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In some instances, modules, structures, processes, features, and devices are shown in block diagram form in order to avoid obscuring the description. In other instances, functional block diagrams and flow diagrams are shown representing data and logic flows. The components of the block diagrams and flow diagrams (e.g., modules, blocks, structures, devices, features, etc.) may be combined, separated, removed, reordered, and replaced in a different manner than explicitly described and depicted herein.
Reference in this specification to "one embodiment," "an embodiment," "another embodiment," "a series of embodiments," "some embodiments," "various embodiments," or the like means that a particular feature, design, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of, for example, the phrases "in one embodiment" or "in an embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, whether or not there is express reference to an "embodiment" or the like, various features are described that may be variously combined and included in some embodiments, but also variously omitted in other embodiments. Similarly, various features are described that may be preferences or requirements for some embodiments, but not for other embodiments.
The language used herein has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims (20)

1. A computer-implemented method, comprising:
obtaining, by a computing system, a set of video frames of a length at a first resolution;
processing, by the computing system, the set of video frames of the length using a convolutional neural network to output one or more signals corresponding to the set of video frames, the convolutional neural network comprising (i) a set of two-dimensional convolutional layers, (ii) a set of three-dimensional convolutional layers, and (iii) a set of three-dimensional deconvolution layers, wherein the three-dimensional convolutional layers reduce the set of video frames to a second resolution, and wherein the three-dimensional deconvolution layers upsample the set of video frames; and
obtaining, by the computing system from the convolutional neural network, one or more output signals corresponding to the length of the set of video frames, the output signals providing information corresponding to one or more voxels in the set of video frames.
2. The computer-implemented method of claim 1, wherein obtaining one or more outputs corresponding to the set of video frames further comprises:
obtaining, by the computing system, one or more respective feature descriptors for one or more voxels in the set of video frames, wherein each of the feature descriptors is indicative of an identified scene, object, or action.
3. The computer-implemented method of claim 1, wherein obtaining one or more outputs corresponding to the set of video frames further comprises:
obtaining, by the computing system, respective optical flows for one or more voxels in the set of video frames, wherein the optical flows for the voxels describe at least a predicted direction and magnitude for the voxels.
4. The computer-implemented method of claim 1, wherein obtaining one or more outputs corresponding to the set of video frames further comprises:
obtaining, by the computing system, respective depth measurements for one or more voxels in the set of video frames.
5. The computer-implemented method of claim 1, wherein processing the one or more signals using the set of three-dimensional deconvolution layers of the convolutional neural network further comprises:
inputting, by the computing system, at least a portion of a signal produced by the set of three-dimensional convolution layers to the set of three-dimensional deconvolution layers, the three-dimensional deconvolution layers being trained to apply at least one three-dimensional deconvolution operation to a portion of the signal.
6. The computer-implemented method of claim 5, wherein the at least one three-dimensional deconvolution operation deconvolves the portion of the signal based at least on one or more three-dimensional filters, and wherein the three-dimensional deconvolution operation causes a representation of the video content to increase in signal size.
7. The computer-implemented method of claim 1, wherein processing the set of video frames using the convolutional neural network to output one or more signals, the convolutional neural network comprising (i) a set of two-dimensional convolutional layers and (ii) a set of three-dimensional convolutional layers, further comprises:
inputting, by the computing system, representations of the set of video frames to the set of two-dimensional convolutional layers trained to apply at least one two-dimensional convolution operation to representations of video content to output a set of first signals; and
inputting, by the computing system, at least a portion of the set of first signals to the set of three-dimensional convolutional layers to output a set of second signals, the three-dimensional convolutional layers being trained to apply at least one three-dimensional convolution operation to the set of first signals.
8. The computer-implemented method of claim 7, wherein the at least one two-dimensional convolution operation convolves the representation of the video content based at least on one or more two-dimensional filters, and wherein the two-dimensional convolution operation causes the representation of the video content to decrease in signal size.
9. The computer-implemented method of claim 7, wherein the at least one three-dimensional convolution operation convolves the set of first signals based at least on one or more three-dimensional filters, and wherein the three-dimensional convolution operation causes the representation of the video content to decrease in signal size.
10. The computer-implemented method of claim 1, wherein the set of video frames includes more than two video frames.
11. A system for processing content using a convolutional neural network, comprising:
at least one processor; and
a memory storing instructions that, when executed by the at least one processor, cause the system to perform:
obtaining a set of video frames of a certain length at a first resolution;
processing the set of video frames of the length using a convolutional neural network to output one or more signals corresponding to the set of video frames, the convolutional neural network comprising (i) a set of two-dimensional convolutional layers, (ii) a set of three-dimensional convolutional layers, and (iii) a set of three-dimensional deconvolution layers, wherein the three-dimensional convolutional layers reduce the set of video frames to a second resolution, and wherein the three-dimensional deconvolution layers upsample the set of video frames; and
obtaining one or more output signals corresponding to the length of the set of video frames from the convolutional neural network, the output signals providing information corresponding to one or more voxels in the set of video frames.
12. The system of claim 11, wherein obtaining the one or more outputs corresponding to the set of video frames further causes the system to perform:
obtaining one or more respective feature descriptors for one or more voxels in the set of video frames, wherein each of the feature descriptors is indicative of an identified scene, object, or action.
13. The system of claim 11, wherein obtaining the one or more outputs corresponding to the set of video frames further causes the system to perform:
obtaining respective optical flows for one or more voxels in the set of video frames, wherein the optical flow for the voxel describes at least a predicted direction and magnitude for the voxel.
14. The system of claim 11, wherein obtaining the one or more outputs corresponding to the set of video frames further causes the system to perform:
obtaining respective optical flows for one or more voxels in the set of video frames, wherein the optical flow for the voxel describes at least a predicted direction and magnitude for the voxel.
15. The system of claim 11, wherein processing the one or more signals using a set of three-dimensional deconvolution layers of the convolutional neural network further causes the system to perform:
inputting, by a computing system, at least a portion of a signal produced by the set of three-dimensional convolution layers to the set of three-dimensional deconvolution layers, the three-dimensional deconvolution layers being trained to apply at least one three-dimensional deconvolution operation to a portion of the signal.
16. A non-transitory computer-readable storage medium comprising instructions that, when executed by at least one processor of a computing system, cause the computing system to perform:
obtaining a set of video frames of a certain length at a first resolution;
processing the set of video frames of the length using a convolutional neural network to output one or more signals corresponding to the set of video frames, the convolutional neural network comprising (i) a set of two-dimensional convolutional layers, (ii) a set of three-dimensional convolutional layers, and (iii) a set of three-dimensional deconvolution layers, wherein the three-dimensional convolutional layers reduce the set of video frames to a second resolution, and wherein the three-dimensional deconvolution layers upsample the set of video frames; and
obtaining one or more output signals corresponding to the length of the set of video frames from the convolutional neural network, the output signals providing information corresponding to one or more voxels in the set of video frames.
17. The non-transitory computer-readable storage medium of claim 16, wherein obtaining the one or more outputs corresponding to the set of video frames further causes the computing system to perform:
obtaining one or more respective feature descriptors for one or more voxels in the set of video frames, wherein each of the feature descriptors is indicative of an identified scene, object, or action.
18. The non-transitory computer-readable storage medium of claim 16, wherein obtaining the one or more outputs corresponding to the set of video frames further causes the computing system to perform:
obtaining respective optical flows for one or more voxels in the set of video frames, wherein the optical flow for the voxel describes at least a predicted direction and magnitude for the voxel.
19. The non-transitory computer-readable storage medium of claim 16, wherein obtaining the one or more outputs corresponding to the set of video frames further causes the computing system to perform:
obtaining respective optical flows for one or more voxels in the set of video frames, wherein the optical flow for the voxel describes at least a predicted direction and magnitude for the voxel.
20. The non-transitory computer-readable storage medium of claim 16, wherein processing the one or more signals using the set of three-dimensional deconvolution layers of the convolutional neural network further causes the computing system to perform:
inputting, by the computing system, at least a portion of a signal produced by the set of three-dimensional convolution layers to the set of three-dimensional deconvolution layers, the three-dimensional deconvolution layers being trained to apply at least one three-dimensional deconvolution operation to a portion of the signal.
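For readers who want a concrete picture of the layer arrangement recited in claims 1 and 5-9 above, the following is a minimal PyTorch-style sketch: two-dimensional convolutional layers applied frame by frame, three-dimensional convolutional layers that reduce the frames to a lower resolution, and three-dimensional deconvolution layers that upsample back to the original resolution and emit per-voxel output signals. The channel counts, kernel sizes, and strides are hypothetical choices and are not specified by the patent.

```python
import torch
import torch.nn as nn

class TwoPlusThreeDNet(nn.Module):
    """Sketch: 2D conv layers -> 3D conv layers (downsample) -> 3D deconv layers (upsample)."""
    def __init__(self, in_channels=3, out_signals=2):
        super().__init__()
        # (i) two-dimensional convolutional layers, applied to each frame
        self.conv2d = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # (ii) three-dimensional convolutional layers that reduce the frames to a second, lower resolution
        self.conv3d = nn.Sequential(
            nn.Conv3d(32, 64, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.Conv3d(64, 64, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(),
        )
        # (iii) three-dimensional deconvolution layers that upsample the reduced representation
        self.deconv3d = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=(1, 2, 2), stride=(1, 2, 2)), nn.ReLU(),
            nn.ConvTranspose3d(32, out_signals, kernel_size=(1, 2, 2), stride=(1, 2, 2)),
        )

    def forward(self, frames):                       # frames: (batch, channels, length, height, width)
        b, c, t, h, w = frames.shape
        x = frames.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        x = self.conv2d(x)                           # per-frame 2D convolutions
        x = x.reshape(b, t, -1, h, w).permute(0, 2, 1, 3, 4)
        x = self.conv3d(x)                           # spatial downsampling to the second resolution
        return self.deconv3d(x)                      # per-voxel output signals at the first resolution

net = TwoPlusThreeDNet()
clip = torch.randn(1, 3, 8, 64, 64)                  # a set of 8 video frames at 64x64
print(net(clip).shape)                                # torch.Size([1, 2, 8, 64, 64])
```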
CN201580085375.5A 2015-11-05 2015-12-30 System and method for processing content using convolutional neural network Active CN108431823B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201562251414P 2015-11-05 2015-11-05
US62/251,414 2015-11-05
US14/983,477 2015-12-29
US14/983,477 US9754351B2 (en) 2015-11-05 2015-12-29 Systems and methods for processing content using convolutional neural networks
PCT/US2015/068199 WO2017078744A1 (en) 2015-11-05 2015-12-30 Systems and methods for processing content using convolutional neural networks

Publications (2)

Publication Number Publication Date
CN108431823A CN108431823A (en) 2018-08-21
CN108431823B true CN108431823B (en) 2022-05-06

Family

ID=58662372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580085375.5A Active CN108431823B (en) 2015-11-05 2015-12-30 System and method for processing content using convolutional neural network

Country Status (10)

Country Link
US (1) US9754351B2 (en)
JP (1) JP6722292B2 (en)
KR (1) KR102452029B1 (en)
CN (1) CN108431823B (en)
AU (1) AU2015413918A1 (en)
BR (1) BR112018009271A2 (en)
CA (1) CA3003892A1 (en)
IL (1) IL259112A (en)
MX (1) MX2018005690A (en)
WO (1) WO2017078744A1 (en)

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080059157A (en) 2005-08-12 2008-06-26 릭 비. 예거 System and method for applying a reflectance modifying agent to improve the visual attractiveness of human skin
US8184901B2 (en) * 2007-02-12 2012-05-22 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image
US9940539B2 (en) * 2015-05-08 2018-04-10 Samsung Electronics Co., Ltd. Object recognition apparatus and method
WO2017106998A1 (en) * 2015-12-21 2017-06-29 Sensetime Group Limited A method and a system for image processing
US10181195B2 (en) * 2015-12-28 2019-01-15 Facebook, Inc. Systems and methods for determining optical flow
US10650045B2 (en) 2016-02-05 2020-05-12 Sas Institute Inc. Staged training of neural networks for improved time series prediction performance
US10642896B2 (en) 2016-02-05 2020-05-05 Sas Institute Inc. Handling of data sets during execution of task routines of multiple languages
US10095552B2 (en) * 2016-02-05 2018-10-09 Sas Institute Inc. Automated transfer of objects among federated areas
US10795935B2 (en) 2016-02-05 2020-10-06 Sas Institute Inc. Automated generation of job flow definitions
US10650046B2 (en) 2016-02-05 2020-05-12 Sas Institute Inc. Many task computing with distributed file system
US9836820B2 (en) * 2016-03-03 2017-12-05 Mitsubishi Electric Research Laboratories, Inc. Image upsampling using global and local constraints
CN109074473B (en) * 2016-04-11 2020-04-24 北京市商汤科技开发有限公司 Method and system for object tracking
DE102016223422B4 (en) * 2016-11-25 2024-05-08 Continental Autonomous Mobility Germany GmbH Method for automatically determining extrinsic parameters of a vehicle camera
US10289938B1 (en) * 2017-05-16 2019-05-14 State Farm Mutual Automobile Insurance Company Systems and methods regarding image distification and prediction models
JP6799510B2 (en) * 2017-07-27 2020-12-16 日本電信電話株式会社 Scene recognition devices, methods, and programs
JP7149692B2 (en) 2017-08-09 2022-10-07 キヤノン株式会社 Image processing device, image processing method
CN107506350A (en) * 2017-08-16 2017-12-22 京东方科技集团股份有限公司 A kind of method and apparatus of identification information
JP2019070934A (en) * 2017-10-06 2019-05-09 東芝デジタルソリューションズ株式会社 Video processing apparatus, video processing method and program
US11113832B2 (en) * 2017-11-03 2021-09-07 Google Llc Aperture supervision for single-view depth prediction
US10733714B2 (en) * 2017-11-09 2020-08-04 Samsung Electronics Co., Ltd Method and apparatus for video super resolution using convolutional neural network with two-stage motion compensation
US10685446B2 (en) * 2018-01-12 2020-06-16 Intel Corporation Method and system of recurrent semantic segmentation for image processing
WO2019141896A1 (en) * 2018-01-18 2019-07-25 Nokia Technologies Oy A method for neural networks
US11164003B2 (en) 2018-02-06 2021-11-02 Mitsubishi Electric Research Laboratories, Inc. System and method for detecting objects in video sequences
WO2019194460A1 (en) * 2018-04-01 2019-10-10 엘지전자 주식회사 Method for image coding using convolution neural network and apparatus thereof
WO2019209006A1 (en) * 2018-04-24 2019-10-31 주식회사 지디에프랩 Method for improving resolution of streaming files
CN108846817B (en) * 2018-06-22 2021-01-12 Oppo(重庆)智能科技有限公司 Image processing method and device and mobile terminal
US11461698B2 (en) * 2018-07-09 2022-10-04 Athene Noctua LLC Integrated machine learning audiovisual application for a defined subject
JP7443366B2 (en) * 2018-08-07 2024-03-05 メタ プラットフォームズ, インク. Artificial intelligence techniques for image enhancement
CN110969217B (en) * 2018-09-28 2023-11-17 杭州海康威视数字技术股份有限公司 Method and device for image processing based on convolutional neural network
WO2020081470A1 (en) * 2018-10-15 2020-04-23 Flir Commercial Systems, Inc. Deep learning inference systems and methods for imaging systems
CN113298845A (en) * 2018-10-15 2021-08-24 华为技术有限公司 Image processing method, device and equipment
CN110163061B (en) * 2018-11-14 2023-04-07 腾讯科技(深圳)有限公司 Method, apparatus, device and computer readable medium for extracting video fingerprint
CN109739926B (en) * 2019-01-09 2021-07-02 南京航空航天大学 Method for predicting destination of moving object based on convolutional neural network
CN109799977B (en) * 2019-01-25 2021-07-27 西安电子科技大学 Method and system for developing and scheduling data by instruction program
US10929159B2 (en) 2019-01-28 2021-02-23 Bank Of America Corporation Automation tool
CN110149531A (en) * 2019-06-17 2019-08-20 北京影谱科技股份有限公司 The method and apparatus of video scene in a kind of identification video data
US10999566B1 (en) * 2019-09-06 2021-05-04 Amazon Technologies, Inc. Automated generation and presentation of textual descriptions of video content
US11407431B2 (en) 2019-11-22 2022-08-09 Samsung Electronics Co., Ltd. System and method for object trajectory prediction in an autonomous scenario
US20210383534A1 (en) * 2020-06-03 2021-12-09 GE Precision Healthcare LLC System and methods for image segmentation and classification using reduced depth convolutional neural networks
CN111832484B (en) * 2020-07-14 2023-10-27 星际(重庆)智能装备技术研究院有限公司 Loop detection method based on convolution perception hash algorithm
US11551448B2 (en) 2020-10-01 2023-01-10 Bank Of America Corporation System for preserving image and acoustic sensitivity using reinforcement learning
CN112232283B (en) * 2020-11-05 2023-09-01 深兰科技(上海)有限公司 Bubble detection method and system based on optical flow and C3D network
CN112529054B (en) * 2020-11-27 2023-04-07 华中师范大学 Multi-dimensional convolution neural network learner modeling method for multi-source heterogeneous data
CN116416483A (en) * 2021-12-31 2023-07-11 戴尔产品有限公司 Computer-implemented method, apparatus, and computer program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8886283B1 (en) * 2011-06-21 2014-11-11 Stc.Unm 3D and 4D magnetic susceptibility tomography based on complex MR images
CN104849852A (en) * 2015-05-07 2015-08-19 清华大学 Camera array-based light field microscopic imaging system and method

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355528A (en) * 1992-10-13 1994-10-11 The Regents Of The University Of California Reprogrammable CNN and supercomputer
JPH1131214A (en) * 1997-07-10 1999-02-02 Hitachi Medical Corp Picture processor
US6549879B1 (en) * 1999-09-21 2003-04-15 Mobil Oil Corporation Determining optimal well locations from a 3D reservoir model
US6809745B1 (en) * 2001-10-01 2004-10-26 Adobe Systems Incorporated Compositing two-dimensional and 3-dimensional images
US20050131660A1 (en) * 2002-09-06 2005-06-16 Joseph Yadegar Method for content driven image compression
US7139067B2 (en) * 2003-09-12 2006-11-21 Textron Systems Corporation Three-dimensional imaging with multiframe blind deconvolution
US8620038B2 (en) 2006-05-05 2013-12-31 Parham Aarabi Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US20090070550A1 (en) * 2007-09-12 2009-03-12 Solomon Research Llc Operational dynamics of three dimensional intelligent system on a chip
JP5368687B2 (en) 2007-09-26 2013-12-18 キヤノン株式会社 Arithmetic processing apparatus and method
US8098891B2 (en) 2007-11-29 2012-01-17 Nec Laboratories America, Inc. Efficient multi-hypothesis multi-human 3D tracking in crowded scenes
US8233711B2 (en) * 2009-11-18 2012-07-31 Nec Laboratories America, Inc. Locality-constrained linear coding systems and methods for image classification
US8818923B1 (en) * 2011-06-27 2014-08-26 Hrl Laboratories, Llc Neural network device with engineered delays for pattern storage and matching
US8345984B2 (en) 2010-01-28 2013-01-01 Nec Laboratories America, Inc. 3D convolutional neural networks for automatic human action recognition
US9171247B1 (en) * 2011-06-27 2015-10-27 Hrl Laboratories, Llc System and method for fast template matching in 3D
US20140300758A1 (en) 2013-04-04 2014-10-09 Bao Tran Video processing systems and methods
US9330171B1 (en) * 2013-10-17 2016-05-03 Google Inc. Video annotation using deep network architectures
US9230192B2 (en) * 2013-11-15 2016-01-05 Adobe Systems Incorporated Image classification using images with separate grayscale and color channels
US10460194B2 (en) * 2014-03-07 2019-10-29 Lior Wolf System and method for the detection and counting of repetitions of repetitive activity via a trained network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8886283B1 (en) * 2011-06-21 2014-11-11 Stc.Unm 3D and 4D magnetic susceptibility tomography based on complex MR images
CN104849852A (en) * 2015-05-07 2015-08-19 清华大学 Camera array-based light field microscopic imaging system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Learning Spatiotemporal Features with 3D Convolutional Networks;Du Tran, et al.;《arXiv》;20151007;第1-16页 *
Recursive Training of 2D-3D Convolutional Networks for Neuronal Boundary Detection;Kisuk Lee, et al.;《arXiv》;20150820;第1-10页 *

Also Published As

Publication number Publication date
IL259112A (en) 2018-07-31
US20170132758A1 (en) 2017-05-11
WO2017078744A1 (en) 2017-05-11
JP2018534710A (en) 2018-11-22
CN108431823A (en) 2018-08-21
KR20180079431A (en) 2018-07-10
AU2015413918A1 (en) 2018-05-24
KR102452029B1 (en) 2022-10-11
CA3003892A1 (en) 2017-05-11
MX2018005690A (en) 2018-11-09
JP6722292B2 (en) 2020-07-15
US9754351B2 (en) 2017-09-05
BR112018009271A2 (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108431823B (en) System and method for processing content using convolutional neural network
US10878579B2 (en) Systems and methods for determining optical flow
US10198637B2 (en) Systems and methods for determining video feature descriptors based on convolutional neural networks
US11170288B2 (en) Systems and methods for predicting qualitative ratings for advertisements based on machine learning
US9727803B2 (en) Systems and methods for image object recognition based on location information and object categories
EP3166075B1 (en) Systems and methods for processing content using convolutional neural networks
US10325154B2 (en) Systems and methods for providing object recognition based on detecting and extracting media portions
US20180007382A1 (en) Systems and methods for determining motion vectors
US20190043075A1 (en) Systems and methods for providing applications associated with improving qualitative ratings based on machine learning
US11562328B1 (en) Systems and methods for recommending job postings
US20160188592A1 (en) Tag prediction for images or video content items
US20160188724A1 (en) Tag prediction for content based on user metadata
US20190043074A1 (en) Systems and methods for providing machine learning based recommendations associated with improving qualitative ratings
US20180012130A1 (en) Systems and methods for forecasting trends
US10496750B2 (en) Systems and methods for generating content
US11106859B1 (en) Systems and methods for page embedding generation
US10477215B2 (en) Systems and methods for variable compression of media content based on media properties
US20190043073A1 (en) Systems and methods for determining visually similar advertisements for improving qualitative ratings associated with advertisements
US10681267B2 (en) Systems and methods for increasing resolution of images captured by a camera sensor
US10560641B2 (en) Systems and methods for generating a bias for a camera sensor for increasing resolution of captured images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: California, USA

Applicant after: Meta Platforms, Inc.

Address before: California, USA

Applicant before: Facebook, Inc.

GR01 Patent grant
GR01 Patent grant