WO2021081741A1 - Image classification method and system based on a multi-relational social network - Google Patents

Image classification method and system based on a multi-relational social network

Info

Publication number
WO2021081741A1
WO2021081741A1 PCT/CN2019/113935 CN2019113935W WO2021081741A1 WO 2021081741 A1 WO2021081741 A1 WO 2021081741A1 CN 2019113935 W CN2019113935 W CN 2019113935W WO 2021081741 A1 WO2021081741 A1 WO 2021081741A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
image
relationship
image data
neural network
Prior art date
Application number
PCT/CN2019/113935
Other languages
English (en)
French (fr)
Inventor
陈小军
陈炳坤
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学
Priority to PCT/CN2019/113935
Publication of WO2021081741A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features

Definitions

  • The present invention relates to the technical field of image classification, and in particular to an image classification method and device based on an Internet social system.
  • Image classification is a hot research field with great commercial value. It is mostly used to assist image recognition technology, such as face recognition, license plate recognition, image detection, image search, etc. At present, many researchers have conducted a lot of research on image classification, and proposed methods such as convolutional neural networks to classify images, but there are still many images that are difficult to recognize through visual features.
  • The main purpose of the present invention is to provide an image classification method and device based on an Internet social system, so as to solve the problem that existing image classification methods cannot be adapted to the image databases of Internet social systems.
  • the first aspect of the embodiments of the present invention provides an image classification method based on an Internet social system, including:
  • acquiring image data from a database of the Internet social system, and social network information based on the image data;
  • constructing N relationship networks based on the image data according to the social network information, where the N relationship networks represent multiple relationships between images in the image data;
  • training the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks;
  • extracting a visual feature I from classified sample images, where the image data includes the classified sample images;
  • constructing a neural network classifier, and training the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies the images to be classified.
  • a second aspect of the embodiments of the present invention provides an image classification device based on an Internet social system, including:
  • an image data acquisition module, configured to acquire image data from a database of the Internet social system, and social network information based on the image data;
  • a relationship network construction module, configured to construct N relationship networks based on the image data according to the social network information, the relationship networks representing multiple relationships between images in the image data;
  • a network representation vector acquisition module, configured to train the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks;
  • a visual feature acquisition module, configured to extract a visual feature I from classified sample images, where the image data includes the classified sample images;
  • an image classification module, configured to construct a neural network classifier and train the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies the images to be classified.
  • A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and running on the processor; when the processor executes the computer program, the steps of the method provided in the first aspect are implemented.
  • A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the method provided in the first aspect are implemented.
  • The embodiments of the present invention propose an image classification method based on an Internet social system. Based on the database of the Internet social system, image data and social network information based on the image data are acquired; a multi-relational network, i.e., N relationship networks, is then constructed according to the social network information, where the N relationship networks represent multiple relationships between images in the image data; network representation vectors based on all the relationship networks are then obtained through training and learning, while visual features are extracted from the classified sample images; finally, the network representation vectors and visual features are used to train a neural network classifier that can classify the images to be classified, completing the image classification work based on the Internet social system.
  • The image classification method provided by the embodiments of the present invention adopts the form of a network, which can better discover the relevance between images according to the social network information carried by the images, and the learned network representation vectors can effectively preserve the social network information of the images during image processing, thereby improving the performance of the image classification algorithm in a network environment and ensuring the accuracy and efficiency of image classification when facing the database of an Internet social system.
  • FIG. 1 is a schematic diagram of an implementation process of an image classification method based on an Internet social system provided by Embodiment 1 of the present invention
  • FIG. 2 is a schematic diagram of the composition structure of an image classification device based on an Internet social system provided in the second embodiment of the present invention.
  • an embodiment of the present invention provides an image classification method based on an Internet social system, including but not limited to the following steps:
  • S101 Obtain image data from a database of an Internet social system, and social network information based on the image data.
  • the Internet social system includes various Internet social platforms, such as Weibo, Tieba, Baidu community, etc.
  • the database of the Internet social system includes the public images uploaded by users in the Internet social system, as well as the social network information related to those images.
  • the social network information includes at least one of tag information, user information, picture group information, shooting location information, and comment user information.
  • the tag information is the words edited by the user to describe the image; the user information indicates the uploader of the image; the picture group information indicates how images are grouped, where a picture group is an image collection containing multiple images;
  • the shooting location information indicates the place where the image was taken; the commenting user information indicates the information of all users who commented on the image.
  • the N relationship networks represent multiple relationships between images in the image data.
  • In a specific application, the detailed implementation of step S102 may be:
  • randomly obtaining two images from the image data N times, each pair forming an image group, and obtaining the social network information of the two images in the N-th image group, where N is a positive integer; the social network information of the two images includes social network information T of the same type, T = (t_1, ..., t_n), where t_n denotes one item of the social network information and n is a positive integer;
  • calculating the connection weight A_m of the two images in the N-th image group according to the connection-weight formula (given as an image in the description), where T_1 and T_2 are two items of social network information of the same type in the N-th image group, and m is a positive integer less than or equal to N;
  • constructing, according to the connection weights A_m, N relationship networks G with the two images as network nodes, G = (V, A_m).
  • The two images in one image group may also appear in other image groups; for example, one image group may include image 1 and image 2, while another image group may include image 1 and image 3.
  • As an example of social network information of the same type, suppose both image 1 and image 2 include picture group information, the picture group information of image 1 is "person", and the picture group information of image 2 is "animal"; then t_1 is "person" and t_2 is "animal", and in the calculation of the connection weight, t_1 and t_2 take specific values.
  • The above-mentioned one relationship network represents the relationship between the two images in an image group.
  • the multi-relational network representation learning algorithm uses unsupervised learning, so it can be applied even without class label information for the images, while the semi-supervised network representation learning algorithm is semi-supervised and requires class label information for some of the nodes.
  • The multi-relational network representation learning algorithm can automatically identify the importance of the multiple relationship networks, so it can directly compute the network representation vectors Φ of the N relationship networks; the semi-supervised network representation learning algorithm targets single-relationship networks, so it needs to compute the network representation vector Φ of each relationship network separately, and the network representation vectors Φ of the N relationship networks are obtained after further processing.
  • In a specific application, the detailed implementation of step S103 may be:
  • converting the format of the N relationship networks, and identifying the importance of the N relationship networks through the relational network representation learning algorithm, where α_l denotes the importance of the l-th relationship network; the N relationship networks are fused according to the fusion formula given in the description, and the fused relationship network is decomposed through a matrix factorization method to obtain the network representation vectors Φ of the N relationship networks;
  • or, converting the format of the N relationship networks, learning the representation vector φ_1, ..., φ_m of each node in the N relationship networks through the semi-supervised network representation learning algorithm, and concatenating the representation vectors φ_1, ..., φ_m of each node to obtain the network representation vectors Φ of the N relationship networks.
  • The format of the N relationship networks is converted so that they can be used with the multi-relational network representation learning algorithm and the semi-supervised network representation learning algorithm.
  • The representation vector of each node represents the relevance between the two images in an image group, and the network representation vectors of the N relationship networks represent the relevance among the images in the image data.
  • In one embodiment, concatenating the representation vectors φ_1, ..., φ_m of each node to obtain the network representation vectors Φ of the N relationship networks includes: optimizing the representation vector of each node using a negative sampling algorithm, then concatenating the representation vectors φ_1, ..., φ_m of each node and performing normalization to obtain the network representation vectors Φ of the N relationship networks.
  • The semi-supervised network representation learning algorithm uses the class label information of some nodes during training and also treats the class labels of the dataset as a kind of node, so the negative sampling algorithm is used for optimization; the purpose is to optimize the loss function between node representation vectors and the loss function between node and class-label representation vectors.
  • The classified sample images and the image data of step S101 are both sample data, which are used to train the neural network classifier in the following step S105; the image data includes the classified sample images.
  • In a specific application, the detailed implementation of step S104 may be: training and constructing a convolutional neural network on web image data, and modifying the last fully connected layer of the convolutional neural network; retraining the convolutional neural network using the image data to obtain a feature extraction neural network; inputting the classified sample images into the feature extraction neural network, where an activation value of the last hidden layer of the feature extraction neural network serves as one visual feature of a classified sample image; and normalizing all the visual features of the classified sample images to obtain the visual feature I based on the classified sample images.
  • S105 Construct a neural network classifier, and train the neural network classifier through the network representation vector ⁇ and the visual feature I, so that the trained neural network classifier classifies the image to be classified.
  • the basic data of the network representation vector Φ and the visual feature I come from the image data and the classified sample data, where the network representation vector Φ represents the relevance between images and the visual feature I represents the features of the images; therefore, the neural network classifier trained with the network representation vector Φ and the visual feature I can discover the relevance between images according to the social network information carried by the images, and the learned network representations can effectively preserve the social network information of the images.
  • In a specific application, the detailed implementation of step S105 may be: concatenating the network representation vector Φ and the visual feature I; training the neural network classifier with the concatenated network representation vector Φ and visual feature I; and inputting the image to be classified into the trained neural network classifier for classification.
  • feature normalization needs to be performed before the network representation vector Φ and the visual feature I are concatenated, which has already been completed in steps S103 and S104.
  • the aforementioned neural network classifier may use a single-hidden-layer fully connected neural network or a Capsule neural network with an additional fully connected layer; since the Capsule neural network differs somewhat from traditional neural networks, a fully connected layer needs to be added before the Capsule layer for feature extraction.
  • The image classification method based on the Internet social system takes the database of the Internet social system as its basis, acquires image data and social network information based on the image data, and then constructs a multi-relational network, i.e., N relationship networks, according to the social network information, where the N relationship networks represent multiple relationships between images in the image data; network representation vectors based on all the relationship networks are then obtained through training and learning, while visual features are extracted from the classified sample images; the network representation vectors and visual features are used to train a neural network classifier that can classify the images to be classified, completing the image classification work based on the Internet social system.
  • The image classification method provided by the embodiments of the present invention adopts the form of a network, which can better discover the relevance between images according to the social network information carried by the images, and the learned network representation vectors can effectively preserve the social network information of the images during image processing, thereby improving the performance of the image classification algorithm in a network environment and ensuring the accuracy and efficiency of image classification when facing the database of an Internet social system.
  • the embodiment of the present invention provides an image classification device 20 based on an Internet social system, including:
  • the image data acquisition module 21 is used to acquire image data from the database of the Internet social system and social network information based on the image data;
  • the relationship network construction module 22 is configured to construct N relationship networks based on the image data according to the social network information, the relationship networks representing multiple relationships between images in the image data;
  • the network representation vector acquisition module 23 is configured to train N said relational networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain the network representation vectors ⁇ of the N said relational networks;
  • the visual feature acquisition module 24 is configured to extract visual features I from the classified sample image, wherein the image data includes the classified sample image;
  • the image classification module 25 is configured to construct a neural network classifier, and train the neural network classifier through the network representation vector ⁇ and the visual feature I, so that the trained neural network classifier classifies the image to be classified.
  • An embodiment of the present invention also provides a terminal device including a memory, a processor, and a computer program stored in the memory and capable of running on the processor; when the processor executes the computer program, the steps of the image classification method based on the Internet social system described in Embodiment 1 are implemented.
  • An embodiment of the present invention also provides a storage medium, which is a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the image classification method based on the Internet social system described in Embodiment 1 are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The present invention is applicable to the technical field of image classification and provides an image classification method and device based on an Internet social system. The method includes: acquiring image data from a database of the Internet social system, and social network information based on the image data; constructing N relationship networks based on the image data according to the social network information, and training the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks; extracting a visual feature I from classified sample images, constructing a neural network classifier, and training the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified. The present invention can effectively preserve the social network information of images during image processing, thereby improving the performance of image classification algorithms in a network environment.

Description

Image classification method and system based on a multi-relational social network
Technical Field
The present invention relates to the technical field of image classification, and in particular to an image classification method and device based on an Internet social system.
Background
Image classification is a popular research field with great commercial value. It is mostly used to assist image recognition technologies such as face recognition, license plate recognition, image detection, and image search. At present, many researchers have carried out extensive research on image classification and have proposed methods such as convolutional neural networks to classify images, but there are still many images that are difficult to recognize through visual features alone.
With the emergence of Internet social systems and the widespread use of mobile devices, people increasingly tend to upload images to online social networks. Online social networks and image-sharing websites contain a large number of images. If images are still classified only by their visual features, the high failure rate of visual-feature recognition will leave a considerable portion of images unrecognized and unclassified. Therefore, classifying images by visual features alone is no longer suited to current image databases, and a new image classification method is urgently needed to improve the ability to classify images in online social networks.
Technical Problem
The main purpose of the present invention is to provide an image classification method and device based on an Internet social system, so as to solve the problem that existing image classification methods cannot be adapted to the image databases of Internet social systems.
Technical Solution
To achieve the above purpose, a first aspect of the embodiments of the present invention provides an image classification method based on an Internet social system, including:
acquiring image data from a database of the Internet social system, and social network information based on the image data;
constructing N relationship networks based on the image data according to the social network information, the N relationship networks representing multiple relationships between images in the image data;
training the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks;
extracting a visual feature I from classified sample images, where the image data includes the classified sample images;
constructing a neural network classifier, and training the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified.
A second aspect of the embodiments of the present invention provides an image classification device based on an Internet social system, including:
an image data acquisition module, configured to acquire image data from a database of the Internet social system, and social network information based on the image data;
a relationship network construction module, configured to construct N relationship networks based on the image data according to the social network information, the relationship networks representing multiple relationships between images in the image data;
a network representation vector acquisition module, configured to train the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks;
a visual feature acquisition module, configured to extract a visual feature I from classified sample images, where the image data includes the classified sample images;
an image classification module, configured to construct a neural network classifier and train the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the method provided in the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the method provided in the first aspect.
Beneficial Effects
The embodiments of the present invention propose an image classification method based on an Internet social system. Based on the database of the Internet social system, image data and social network information based on the image data are acquired; a multi-relational network, i.e., N relationship networks, is then constructed according to the social network information, where the N relationship networks represent multiple relationships between images in the image data; network representation vectors based on all the relationship networks are then obtained through training and learning, while visual features are extracted from the classified sample images; finally, the network representation vectors and visual features are used to train a neural network classifier capable of classifying images to be classified, completing image classification based on the Internet social system. The image classification method provided by the embodiments of the present invention adopts the form of a network, which can better discover the relevance between images according to the social network information carried by the images, and the learned network representation vectors can effectively preserve the social network information of the images during image processing, thereby improving the performance of the image classification algorithm in a network environment and ensuring the accuracy and efficiency of image classification when facing the database of an Internet social system.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the image classification method based on an Internet social system provided in Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of the composition of the image classification device based on an Internet social system provided in Embodiment 2 of the present invention.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the Present Invention
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
It should be noted that, herein, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Herein, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.
In the following description, the serial numbers of the embodiments of the invention are for description only and do not represent the superiority or inferiority of the embodiments.
Embodiment 1
As shown in FIG. 1, an embodiment of the present invention provides an image classification method based on an Internet social system, including but not limited to the following steps:
S101. Acquire image data from a database of an Internet social system, and social network information based on the image data.
In the above step S101, the Internet social system includes various Internet social platforms, such as Weibo, Tieba, and Baidu communities; the database of the Internet social system includes the public images uploaded by users in the Internet social system, as well as the social network information related to those images.
In the embodiments of the present invention, the social network information includes at least one of tag information, user information, picture group information, shooting location information, and commenting user information.
The tag information is the words edited by a user to describe the image; the user information indicates the uploader of the image; the picture group information indicates how the image is grouped, where a picture group is an image collection containing multiple images; the shooting location information indicates the place where the image was taken; the commenting user information indicates the information of all users who commented on the image.
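As a non-authoritative illustration of how these five kinds of social network information could be carried alongside an image record, the following Python sketch defines a simple data structure; the class and field names are assumptions introduced for illustration and do not appear in the filing.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SocialInfo:
    """Social network information attached to one image (illustrative only)."""
    tags: List[str] = field(default_factory=list)            # tag information: words edited by the user
    uploader: Optional[str] = None                            # user information: who uploaded the image
    picture_groups: List[str] = field(default_factory=list)  # picture group information: groups the image belongs to
    location: Optional[str] = None                            # shooting location information
    commenters: List[str] = field(default_factory=list)      # commenting user information

@dataclass
class ImageRecord:
    image_id: str
    path: str                      # local path or URL of the public image
    label: Optional[str] = None    # class label, present only for classified sample images
    social: SocialInfo = field(default_factory=SocialInfo)
```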
S102. Construct N relationship networks based on the image data according to the social network information.
In the above step S102, the N relationship networks represent multiple relationships between images in the image data.
In a specific application, the detailed implementation of the above step S102 may be:
randomly obtaining two images from the image data N times, each pair forming an image group, and obtaining the social network information of the two images in the N-th image group, where N is a positive integer;
the social network information of the two images includes social network information T of the same type, T = (t_1, ..., t_n), where t_n denotes one item of the social network information and n is a positive integer;
calculating the connection weight A_m of the two images in the N-th image group, with the formula
[connection-weight formula shown as an image in the original filing: Figure PCTCN2019113935-appb-000001]
where T_1 and T_2 are two items of social network information of the same type in the N-th image group, and m is a positive integer less than or equal to N;
constructing, according to the connection weights A_m of the two images in the N image groups, N relationship networks G with the two images as network nodes, with the formula G = (V, A_m).
In a specific application, when two images are randomly obtained from the image data N times to form image groups, the two images in one image group may also appear in other image groups. For example, one image group may include image 1 and image 2, while another image group may include image 1 and image 3.
In a specific application, the above-mentioned social network information of the same type can be illustrated as follows:
Suppose both image 1 and image 2 include picture group information, the picture group information of image 1 is "person", and the picture group information of image 2 is "animal"; then t_1 is "person" and t_2 is "animal", and in the calculation of the connection weight, t_1 and t_2 take specific values.
In a specific application, one of the above relationship networks represents the relationship between the two images in one image group; through multiple relationship networks, multiple relationship networks covering all the images in the image data can be constructed, representing the relationships among all the images in the image data. Therefore, the N relationship networks form a multi-relational network, which can be expanded as G = (V, A_1, A_2, ..., A_m).
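The connection-weight formula itself appears only as an embedded image in the filing, so the sketch below assumes, purely for illustration, that the weight A_m is the number of shared items between the two images' same-type social information; it reuses the ImageRecord/SocialInfo sketch above, and all function and variable names are hypothetical.

```python
import random
import numpy as np

RELATION_FIELDS = ["tags", "picture_groups", "commenters"]  # assumed same-type social info channels

def connection_weight(info_a, info_b):
    """Assumed weight: number of shared items between two same-type information sets.
    The actual formula in the filing is shown only as an embedded image."""
    return len(set(info_a) & set(info_b))

def build_relation_networks(records, num_pairs, seed=0):
    """Build one adjacency matrix per relation type from randomly sampled image pairs."""
    rng = random.Random(seed)
    index = {r.image_id: i for i, r in enumerate(records)}
    n = len(records)
    adjacency = {rel: np.zeros((n, n)) for rel in RELATION_FIELDS}

    for _ in range(num_pairs):                       # N random image groups of two images each
        a, b = rng.sample(records, 2)                # the same image may appear in several groups
        i, j = index[a.image_id], index[b.image_id]
        for rel in RELATION_FIELDS:
            w = connection_weight(getattr(a.social, rel), getattr(b.social, rel))
            adjacency[rel][i, j] = adjacency[rel][j, i] = max(adjacency[rel][i, j], w)
    return adjacency
```

Each resulting adjacency matrix A_m plays the role of one relationship network G = (V, A_m), and the collection of matrices forms the multi-relational network G = (V, A_1, ..., A_m).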
S103. Train the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks.
In the above step S103, the multi-relational network representation learning algorithm uses unsupervised learning and can be applied even without class label information for the images, whereas the semi-supervised network representation learning algorithm is semi-supervised and requires class label information for some of the nodes.
Among them, the multi-relational network representation learning algorithm can automatically identify the importance of the multiple relationship networks, so it can directly compute the network representation vectors Φ of the N relationship networks; the semi-supervised network representation learning algorithm targets single-relationship networks, so it needs to compute the network representation vector Φ of each relationship network separately, and the network representation vectors Φ of the N relationship networks are obtained after further processing.
In a specific application, the detailed implementation of the above step S103 may be:
converting the format of the N relationship networks;
identifying the importance of the N relationship networks through the relational network representation learning algorithm;
where α_l denotes the importance of the l-th relationship network, and the N relationship networks are fused, with the formula:
[fusion formula shown as an image in the original filing: Figure PCTCN2019113935-appb-000002]
decomposing the fused relationship network through a matrix factorization method to obtain the network representation vectors Φ of the N relationship networks;
or,
converting the format of the N relationship networks;
learning the representation vector φ_1, ..., φ_m of each node in the N relationship networks through the semi-supervised network representation learning algorithm;
concatenating the representation vectors φ_1, ..., φ_m of each node to obtain the network representation vectors Φ of the N relationship networks.
In a specific application, the format of the N relationship networks is converted so that they can be used with the multi-relational network representation learning algorithm and the semi-supervised network representation learning algorithm.
In a specific application, the representation vector of each node represents the relevance between the two images in an image group, and the network representation vectors of the N relationship networks represent the relevance among the images in the image data.
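Because the fusion formula and the importance weights α_l are likewise shown only as an image, the following sketch assumes a weighted sum of the per-relation adjacency matrices followed by a truncated SVD as the matrix-factorization step; it illustrates the unsupervised (multi-relational) path under those assumptions rather than reproducing the exact patented algorithm.

```python
import numpy as np

def fuse_and_factorize(adjacency, alphas=None, dim=64):
    """Fuse the relationship networks and factorize the result to get the
    network representation vectors Phi (one row per image node).

    adjacency: dict relation_name -> (n x n) weight matrix A_l
    alphas:    dict relation_name -> importance weight alpha_l (assumed uniform if None)
    """
    relations = list(adjacency)
    if alphas is None:
        alphas = {rel: 1.0 / len(relations) for rel in relations}  # assumed: equal importance

    # assumed fusion: A = sum_l alpha_l * A_l
    fused = sum(alphas[rel] * adjacency[rel] for rel in relations)

    # matrix factorization via truncated SVD: A ~ U S V^T, use U * sqrt(S) as node vectors
    u, s, _ = np.linalg.svd(fused, full_matrices=False)
    phi = u[:, :dim] * np.sqrt(s[:dim])

    # row-wise L2 normalization so Phi can later be concatenated with visual features
    norms = np.linalg.norm(phi, axis=1, keepdims=True)
    return phi / np.maximum(norms, 1e-12)
```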
In one embodiment, concatenating the representation vectors φ_1, ..., φ_m of each node to obtain the network representation vectors Φ of the N relationship networks includes:
optimizing the representation vector of each node using a negative sampling algorithm;
concatenating the representation vectors φ_1, ..., φ_m of each node and performing normalization to obtain the network representation vectors Φ of the N relationship networks.
In a specific application, the semi-supervised network representation learning algorithm uses the class label information of some of the nodes during training and also treats the class labels of the dataset as a kind of node, so the negative sampling algorithm is used for optimization; the purpose is to optimize the loss function between node representation vectors and the loss function between node and class-label representation vectors.
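As a rough, assumption-laden sketch of the negative-sampling optimization described above, the code below treats class labels as additional nodes and pushes connected node pairs (including labelled-node/class-label pairs) together while pushing randomly sampled negatives apart; the sampling scheme, learning rate, and update rule are illustrative choices, not those of the filing.

```python
import numpy as np

def negative_sampling_step(emb, pos_pairs, num_nodes, k=5, lr=0.025, rng=None):
    """One pass of negative-sampling updates over positive (node, context) pairs.

    emb:       (num_entities x d) embedding table; rows >= num_nodes are class-label "nodes"
    pos_pairs: list of (i, j) index pairs that should be similar, e.g. neighbouring
               image nodes, or a labelled image node and its class-label node
    """
    rng = rng or np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for i, j in pos_pairs:
        # one positive target plus k random negatives drawn from the image nodes
        targets = [(j, 1.0)] + [(int(rng.integers(num_nodes)), 0.0) for _ in range(k)]
        for t, label in targets:
            score = sigmoid(emb[i] @ emb[t])
            grad = score - label                    # gradient of the logistic loss
            emb[i], emb[t] = emb[i] - lr * grad * emb[t], emb[t] - lr * grad * emb[i]
    return emb
```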
S104. Extract a visual feature I from classified sample images.
In the above step S104, the classified sample images and the image data of the above step S101 are both sample data, which are used to train the neural network classifier in the following step S105.
Here, the image data includes the classified sample images.
In a specific application, the detailed implementation of the above step S104 may be:
training and constructing a convolutional neural network on web image data, and modifying the last fully connected layer of the convolutional neural network;
retraining the convolutional neural network using the image data to obtain a feature extraction neural network;
inputting the classified sample images into the feature extraction neural network, where an activation value of the last hidden layer of the feature extraction neural network serves as one visual feature of a classified sample image;
normalizing all the visual features of the classified sample images to obtain the visual feature I based on the classified sample images.
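The filing does not name a specific convolutional network or dataset, so the sketch below uses a torchvision ResNet-18 as a stand-in: its final fully connected layer is replaced, the network can then be fine-tuned on the acquired image data, and the activations before the classifier head are taken and L2-normalized as the visual feature I. The backbone and preprocessing choices are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

def build_feature_extractor(num_classes):
    """Pretrained CNN with its last fully connected layer replaced, to be
    retrained (fine-tuned) on the acquired image data."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, num_classes)   # modified last FC layer
    return net

@torch.no_grad()
def extract_visual_features(net, image_paths, device="cpu"):
    """Take the activations of the layer before the classifier head as visual
    features, then L2-normalize them to obtain the visual feature I."""
    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    backbone = nn.Sequential(*list(net.children())[:-1]).to(device).eval()  # drop the FC head
    feats = []
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        feats.append(backbone(x).flatten(1))
    feats = torch.cat(feats, dim=0)
    return nn.functional.normalize(feats, dim=1)   # normalized visual features I
```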
S105. Construct a neural network classifier, and train the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified.
In the above step S105, the underlying data of the network representation vectors Φ and the visual feature I come from the image data and the classified sample data, where the network representation vectors Φ represent the relevance between images and the visual feature I represents the features of the images; therefore, the neural network classifier trained with the network representation vectors Φ and the visual feature I can discover the relevance between images according to the social network information carried by the images, and the learned network representations can effectively preserve the social network information of the images.
In a specific application, the detailed implementation of the above step S105 may be:
concatenating the network representation vectors Φ and the visual feature I;
training the neural network classifier with the concatenated network representation vectors Φ and visual feature I;
inputting the image to be classified into the trained neural network classifier for classification.
In the embodiments of the present invention, feature normalization needs to be performed before the network representation vectors Φ and the visual feature I are concatenated, which has already been completed in steps S103 and S104.
In one embodiment, the above neural network classifier may use a fully connected neural network with a single hidden layer, or a Capsule neural network with an additional fully connected layer; since the Capsule neural network differs somewhat from traditional neural networks, a fully connected layer needs to be added before the Capsule layer for feature extraction.
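A minimal sketch of the classifier option with a single hidden fully connected layer (the Capsule variant is not shown): the already-normalized network representation vector Φ and visual feature I are concatenated and fed to the classifier. The hidden size, optimizer, and training loop below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SocialImageClassifier(nn.Module):
    """Single-hidden-layer fully connected classifier over the concatenation [Phi ; I]."""
    def __init__(self, phi_dim, visual_dim, num_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(phi_dim + visual_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, phi, visual):
        return self.net(torch.cat([phi, visual], dim=1))   # concatenate Phi and I

def train_classifier(model, phi, visual, labels, epochs=50, lr=1e-3):
    """phi and visual are assumed already normalized (steps S103 and S104)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(phi, visual), labels)
        loss.backward()
        opt.step()
    return model
```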
The image classification method based on an Internet social system provided by the embodiments of the present invention takes the database of the Internet social system as its basis, acquires image data and social network information based on the image data, and then constructs a multi-relational network, i.e., N relationship networks, according to the social network information, where the N relationship networks represent multiple relationships between images in the image data; network representation vectors based on all the relationship networks are then obtained through training and learning, while visual features are extracted from the classified sample images; the network representation vectors and visual features are used to train a neural network classifier capable of classifying images to be classified, completing image classification based on the Internet social system. The image classification method provided by the embodiments of the present invention adopts the form of a network, which can better discover the relevance between images according to the social network information carried by the images, and the learned network representation vectors can effectively preserve the social network information of the images during image processing, thereby improving the performance of the image classification algorithm in a network environment and ensuring the accuracy and efficiency of image classification when facing the database of an Internet social system.
Embodiment 2
An embodiment of the present invention provides an image classification device 20 based on an Internet social system, including:
an image data acquisition module 21, configured to acquire image data from a database of the Internet social system, and social network information based on the image data;
a relationship network construction module 22, configured to construct N relationship networks based on the image data according to the social network information, the relationship networks representing multiple relationships between images in the image data;
a network representation vector acquisition module 23, configured to train the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks;
a visual feature acquisition module 24, configured to extract a visual feature I from classified sample images, where the image data includes the classified sample images;
an image classification module 25, configured to construct a neural network classifier and train the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified.
An embodiment of the present invention further provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements each step of the image classification method based on an Internet social system described in Embodiment 1.
An embodiment of the present invention further provides a storage medium, which is a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements each step of the image classification method based on an Internet social system described in Embodiment 1.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.

Claims (10)

  1. An image classification method based on an Internet social system, characterized by comprising:
    acquiring image data from a database of the Internet social system, and social network information based on the image data;
    constructing N relationship networks based on the image data according to the social network information, the N relationship networks representing multiple relationships between images in the image data;
    training the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks;
    extracting a visual feature I from classified sample images, wherein the image data includes the classified sample images;
    constructing a neural network classifier, and training the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified.
  2. The image classification method based on an Internet social system according to claim 1, wherein constructing N relationship networks based on the image data according to the social network information comprises:
    randomly obtaining two images from the image data N times, each pair forming an image group, and obtaining the social network information of the two images in the N-th image group, wherein N is a positive integer;
    the social network information of the two images includes identical social network information T, T = (t_1, ..., t_n), wherein t_n denotes one item of the social network information and n is a positive integer;
    calculating a connection weight A_m of the two images in the N-th image group, with the formula
    [connection-weight formula shown as an image in the original filing: Figure PCTCN2019113935-appb-100001]
    wherein T_1 and T_2 are two items of social network information of the same type in the N-th image group, and m is a positive integer less than or equal to N;
    constructing, according to the connection weights A_m of the two images in the N image groups, N relationship networks G with the two images as network nodes, with the formula G = (V, A_m).
  3. The image classification method based on an Internet social system according to claim 1, wherein training the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks comprises:
    converting the format of the N relationship networks;
    identifying the importance of the N relationship networks through the relational network representation learning algorithm;
    wherein α_l denotes the importance of the l-th relationship network, and the N relationship networks are fused, with the formula:
    [fusion formula shown as an image in the original filing: Figure PCTCN2019113935-appb-100002]
    decomposing the fused relationship network through a matrix factorization method to obtain the network representation vectors Φ of the N relationship networks;
    or,
    converting the format of the N relationship networks;
    learning the representation vector φ_1, ..., φ_m of each node in the N relationship networks through the semi-supervised network representation learning algorithm;
    concatenating the representation vectors φ_1, ..., φ_m of each node to obtain the network representation vectors Φ of the N relationship networks.
  4. The image classification method based on an Internet social system according to claim 3, wherein concatenating the representation vectors φ_1, ..., φ_m of each node to obtain the network representation vectors Φ of the N relationship networks comprises:
    optimizing the representation vector of each node using a negative sampling algorithm;
    concatenating the representation vectors φ_1, ..., φ_m of each node and performing normalization to obtain the network representation vectors Φ of the N relationship networks.
  5. The image classification method based on an Internet social system according to claim 1, wherein extracting a visual feature I from classified sample images comprises:
    training and constructing a convolutional neural network on web image data, and modifying the last fully connected layer of the convolutional neural network;
    retraining the convolutional neural network using the image data to obtain a feature extraction neural network;
    inputting the classified sample images into the feature extraction neural network, wherein an activation value of the last hidden layer of the feature extraction neural network serves as one visual feature of a classified sample image;
    normalizing all the visual features of the classified sample images to obtain the visual feature I based on the classified sample images.
  6. The image classification method based on an Internet social system according to claim 1, wherein constructing a neural network classifier, and training the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified, comprises:
    concatenating the network representation vectors Φ and the visual feature I;
    training the neural network classifier with the concatenated network representation vectors Φ and visual feature I;
    inputting the image to be classified into the trained neural network classifier for classification.
  7. The image classification method based on an Internet social system according to claim 1, wherein the social network information includes at least one of tag information, user information, picture group information, shooting location information, and commenting user information.
  8. An image classification device based on an Internet social system, characterized by comprising:
    an image data acquisition module, configured to acquire image data from a database of the Internet social system, and social network information based on the image data;
    a relationship network construction module, configured to construct N relationship networks based on the image data according to the social network information, the relationship networks representing multiple relationships between images in the image data;
    a network representation vector acquisition module, configured to train the N relationship networks through a multi-relational network representation learning algorithm or a semi-supervised network representation learning algorithm to obtain network representation vectors Φ of the N relationship networks;
    a visual feature acquisition module, configured to extract a visual feature I from classified sample images, wherein the image data includes the classified sample images;
    an image classification module, configured to construct a neural network classifier and train the neural network classifier with the network representation vectors Φ and the visual feature I, so that the trained neural network classifier classifies images to be classified.
  9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements each step of the image classification method based on an Internet social system according to any one of claims 1 to 7.
  10. A storage medium, the storage medium being a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements each step of the image classification method based on an Internet social system according to any one of claims 1 to 7.
PCT/CN2019/113935 2019-10-29 2019-10-29 Image classification method and system based on a multi-relational social network WO2021081741A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/113935 WO2021081741A1 (zh) 2019-10-29 2019-10-29 Image classification method and system based on a multi-relational social network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/113935 WO2021081741A1 (zh) 2019-10-29 2019-10-29 Image classification method and system based on a multi-relational social network

Publications (1)

Publication Number Publication Date
WO2021081741A1 true WO2021081741A1 (zh) 2021-05-06

Family

ID=75714743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113935 WO2021081741A1 (zh) 2019-10-29 2019-10-29 Image classification method and system based on a multi-relational social network

Country Status (1)

Country Link
WO (1) WO2021081741A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150036919A1 (en) * 2013-08-05 2015-02-05 Facebook, Inc. Systems and methods for image classification by correlating contextual cues with images
CN107808168A (zh) * 2017-10-31 2018-03-16 北京科技大学 Social network user behavior prediction method based on strong and weak relationships
CN107909038A (zh) * 2017-11-16 2018-04-13 北京邮电大学 Social relationship classification model training method and apparatus, electronic device, and medium
CN109189959A (zh) * 2018-09-06 2019-01-11 腾讯科技(深圳)有限公司 Method and apparatus for constructing an image database
CN109948447A (zh) * 2019-02-21 2019-06-28 山东科技大学 Method for discovering person network relationships and presenting their evolution based on video image recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAOPING WANG, BINGKUN CHEN: "Image classification algorithm based on social network metadata", SCIENTIFIC RESEARCH PROJECTS OF HUNAN EDUCATION DEPARTMENT, vol. 36, no. 4, 1 July 2019 (2019-07-01), pages 453 - 459, XP055808484 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705075A (zh) * 2021-07-07 2021-11-26 西北大学 Social relationship analysis method based on a graph neural network
CN113705075B (zh) * 2021-07-07 2024-01-19 西北大学 Social relationship analysis method based on a graph neural network
CN116127204A (zh) * 2023-04-17 2023-05-16 中国科学技术大学 Multi-view user profiling method, multi-view user profiling system, device, and medium
CN116127204B (zh) * 2023-04-17 2023-07-18 中国科学技术大学 Multi-view user profiling method, multi-view user profiling system, device, and medium

Similar Documents

Publication Publication Date Title
CN105612514B (zh) System and method for image classification by associating contextual cues with images
Hua et al. Clickage: Towards bridging semantic and intent gaps via mining click logs of search engines
Wu et al. Mining social images with distance metric learning for automated image tagging
Goh et al. Food-image Classification Using Neural Network Model
WO2015192655A1 (zh) Method and apparatus for establishing and applying a user recommendation model in a social network
Mei et al. Probabilistic multimodality fusion for event based home photo clustering
WO2023185539A1 (zh) Machine learning model training method, service data processing method, apparatus, and system
CN106228120B (zh) Query-driven large-scale face data annotation method
EP2245580A1 (en) Discovering social relationships from personal photo collections
Gao et al. Event classification in microblogs via social tracking
WO2023179429A1 (zh) Video data processing method and apparatus, electronic device, and storage medium
Gao et al. A hierarchical recurrent approach to predict scene graphs from a visual‐attention‐oriented perspective
Hua et al. Online multi-label active annotation: towards large-scale content-based video search
WO2021081741A1 (zh) Image classification method and system based on a multi-relational social network
Ni et al. Discriminative deep transfer metric learning for cross-scenario person re-identification
Niu et al. Visual recognition by learning from web data via weakly supervised domain generalization
Lu et al. Domain-aware se network for sketch-based image retrieval with multiplicative euclidean margin softmax
Xu et al. Deep learning for person reidentification using support vector machines
Wu et al. Identifying humanitarian information for emergency response by modeling the correlation and independence between text and images
Xu et al. Towards annotating media contents through social diffusion analysis
CN113392205A (zh) User profile construction method, apparatus, device, and storage medium
Wang et al. Listen, look, and find the one: Robust person search with multimodality index
Wu et al. Novel real-time face recognition from video streams
WO2023000792A1 (zh) Method, apparatus, device, and medium for constructing a liveness detection model and for liveness detection
CN115168609A (zh) Text matching method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19951180

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.09.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19951180

Country of ref document: EP

Kind code of ref document: A1