WO2019006835A1 - Target recognition method based on compressed sensing - Google Patents

Target recognition method based on compressed sensing Download PDF

Info

Publication number
WO2019006835A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
matrix
dictionary
coefficient
image
Prior art date
Application number
PCT/CN2017/098652
Other languages
French (fr)
Chinese (zh)
Inventor
程雪岷
郝群
董常青
王育琦
Original Assignee
清华大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学深圳研究生院
Publication of WO2019006835A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/457Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/242Division of the character sequences into groups prior to recognition; Selection of dictionaries

Definitions

  • the present invention relates to the field of image recognition technologies, and in particular, to a target recognition method based on compressed sensing.
  • Target recognition in images is a process of distinguishing specific targets in images by various program algorithms, and the differentiated targets are provided as the basis for further processing.
  • the human eye is often slow at recognizing a specific target, and recognizing and sorting similar targets over a long period causes fatigue that gradually produces a large number of misidentifications; replacing human-eye recognition with machine recognition, substituting computer computation for human mental effort, increases speed and reduces energy consumption, which is very advantageous for the field of image recognition. For example, when one thousand video-frame images of an intersection must be examined to determine the passing traffic flow, machine recognition is clearly far superior to human-eye recognition; similarly, adding an image target recognition system to a robot is equivalent to giving the robot "eyes", which is also very beneficial for the development of artificial intelligence technology.
  • those skilled in the art have made many contributions to target recognition; image recognition technology is applied not only to face recognition and object recognition, but also to handwriting recognition and the like, which greatly facilitates people's lives.
  • the image target recognition techniques in common use today are generally time-consuming and slow, because traditional image target recognition requires the following stages: image preprocessing, image segmentation, feature extraction, and feature recognition or matching; the sampled image information must be processed multiple times before the desired target can be recognized, and existing image recognition separates the image acquisition process from the target recognition process, which hinders speed improvement.
  • the present invention proposes a target recognition method based on compressed sensing, which can improve the speed of target recognition in an image, and can realize multi-target recognition.
  • the invention discloses a target recognition method based on compressed sensing, which comprises the following steps:
  • S2: obtaining the feature atoms of each type of target by a feature atom extraction method, according to the standard sample image of each type of target;
  • step S1 specifically includes: extracting target images of each type of target according to at least two types of targets known to exist in an image or video stream, and performing a weighted calculation on the multiple target images of each type of target to obtain a standard sample image of each type of target.
  • extracting the target images of each type of target specifically means: using an image morphology method or a manual recognition-and-segmentation method to extract the target images of each type of target.
  • step S4 further comprises storing the sampled signal y as data in a memory.
  • step S5 specifically includes: obtaining the sensing matrix A = Φ*Ψ as the product of the comprehensive dictionary Ψ and the measurement matrix Φ, and, in combination with the sampled signal y, calculating the sparse coefficient θ of the original image through the reconstruction formula y = A*θ of the OMP algorithm.
  • the feature atom extraction method in step S2 specifically uses a MOD algorithm or a K-SVD algorithm to extract feature atoms of the target.
  • the method for extracting characteristic atoms in step S2 specifically includes:
  • step S26: comparing the residual rj of every column of the matrix n with the first threshold ξ1; if some rj < ξ1, step S27 is performed, otherwise the method returns to step S23;
  • step S28: comparing the 2-norm of the updated matrix n with the second threshold ξ2; if ||n|| < ξ2, step S29 is performed, otherwise the method returns to step S22;
  • step S2 further includes S210: performing OMP reconstruction with the output feature atoms and the target matrix m to obtain the corresponding coefficients, computing the 2-norm of each row of the coefficients and sorting them by magnitude, and judging, according to the coefficient magnitudes and the error-setting requirement, that the first N rows are valid; the corresponding first N columns of the extracted feature atoms are then output as the final feature atoms.
  • the orthogonalization in step S29 specifically employs Gram-Schmidt orthogonalization; preferably, ξ1 ≤ ξ2.
  • the beneficial effects of the present invention are: in the target recognition method of the present invention, the feature atoms of each type of target are first obtained by the feature atom extraction method, the feature atoms of each type of target are arranged diagonally to form that type's dictionary, and the dictionaries of all types are then arranged side by side to form a comprehensive dictionary; with the feature atoms in this specific arrangement, a coefficient map is obtained from the sparse coefficients, and identification and counting are performed directly on the coefficient data, so the number of targets of each type can be separated by category and multiple targets can be recognized simultaneously;
  • the processing is simple, the computational load is small, and the speed of target recognition in the image is improved.
  • the reconstructed image can be obtained in parallel with target recognition, and the two parallel computations do not affect each other; the image acquisition and target recognition processes are combined into one, with target recognition performed on the acquired signal, which further improves the speed of target recognition in the image, improves data storage efficiency, and reduces hardware consumption.
  • a specific feature atom extraction method is designed around the distinguishability between the target to be recognized and the background environment, and the feature atoms are combined in a reasonable way to form a dictionary; the coefficient matrix calculated by combining this dictionary with the measurement matrix yields a better target recognition effect.
  • FIG. 1 is a schematic flow chart of a target recognition method based on compressed sensing according to a preferred embodiment of the present invention;
  • FIG. 2 is a flow chart showing a feature atom extraction method of a preferred embodiment of the present invention.
  • a preferred embodiment of the present invention discloses a target recognition method based on compressed sensing, which includes the following steps:
  • the feature atom extraction method can use MOD algorithm or K-SVD algorithm to extract characteristic atoms of various targets. Further, the feature atom extraction method can also adopt the method described later.
  • the sparse coefficient θ is subjected to filtering, binarization, and similar processing to obtain the coefficient map.
  • the reconstructed image obtained in step S7 is not distorted compared with an image acquired by the conventional Nyquist method, and can be used in place of the original image for related processing.
  • the present invention adopts a random Gaussian or Bernoulli measurement matrix for compressed data acquisition, so less data than the original image is collected and stored in memory, and several times more image information can be acquired under the same storage budget; the acquired signal is then reconstructed from memory with the OMP algorithm when needed, and because the selected dictionary is composed of atoms generated by a self-designed algorithm in a specific arrangement (within each type's dictionary the feature atoms are arranged diagonally, and the per-type dictionaries are then arranged side by side into the comprehensive dictionary), the calculated sparse coefficients need only slight processing to separate out, by category, how many targets of each type are present.
  • at the same time, multiplying the calculated sparse coefficients directly by the selected dictionary yields a reconstructed image close to the original image in quality, and obtaining the reconstructed image and the classification-based recognition can proceed in parallel without affecting each other.
  • the feature atom extraction method in step S2 specifically includes:
  • step S26: comparing the residual rj of every column of the matrix n with the first threshold ξ1; if some rj < ξ1, step S27 is performed, otherwise the method returns to step S23;
  • in steps S26 and S27, the residual of each column after deletion is calculated and compared with the first threshold to judge whether the column has a large influence; if so, it can be stored in the dictionary Ψp as a feature atom, otherwise the selection is repeated.
  • steps S28 and S29 determine whether all the feature atoms have been selected; if the selection is complete, the obtained dictionary Ψp is orthogonalized and output as the feature atoms, otherwise the selection continues.
  • S210: performing OMP reconstruction with the output feature atoms and the target matrix m to obtain the corresponding coefficients, computing the 2-norm of each row of the coefficients and sorting them by magnitude, and judging, according to the coefficient magnitudes and the error-setting requirement, that the first N rows are valid; the corresponding first N columns of the extracted feature atoms are then output as the final feature atoms.
  • the feature atoms of each type of target are first obtained by the feature atom extraction method, the feature atoms of each type of target are arranged diagonally to form that type's dictionary, and the per-type dictionaries are arranged side by side to form a comprehensive dictionary; with the feature atoms in this specific arrangement, a coefficient map is obtained from the sparse coefficients, and identification and counting are performed directly on the coefficient data, so the number of targets of each type can be separated by category and multiple targets can be recognized simultaneously; in addition, compared with traditional target recognition methods, there is no need to extract regions of interest and perform matching calculations for classification, since simply defining this specific dictionary places different types of targets in different coefficient regions.
  • the processing is simple, the computational load is small, and the speed of target recognition in the image is improved.
  • the invention also draws on principles such as the restricted isometry property in compressed sensing theory, designs a specific feature atom extraction method around the distinguishability between the target to be recognized and the background environment, and combines the feature atoms in a reasonable way to form a dictionary; the coefficient matrix calculated by combining this dictionary with the measurement matrix yields a better target recognition effect, and a reconstructed image of good quality can be obtained at the same time; the recognition counting and the reconstructed image can be computed in parallel without affecting each other, saving time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A target recognition method based on compressed sensing, comprising the steps of: acquiring standard sample images of at least two types of targets (S1); obtaining feature elements of the types of targets by using a feature element extraction method (S2); arranging the feature elements of each type of target diagonally into a dictionary of each type of target, and arranging the dictionaries of the types of targets in parallel into a comprehensive dictionary (S3); performing compressed sampling on an original image to be recognized by using a measurement matrix to obtain a compressed sampled signal (S4); and calculating a sparse coefficient of the original image to be recognized by means of reconstruction in combination with the comprehensive dictionary, the measurement matrix, and the sampled signal (S5); processing the sparse coefficient to obtain a coefficient graph, and recognizing the types of targets in the original image according to the coefficient graph (S6); and multiplying the sparse coefficient by the comprehensive dictionary to obtain an acquired reconstructed image (S7). The target recognition method can increase the speed of target recognition in the image and realize multi-target recognition.

Description

一种基于压缩感知的目标识别方法 A Target Recognition Method Based on Compressed Sensing
技术领域 Technical Field
本发明涉及图像识别技术领域,尤其涉及一种基于压缩感知的目标识别方法。The present invention relates to the field of image recognition technologies, and in particular, to a target recognition method based on compressed sensing.
背景技术 Background Art
在图像中进行目标识别是采用各种程序算法将图像中特定的目标区分出来的过程，并且将区分出的目标作为进行下一步处理提供基础，在信息化网络化的今天，可以广泛应用到许多领域。人眼在进行识别某个特定目标时速度往往较慢，并且对于同类目标进行长时间识别划分，也会造成审美疲劳逐渐产生大量错误识别，而采用机器识别代替人眼识别，利用计算机计算量代替人眼的用脑量可以提高速度与降低能耗，对于图像识别领域而言是非常有利的，例如：对一千幅十字路口的视频帧图片进行识别，要求找出通过的车流量，明显采用机器识别远远有利于人眼识别；同样的，若给机器人加上图像目标识别系统，则相当于给机器人添加了"眼睛"，对于发展人工智能技术也是非常有利的。目前本领域技术人员在目标识别方面做出了很多贡献，人们不仅将图像识别技术应用于人脸识别，物品识别等方面，还将其应用在了手写识别等方面，极大地方便了人们的生活。Target recognition in images is the process of using program algorithms to distinguish specific targets in an image, and the distinguished targets then provide the basis for further processing; in today's networked, information-driven world it can be applied widely in many fields. The human eye is often slow at recognizing a specific target, and recognizing and sorting similar targets over a long period causes fatigue that gradually produces a large number of misidentifications. Replacing human-eye recognition with machine recognition, substituting computer computation for human mental effort, increases speed and reduces energy consumption, which is very advantageous for the field of image recognition. For example, when one thousand video-frame images of an intersection must be examined to determine the passing traffic flow, machine recognition is clearly far superior to human-eye recognition; similarly, adding an image target recognition system to a robot is equivalent to giving the robot "eyes", which is also very beneficial for the development of artificial intelligence technology. Those skilled in the art have made many contributions to target recognition: image recognition technology is applied not only to face recognition and object recognition, but also to handwriting recognition and the like, which greatly facilitates people's lives.
而目前常用的图像目标识别技术一般缺点是耗时较长、速度较慢，原因是传统图像目标识别技术需要以下流程：图像预处理、图像分割、特征提取和特征识别或匹配；即需要对采样得到的图像信息量进行多次处理才能将所需的目标识别出来，并且现有的图像识别将图像采集过程与目标识别过程分开，不利于速度的提高。The image target recognition techniques in common use today are generally time-consuming and slow, because traditional image target recognition requires the following stages: image preprocessing, image segmentation, feature extraction, and feature recognition or matching; that is, the sampled image information must be processed multiple times before the desired target can be recognized, and existing image recognition separates the image acquisition process from the target recognition process, which hinders speed improvement.
以上背景技术内容的公开仅用于辅助理解本发明的构思及技术方案,其并不必然属于本专利申请的现有技术,在没有明确的证据表明上述内容在本专利申请的申请日已经公开的情况下,上述背景技术不应当用于评价本申请的新颖性和创造性。 The above disclosure of the background art is only for assisting in understanding the concepts and technical solutions of the present invention, and it does not necessarily belong to the prior art of the present patent application, and there is no clear evidence that the above content has been disclosed on the filing date of the present patent application. In this case, the above background art should not be used to evaluate the novelty and inventiveness of the present application.
发明内容Summary of the invention
为解决上述技术问题,本发明提出一种基于压缩感知的目标识别方法,能够提高在图像中目标识别的速度,并可以实现多目标识别。In order to solve the above technical problem, the present invention proposes a target recognition method based on compressed sensing, which can improve the speed of target recognition in an image, and can realize multi-target recognition.
为了达到上述目的,本发明采用以下技术方案:In order to achieve the above object, the present invention adopts the following technical solutions:
本发明公开了一种基于压缩感知的目标识别方法,包括以下步骤:The invention discloses a target recognition method based on compressed sensing, which comprises the following steps:
S1:获取至少两类目标的标准样本图;S1: obtaining a standard sample map of at least two types of targets;
S2:根据各类所述目标的标准样本图,采用特征原子提取方法得到各类所述目标的特征原子;S2: obtaining characteristic atoms of each type of the target by using a characteristic atom extraction method according to a standard sample map of each type of the target;
S3:将每类所述目标的各个所述特征原子分别对角排列组成每类所述目标的字典Ψp,并将各类所述目标的字典并列排列组成综合的字典Ψ;S3: arranging each of the characteristic atoms of each type of the target diagonally to form a dictionary Ψ p of each type of the target, and arranging the dictionary of each type of the target side by side to form a comprehensive dictionary Ψ;
S4:采用测量矩阵Φ对待识别的原始图像x进行压缩采样,得到压缩的采样信号y;S4: compressing and sampling the original image x to be identified by using the measurement matrix Φ, to obtain a compressed sampling signal y;
S5:结合综合的字典Ψ、测量矩阵Φ和采样信号y,通过重构计算得到待识别的所述原始图像的稀疏系数θ;S5: combining the integrated dictionary Ψ, the measurement matrix Φ and the sampling signal y, and calculating the sparse coefficient θ of the original image to be identified by reconstruction;
S6：将稀疏系数θ进行处理得到系数图，根据所述系数图中连通域所处的行数以及大小进行分类识别和计数，以实现对所述原始图像中的各类所述目标的识别。S6: processing the sparse coefficient θ to obtain a coefficient map, and performing classification, identification, and counting according to the rows in which the connected domains of the coefficient map are located and their sizes, so as to recognize each type of target in the original image.
优选地,步骤S1具体包括:根据图像或视频流中已知存在的至少两类所述目标,提取出各类所述目标的目标图像,将每类所述目标的多个所述目标图像分别进行加权计算得到每类所述目标的标准样本图。Preferably, the step S1 specifically includes: extracting target images of the various types of the targets according to at least two types of the objects that are known to exist in the image or the video stream, and respectively, the plurality of the target images of each type of the target are respectively A weighted calculation is performed to obtain a standard sample map for each of the stated targets.
优选地，其中提取出各类所述目标的目标图像具体为：采用图像形态学方法或人工识别分割方法来提取出各类所述目标的目标图像。Preferably, extracting the target images of each type of target specifically means: using an image morphology method or a manual recognition-and-segmentation method to extract the target images of each type of target.
优选地,步骤S4还包括将采样信号y作为数据存储在存储器中。Preferably, step S4 further comprises storing the sampled signal y as data in a memory.
优选地，步骤S5具体包括：根据综合的字典Ψ和测量矩阵Φ的乘积得到感知矩阵A=Φ*Ψ，结合采样信号y，通过OMP算法的重构计算公式y=A*θ计算得到所述原始图像的稀疏系数θ。Preferably, step S5 specifically includes: obtaining the sensing matrix A=Φ*Ψ as the product of the comprehensive dictionary Ψ and the measurement matrix Φ, and, in combination with the sampled signal y, calculating the sparse coefficient θ of the original image through the reconstruction formula y=A*θ of the OMP algorithm.
优选地，所述目标识别方法还包括步骤S7：将稀疏系数θ与综合的字典Ψ相乘，得到采集的重构图像x1=Ψ*θ。Preferably, the target recognition method further comprises step S7: multiplying the sparse coefficient θ by the comprehensive dictionary Ψ to obtain the acquired reconstructed image x1=Ψ*θ.
优选地,步骤S2中的特征原子提取方法具体采用MOD算法或K-SVD算法来提取所述目标的特征原子。Preferably, the feature atom extraction method in step S2 specifically uses a MOD algorithm or a K-SVD algorithm to extract feature atoms of the target.
优选地,步骤S2中的特征原子提取方法具体包括:Preferably, the method for extracting characteristic atoms in step S2 specifically includes:
S21:输入各类所述目标的标准样本图的目标矩阵m,并进行初始化:将矩阵n初始化为n=m,循环标识i=0,初始的所述目标的字典Ψp为空集;S21: input a target matrix m of a standard sample map of each type of the target, and perform initialization: initialize the matrix n to n=m, the loop identifier i=0, and the initial dictionary Ψ p of the target is an empty set;
S22:赋值循环标识i=i+1;S22: the assignment cycle identifier i=i+1;
S23：寻找矩阵n的所有列中二范数最大的第k列，作为提取的列元素λi，其中k = arg max_j ||nj|| (j = 1, …, S)，S为矩阵n的列数；S23: finding the kth column with the largest 2-norm among all columns of the matrix n as the extracted column element λi, where k = arg max_j ||nj|| (j = 1, …, S) and S is the number of columns of the matrix n;
S24：计算矩阵n中所有列与第k列的列元素λi的最优比率t，其中tj = (λiᵀnj)/(λiᵀλi)，j是指矩阵n中的第j列；S24: calculating the optimal ratio t of every column of the matrix n to the column element λi of the kth column, where tj = (λiᵀnj)/(λiᵀλi) and j denotes the jth column of the matrix n;
S25:更新矩阵n的残差r,残差r中各列的残差rj的计算公式为rj=nj-λi*tjS25: update the residual r of the matrix n, and calculate the residual r j of each column in the residual r as r j =n j -λi*t j ;
S26:将矩阵n中所有列的残差rj与第一阈值ξ1进行比较,若存在rj<ξ1,执行步骤S27,否则返回步骤S23;S26: comparing the residual r j of all the columns in the matrix n with the first threshold ξ 1 , if there is r j < ξ 1 , step S27 is performed, otherwise returns to step S23;
S27:矩阵n更新为n=r,并删除nj,字典Ψp更新为Ψp=[Ψp,λi];S27: the matrix n is updated to n=r, and n j is deleted, and the dictionary Ψ p is updated to Ψ p =[Ψ p , λ i ];
S28:将更新后的矩阵n的二范数与第二阈值ξ2进行比较,若||n||<ξ2,执行步骤S29,否则返回步骤S22;S28: comparing the second norm of the updated matrix n with the second threshold ξ 2 , if ||n|| < ξ 2 , step S29 is performed, otherwise returns to step S22;
S29:将字典Ψp进行正交化处理,输出特征原子。S29: The dictionary Ψ p is orthogonalized to output a feature atom.
优选地，步骤S2中还包括S210：将输出的特征原子结合目标矩阵m进行OMP重构计算得到相应的系数，将相应的系数每行计算二范数按照大小排列，按照系数大小以及误差设定要求判断前N行有效，则相应判断提取的特征原子对应的前N列作为最终的特征原子输出。Preferably, step S2 further includes S210: performing OMP reconstruction with the output feature atoms and the target matrix m to obtain the corresponding coefficients, computing the 2-norm of each row of the coefficients and sorting them by magnitude, and judging, according to the coefficient magnitudes and the error-setting requirement, that the first N rows are valid; the corresponding first N columns of the extracted feature atoms are then output as the final feature atoms.
优选地,步骤S29中正交化处理具体采用史密斯正交化处理;优选地,ξ1≤ξ2Preferably, the orthogonalization process in step S29 specifically employs a Smith orthogonalization process; preferably, ξ 1 ≤ ξ 2 .
与现有技术相比,本发明的有益效果在于:本发明的目标识别方法中,首先 通过特征原子提取方法得到各类目标的特征原子,将每类目标的各个特征原子分别对角排列组成每类目标的字典,再将各类目标的字典并列排列组成综合的字典,通过将特征原子进行这样的特定排列,结合稀疏系数得到系数图,直接在系数数据中识别计数,即可分门归类地区分出每类目标各有多少个目标,从而可以同时进行多个目标的识别处理;另外相比传统的目标识别方法,无需提取感兴趣区域进行匹配计算来进行分类识别,直接通过设定特定的字典便可获得不同类目标分别处于不同系数区域,即在系数数据中进行识别计数,处理过程简单,计算量小,提高了在图像中目标识别的速度。Compared with the prior art, the beneficial effects of the present invention are: in the target recognition method of the present invention, first The feature atom of each type of target is obtained by the feature atom extraction method, and each feature atom of each class object is arranged diagonally to form a dictionary of each type of target, and then the dictionary of each type of target is arranged side by side to form a comprehensive dictionary, by characterizing the atom Performing such a specific arrangement, combining the sparse coefficients to obtain a coefficient map, and directly identifying the count in the coefficient data, the number of targets of each type of target can be divided into different categories, so that multiple target recognition processes can be performed simultaneously; In addition, compared with the traditional target recognition method, it is not necessary to extract the region of interest for matching calculation to perform classification and recognition, and directly set a specific dictionary to obtain different types of targets in different coefficient regions, that is, to identify and count in the coefficient data. The processing process is simple, the calculation amount is small, and the speed of target recognition in the image is improved.
在进一步的方案中,在进行目标识别的同时还可并行获得重构图像,并行计算互不影响,将图像采集与目标识别过程合二为一,在图像采集信号中进行目标识别,更进一步提高了在图像中目标识别的速度,并且提高了数据的存储效率与降低硬件的消耗。在更进一步的方案中,结合了压缩感知理论中的有限等距性等原则,针对所需识别目标与背景环境可区分性的特点设计了特定的特征原子提取方法,并通过将特征原子经过合理的组合形成字典,用这个字典与测量矩阵结合计算得到的系数矩阵,可以获得更好的目标识别效果。In a further scheme, the reconstructed image can be obtained in parallel while performing target recognition, and the parallel computing does not affect each other, and the image acquisition and the target recognition process are combined into one, and the target recognition is performed in the image acquisition signal, thereby further improving The speed of target recognition in the image, and improve the storage efficiency of the data and reduce the consumption of hardware. In a further scheme, combined with the principle of finite equidistance in the theory of compressed sensing, a specific feature atom extraction method is designed for the distinguishability of the desired recognition target and the background environment, and the characteristic atom is rationalized. The combination forms a dictionary, and the coefficient matrix calculated by combining the dictionary with the measurement matrix can obtain a better target recognition effect.
附图说明 Brief Description of the Drawings
图1是本发明优选实施例的基于压缩感知的目标识别方法的流程示意图;1 is a schematic flow chart of a method for recognizing a target based on a compressed sensing according to a preferred embodiment of the present invention;
图2是本发明优选实施例的特征原子提取方法的流程示意图。2 is a flow chart showing a feature atom extraction method of a preferred embodiment of the present invention.
具体实施方式 Detailed Description of the Embodiments
下面对照附图并结合优选的实施方式对本发明作进一步说明。The invention will now be further described with reference to the drawings in conjunction with the preferred embodiments.
如图1所示,本发明的优选实施例公开了一种基于压缩感知的目标识别方法,包括以下步骤:As shown in FIG. 1, a preferred embodiment of the present invention discloses a target recognition method based on compressed sensing, which includes the following steps:
S1:获取至少两类目标的标准样本图;S1: obtaining a standard sample map of at least two types of targets;
根据一些图像或视频流中已知存在的至少两类目标，通过图像形态学方法提取出目标图像，或者人工识别分割出图像中的目标图像；然后将每类目标的多个目标图像分别进行加权计算得到每类目标的标准样本图。According to at least two types of targets known to exist in some images or video streams, the target images are extracted by an image morphology method, or the target images in the images are segmented out by manual recognition; the multiple target images of each type of target are then weighted and combined to obtain the standard sample image of each type of target.
S2:根据各类目标的标准样本图,采用特征原子提取方法得到各类目标的特 征原子;S2: According to the standard sample map of various targets, the feature atom extraction method is used to obtain the characteristics of various targets. Atom atom
其中特征原子提取方法可以采用MOD算法或K-SVD算法来提取各类目标的特征原子,更进一步地,特征原子提取方法也可采用后文讲述的方法。The feature atom extraction method can use MOD algorithm or K-SVD algorithm to extract characteristic atoms of various targets. Further, the feature atom extraction method can also adopt the method described later.
S3:将每类目标的各个特征原子分别对角排列组成每类目标的字典Ψp,并将各类目标的字典并列排列组成综合的字典Ψ;S3: arranging the respective characteristic atoms of each type of target diagonally to form a dictionary Ψ p of each type of target, and arranging the dictionary of each target side by side to form a comprehensive dictionary Ψ;
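For step S3, the following is a minimal NumPy/SciPy sketch of one possible reading of this construction, in which the "diagonal arrangement" of a type's feature atoms is taken to be a block-diagonal placement, and every type is assumed to contribute the same number of atoms of the same length so that the per-type dictionaries share a common row dimension; the function and variable names are illustrative only:

    import numpy as np
    from scipy.linalg import block_diag

    def build_dictionaries(class_atoms):
        """class_atoms: list over target types; each entry is a list of feature-atom vectors
        of equal length. Returns each per-type dictionary Psi_p (that type's atoms placed
        block-diagonally) and the comprehensive dictionary Psi (per-type dictionaries side by side)."""
        per_class = [block_diag(*[np.asarray(a).reshape(-1, 1) for a in atoms])
                     for atoms in class_atoms]
        Psi = np.hstack(per_class)   # columns of type p occupy their own region of the coefficient vector
        return per_class, Psi

With this layout, the coefficients belonging to type p occupy a contiguous block of rows of θ, which is what lets step S6 classify a detection simply by where its non-zero coefficients fall.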
S4:采用测量矩阵Φ对待识别的原始图像x进行压缩采样,得到压缩的采样信号y,并将采样信号y作为数据存储在存储器中;S4: performing compression sampling on the original image x to be identified by using the measurement matrix Φ, obtaining a compressed sampling signal y, and storing the sampling signal y as data in a memory;
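Step S4 can be illustrated by the sketch below, assuming the original image is vectorized into x of length N and that Φ has M ≪ N rows; the 1/√M scaling is a common normalization and not something the text prescribes:

    import numpy as np

    def measurement_matrix(M, N, kind="gaussian", seed=None):
        """Random Gaussian or Bernoulli (+/-1) measurement matrix Phi of shape (M, N), M << N."""
        rng = np.random.default_rng(seed)
        if kind == "gaussian":
            return rng.standard_normal((M, N)) / np.sqrt(M)
        return rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

    # x: vectorized original image (length N); y is the compressed sample stored in memory
    # Phi = measurement_matrix(M, N)
    # y = Phi @ x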
S5:结合综合的字典Ψ、测量矩阵Φ和采样信号y,通过重构计算得到待识别的原始图像的稀疏系数θ;S5: combining the integrated dictionary Ψ, the measurement matrix Φ and the sampling signal y, and calculating the sparse coefficient θ of the original image to be identified by reconstruction;
具体地，根据综合的字典Ψ和测量矩阵Φ的乘积得到感知矩阵A=Φ*Ψ，结合采样信号y，通过OMP算法的重构计算公式y=A*θ计算得到原始图像的稀疏系数θ。Specifically, the sensing matrix A=Φ*Ψ is obtained as the product of the comprehensive dictionary Ψ and the measurement matrix Φ, and, in combination with the sampled signal y, the sparse coefficient θ of the original image is calculated through the reconstruction formula y=A*θ of the OMP algorithm.
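The reconstruction of step S5 can be illustrated with a generic orthogonal matching pursuit applied to the sensing matrix A = Φ·Ψ; the sparsity level k and the stopping tolerance below are assumed parameters, not values fixed by the text:

    import numpy as np

    def omp(A, y, k, tol=1e-6):
        """Orthogonal Matching Pursuit: recover an (at most) k-sparse theta with y close to A @ theta."""
        residual = y.astype(float).copy()
        support, coef = [], np.zeros(0)
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with the residual
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef          # re-fit on the support, update residual
            if np.linalg.norm(residual) < tol:
                break
        theta = np.zeros(A.shape[1])
        theta[support] = coef
        return theta

    # A = Phi @ Psi
    # theta = omp(A, y, k)   # sparse coefficient of the original image to be recognized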
S6：将稀疏系数θ处理得到系数图，根据系数图中连通域所处的行数以及大小进行分类识别和计数，以实现对原始图像中的各类目标的识别；S6: process the sparse coefficient θ to obtain the coefficient map, and classify, identify, and count according to the rows in which the connected domains of the coefficient map are located and their sizes, so as to recognize each type of target in the original image;
具体地，将稀疏系数θ进行滤波二值化等处理得到系数图。Specifically, the sparse coefficient θ is subjected to filtering, binarization, and similar processing to obtain the coefficient map.
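As an illustration of the counting in step S6, the sketch below assumes the sparse coefficients have already been arranged into a 2-D coefficient map whose row bands correspond to the per-type dictionaries; the reshaping, the threshold, and the row ranges are illustrative assumptions:

    import numpy as np
    from scipy import ndimage

    def classify_and_count(theta_map, class_row_ranges, thresh):
        """theta_map: 2-D coefficient map; class_row_ranges: {type_name: (row_start, row_end)}."""
        binary = np.abs(theta_map) > thresh                 # filtering / binarization
        counts = {}
        for name, (r0, r1) in class_row_ranges.items():
            _, num = ndimage.label(binary[r0:r1])           # connected-domain analysis within the band
            counts[name] = num                              # one connected domain per detected target
        return counts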
在进一步的实施例中，步骤S5中还包括将原始图像的稀疏系数θ复制一份；步骤S6中为将其中一份的稀疏系数θ进行处理得到系数图；该目标识别方法还包括步骤S7：将另外一份稀疏系数θ与综合的字典Ψ相乘，得到采集的重构图像x1=Ψ*θ，其中步骤S7与步骤S6可以同时进行。通过步骤S7得到的重构图像与传统Nyquist方法采集得到的图像相比不失真，可以替代原始图像进行相关处理。In a further embodiment, step S5 further includes making a copy of the sparse coefficient θ of the original image; in step S6, one copy of the sparse coefficient θ is processed to obtain the coefficient map; the target recognition method further includes step S7: multiplying the other copy of the sparse coefficient θ by the comprehensive dictionary Ψ to obtain the acquired reconstructed image x1=Ψ*θ, where step S7 and step S6 can be performed simultaneously. The reconstructed image obtained in step S7 is not distorted compared with an image acquired by the conventional Nyquist method, and can be used in place of the original image for related processing.
为了提高目标识别速度与降低图像采集的数据量，本发明采用随机高斯或伯努利测量矩阵进行数据压缩采集，则可收集到相比原始图像较少的数据存储到存储器中，这样在同样的存储量下可以采集多几倍的图像信息；然后，将采集到的信号在需要时从存储器中用OMP算法重构出来，由于所选择的字典是自行设计的算法所生成的原子经过特定排列组成的（每类目标的字典中各个特征原子呈对角线排列方式，然后每类目标的字典并列排列组成综合的字典），因此计算得到的稀疏系数稍经处理便分门归类地区分出每类目标各有多少个目标，同时计算得到的稀疏系数直接与所选择的字典相乘便可得到与原始图像质量接近的重构图像，并且获得重构图与归类识别可以进行并行处理互不影响。In order to increase the target recognition speed and reduce the amount of data acquired for the image, the present invention adopts a random Gaussian or Bernoulli measurement matrix for compressed data acquisition, so less data than the original image is collected and stored in memory, and several times more image information can be acquired under the same storage budget; the acquired signal is then reconstructed from memory with the OMP algorithm when needed. Because the selected dictionary is composed of atoms generated by a self-designed algorithm in a specific arrangement (within each type's dictionary the feature atoms are arranged diagonally, and the per-type dictionaries are then arranged side by side into the comprehensive dictionary), the calculated sparse coefficients need only slight processing to separate out, by category, how many targets of each type are present; at the same time, multiplying the calculated sparse coefficients directly by the selected dictionary yields a reconstructed image close to the original image in quality, and obtaining the reconstructed image and the classification-based recognition can proceed in parallel without affecting each other.
在更进一步的实施例中,步骤S2中的的特征原子提取方法具体包括:In a further embodiment, the feature atom extraction method in step S2 specifically includes:
S21:输入各类目标的标准样本图的目标矩阵m,并进行初始化:将矩阵n初始化为n=m,循环标识i=0,初始的目标的字典Ψp为空集;S21: input a target matrix m of a standard sample map of each type of target, and initialize: initialize the matrix n to n=m, the loop identifier i=0, and the initial target dictionary Ψ p is an empty set;
S22:赋值循环标识i=i+1,该步骤是记载目标的字典Ψp已经包含了多少列;S22: the assignment loop identifier i=i+1, the step is to record the target dictionary Ψ p has already included how many columns;
S23：寻找矩阵n的所有列中二范数最大的第k列，作为提取的列元素λi，其中k = arg max_j ||nj|| (j = 1, …, S)，S为矩阵n的列数，该步骤是获取矩阵n中影响因素最大的列；S23: find the kth column with the largest 2-norm among all columns of the matrix n as the extracted column element λi, where k = arg max_j ||nj|| (j = 1, …, S) and S is the number of columns of the matrix n; this step obtains the column of n with the greatest influence;
S24：计算矩阵n中所有列与第k列的列元素λi的最优比率tj，其中tj = (λiᵀnj)/(λiᵀλi)，j是指矩阵n中的第j列，该步骤是计算影响最大的列与矩阵n中每列的相关系数；S24: calculate the optimal ratio tj of every column of the matrix n to the column element λi of the kth column, where tj = (λiᵀnj)/(λiᵀλi) and j denotes the jth column of the matrix n; this step computes the correlation coefficient between the most influential column and every column of n;
S25:更新矩阵n的残差r,其中残差r中各列的残差rj的计算公式为rj=nji*tj,该步骤是将矩阵n中所有列删去影响最大的列,使得每列得到的结果最小(二范数);S25: Update the residual r of the matrix n, wherein the residual r j of each column in the residual r is calculated as r j =n ji *t j , and the step is to delete all the columns in the matrix n. The largest column, so that the results obtained per column are the smallest (two norms);
S26:将矩阵n中所有列的残差rj与第一阈值ξ1进行比较,若存在rj<ξ1,执行步骤S27,否则返回步骤S23;S26: comparing the residual r j of all the columns in the matrix n with the first threshold ξ 1 , if there is r j < ξ 1 , step S27 is performed, otherwise returns to step S23;
S27:矩阵n更新为n=r,并删除nj,字典Ψp更新为Ψp=[Ψp,λi];S27: the matrix n is updated to n=r, and n j is deleted, and the dictionary Ψ p is updated to Ψ p =[Ψ p , λ i ];
步骤S26和步骤S27中是通过计算每列删去后的残差与第一阈值作比较，判断是否是影响较大的列，如果是则可作为一个特征原子存在字典Ψp中，否则重新挑选。In steps S26 and S27, the residual of each column after deletion is calculated and compared with the first threshold to judge whether the column has a large influence; if so, it can be stored in the dictionary Ψp as a feature atom, otherwise the selection is repeated.
S28:将更新后的矩阵n的二范数与第二阈值ξ2进行比较,若||n||<ξ2,执行步骤S29,否则返回步骤S22,其中ξ1≤ξ2S28: comparing the second norm of the updated matrix n with the second threshold ξ 2 , if ||n||<ξ 2 , performing step S29, otherwise returning to step S22, where ξ 1 ≤ ξ 2 ;
S29：将字典Ψp进行正交化处理（可以采用史密斯正交化处理），输出特征原子；S29: orthogonalize the dictionary Ψp (Gram-Schmidt orthogonalization may be used) and output the feature atoms;
步骤S28和步骤S29是判断是否挑选完所有特征原子,若已经挑选完,则将得到的字典Ψp正交作为特征原子输出,否则继续挑选。Steps S28 and S29 are to determine whether all the feature atoms have been selected. If the selection has been completed, the obtained dictionary Ψ p is orthogonally output as the feature atom, otherwise the selection is continued.
S210：将输出的特征原子结合目标矩阵m进行OMP重构计算得到相应的系数，将相应的系数每行计算二范数按照大小排列，按照系数大小以及误差设定要求判断前N行有效，则相应判断提取的特征原子对应的前N列作为最终的特征原子输出。S210: perform OMP reconstruction with the output feature atoms and the target matrix m to obtain the corresponding coefficients, compute the 2-norm of each row of the coefficients and sort them by magnitude, and judge, according to the coefficient magnitudes and the error-setting requirement, that the first N rows are valid; the corresponding first N columns of the extracted feature atoms are then output as the final feature atoms.
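The loop of steps S21 to S29 can be sketched as follows; the form of the optimal ratio tj follows the least-squares reading of step S25, while the stopping rule when no residual falls below ξ1, the max_atoms guard, and the Gram-Schmidt routine are assumptions made so that the sketch is runnable and terminating:

    import numpy as np

    def gram_schmidt(atoms):
        """S29: orthogonalize (and normalize) the selected atoms."""
        Q = []
        for v in atoms:
            v = np.asarray(v, dtype=float).copy()
            for q in Q:
                v -= (q @ v) * q
            norm = np.linalg.norm(v)
            if norm > 1e-12:
                Q.append(v / norm)
        return np.column_stack(Q)

    def extract_feature_atoms(m, xi1, xi2, max_atoms=100):
        """m: target matrix whose columns are vectorized standard samples; xi1 <= xi2 are the thresholds."""
        n = np.asarray(m, dtype=float).copy()               # S21: working matrix n = m, dictionary starts empty
        atoms = []
        # S22/S28: keep selecting atoms while the remaining energy of n is at least xi2
        while n.shape[1] > 0 and np.linalg.norm(n) >= xi2 and len(atoms) < max_atoms:
            k = int(np.argmax(np.linalg.norm(n, axis=0)))   # S23: column with the largest 2-norm
            lam = n[:, k].copy()
            t = (lam @ n) / (lam @ lam)                     # S24: least-squares ratio t_j for every column
            r = n - np.outer(lam, t)                        # S25: residual r_j = n_j - lam * t_j
            col_res = np.linalg.norm(r, axis=0)
            if np.any(col_res < xi1):                       # S26: lam explains at least one column well
                atoms.append(lam)                           # S27: keep lam as a feature atom ...
                n = r[:, col_res >= xi1]                    #      ... and drop the columns it explains
            else:
                break   # S26's "return to S23" would re-pick the same column; stop instead (assumption)
        return gram_schmidt(atoms) if atoms else np.empty((n.shape[0], 0))   # S29: orthogonalized atoms

Step S210, which ranks the rows of an OMP fit against m by their 2-norms and keeps only the first N corresponding atoms, would then be applied to the returned dictionary; it is omitted here for brevity.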
本发明的目标识别方法中，首先通过特征原子提取方法得到各类目标的特征原子，将每类目标的各个特征原子分别对角排列组成每类目标的字典，再将各类目标的字典并列排列组成综合的字典，通过将特征原子进行这样的特定排列，结合稀疏系数得到系数图，直接在系数数据中识别计数，即可分门归类地区分出每类目标各有多少个目标，从而可以同时进行多个目标的识别处理；另外相比传统的目标识别方法，无需提取感兴趣区域进行匹配计算来进行分类识别，直接通过设定特定的字典便可获得不同类目标分别处于不同系数区域，处理过程简单，计算量小，提高了在图像中目标识别的速度。其中本发明还结合了压缩感知理论中的有限等距性等原则，针对所需识别目标与背景环境可区分性的特点设计了特定的特征原子提取方法，并通过将特征原子经过合理的组合形成字典，用这个字典与测量矩阵结合计算得到的系数矩阵，可以获得更好的目标识别效果，并可以同时获得重构质量较好的重构图，识别计数与获得重构图像可以并行计算且互不影响，节约时间。In the target recognition method of the present invention, the feature atoms of each type of target are first obtained by the feature atom extraction method, the feature atoms of each type of target are arranged diagonally to form that type's dictionary, and the dictionaries of all types are arranged side by side to form a comprehensive dictionary. With the feature atoms in this specific arrangement, a coefficient map is obtained from the sparse coefficients, and identification and counting are performed directly on the coefficient data, so the number of targets of each type can be separated by category and multiple targets can be recognized simultaneously. In addition, compared with traditional target recognition methods, there is no need to extract regions of interest and perform matching calculations for classification: simply defining this specific dictionary places different types of targets in different coefficient regions, the processing is simple, the computational load is small, and the speed of target recognition in the image is improved. The invention also draws on principles such as the restricted isometry property in compressed sensing theory, designs a specific feature atom extraction method around the distinguishability between the target to be recognized and the background environment, and combines the feature atoms in a reasonable way to form a dictionary; the coefficient matrix calculated by combining this dictionary with the measurement matrix yields a better target recognition effect, and a reconstructed image of good quality can be obtained at the same time. The recognition counting and the reconstructed image can be computed in parallel without affecting each other, saving time.
以上内容是结合具体的优选实施方式对本发明所作的进一步详细说明，不能认定本发明的具体实施只局限于这些说明。对于本发明所属技术领域的技术人员来说，在不脱离本发明构思的前提下，还可以做出若干等同替代或明显变型，而且性能或用途相同，都应当视为属于本发明的保护范围。The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention should not be regarded as limited to these descriptions. Those skilled in the art to which the present invention belongs may make several equivalent substitutions or obvious modifications without departing from the concept of the present invention, and, as long as the performance or use is the same, they should all be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. 一种基于压缩感知的目标识别方法,其特征在于,包括以下步骤:A method for object recognition based on compressed sensing, characterized in that it comprises the following steps:
    S1:获取至少两类目标的标准样本图;S1: obtaining a standard sample map of at least two types of targets;
    S2:根据各类所述目标的标准样本图,采用特征原子提取方法得到各类所述目标的特征原子;S2: obtaining characteristic atoms of each type of the target by using a characteristic atom extraction method according to a standard sample map of each type of the target;
    S3:将每类所述目标的各个所述特征原子分别对角排列组成每类所述目标的字典Ψp,并将各类所述目标的字典并列排列组成综合的字典Ψ;S3: arranging each of the characteristic atoms of each type of the target diagonally to form a dictionary Ψ p of each type of the target, and arranging the dictionary of each type of the target side by side to form a comprehensive dictionary Ψ;
    S4:采用测量矩阵Φ对待识别的原始图像x进行压缩采样,得到压缩的采样信号y;S4: compressing and sampling the original image x to be identified by using the measurement matrix Φ, to obtain a compressed sampling signal y;
    S5:结合综合的字典Ψ、测量矩阵Φ和采样信号y,通过重构计算得到待识别的所述原始图像的稀疏系数θ;S5: combining the integrated dictionary Ψ, the measurement matrix Φ and the sampling signal y, and calculating the sparse coefficient θ of the original image to be identified by reconstruction;
    S6：将稀疏系数θ进行处理得到系数图，根据所述系数图中连通域所处的行数以及大小进行分类识别和计数，以实现对所述原始图像中的各类所述目标的识别。S6: processing the sparse coefficient θ to obtain a coefficient map, and performing classification, identification, and counting according to the rows in which the connected domains of the coefficient map are located and their sizes, so as to recognize each type of the target in the original image.
  2. 根据权利要求1所述的目标识别方法，其特征在于，步骤S1具体包括：根据图像或视频流中已知存在的至少两类所述目标，提取出各类所述目标的目标图像，将每类所述目标的多个所述目标图像分别进行加权计算得到每类所述目标的标准样本图。The target recognition method according to claim 1, wherein step S1 specifically comprises: extracting target images of each type of target according to at least two types of targets known to exist in an image or video stream, and performing a weighted calculation on the multiple target images of each type of target to obtain a standard sample image of each type of target.
  3. 根据权利要求2所述的目标识别方法,其特征在于,其中提取出各类所述目标的目标图像具体为:采用图像形态学方法或人工识别分割方法来提取出各类所述目标的目标图像。The object recognition method according to claim 2, wherein the target image of each of the objects is extracted by using an image morphology method or a manual recognition segmentation method to extract target images of various types of the target. .
  4. 根据权利要求1所述的目标识别方法,其特征在于,步骤S4还包括将采样信号y作为数据存储在存储器中。The object recognition method according to claim 1, wherein the step S4 further comprises storing the sampling signal y as data in the memory.
  5. 根据权利要求1所述的目标识别方法，其特征在于，步骤S5具体包括：根据综合的字典Ψ和测量矩阵Φ的乘积得到感知矩阵A=Φ*Ψ，结合采样信号y，通过OMP算法的重构计算公式y=A*θ计算得到所述原始图像的稀疏系数θ。The target recognition method according to claim 1, wherein step S5 specifically comprises: obtaining the sensing matrix A=Φ*Ψ as the product of the comprehensive dictionary Ψ and the measurement matrix Φ, and, in combination with the sampled signal y, calculating the sparse coefficient θ of the original image through the reconstruction formula y=A*θ of the OMP algorithm.
  6. 根据权利要求1所述的目标识别方法，其特征在于，还包括步骤S7：将稀疏系数θ与综合的字典Ψ相乘，得到采集的重构图像x1=Ψ*θ。The target recognition method according to claim 1, further comprising step S7: multiplying the sparse coefficient θ by the comprehensive dictionary Ψ to obtain the acquired reconstructed image x1=Ψ*θ.
  7. 根据权利要求1至6任一项所述的目标识别方法,其特征在于,步骤S2中的特征原子提取方法具体采用MOD算法或K-SVD算法来提取所述目标的特征原子。The object recognition method according to any one of claims 1 to 6, wherein the feature atom extraction method in step S2 specifically uses a MOD algorithm or a K-SVD algorithm to extract feature atoms of the target.
  8. 根据权利要求1至6任一项所述的目标识别方法,其特征在于,步骤S2中的特征原子提取方法具体包括:The object recognition method according to any one of claims 1 to 6, wherein the feature atom extraction method in step S2 specifically includes:
    S21:输入各类所述目标的标准样本图的目标矩阵m,并进行初始化:将矩阵n初始化为n=m,循环标识i=0,初始的所述目标的字典Ψp为空集;S21: input a target matrix m of a standard sample map of each type of the target, and perform initialization: initialize the matrix n to n=m, the loop identifier i=0, and the initial dictionary Ψ p of the target is an empty set;
    S22:赋值循环标识i=i+1;S22: the assignment cycle identifier i=i+1;
    S23：寻找矩阵n的所有列中二范数最大的第k列，作为提取的列元素λi，其中k = arg max_j ||nj|| (j = 1, …, S)，S为矩阵n的列数；S23: finding the kth column with the largest 2-norm among all columns of the matrix n as the extracted column element λi, where k = arg max_j ||nj|| (j = 1, …, S) and S is the number of columns of the matrix n;
    S24：计算矩阵n中所有列与第k列的列元素λi的最优比率t，其中tj = (λiᵀnj)/(λiᵀλi)，j是指矩阵n中的第j列；S24: calculating the optimal ratio t of every column of the matrix n to the column element λi of the kth column, where tj = (λiᵀnj)/(λiᵀλi) and j denotes the jth column of the matrix n;
    S25:更新矩阵n的残差r,残差r中各列的残差rj的计算公式为rj=nji*tjS25: update the residual r of the matrix n, and calculate the residual r j of each column in the residual r as r j =n ji *t j ;
    S26:将矩阵n中所有列的残差rj与第一阈值ξ1进行比较,若存在rj<ξ1,执行步骤S27,否则返回步骤S23;S26: comparing the residual r j of all the columns in the matrix n with the first threshold ξ 1 , if there is r j < ξ 1 , step S27 is performed, otherwise returns to step S23;
    S27:矩阵n更新为n=r,并删除nj,字典Ψp更新为Ψp=[Ψp,λi];S27: the matrix n is updated to n=r, and n j is deleted, and the dictionary Ψ p is updated to Ψ p =[Ψ p , λ i ];
    S28:将更新后的矩阵n的二范数与第二阈值ξ2进行比较,若||n||<ξ2,执行步骤S29,否则返回步骤S22;S28: comparing the second norm of the updated matrix n with the second threshold ξ 2 , if ||n|| < ξ 2 , step S29 is performed, otherwise returns to step S22;
    S29:将字典Ψp进行正交化处理,输出特征原子。S29: The dictionary Ψ p is orthogonalized to output a feature atom.
  9. 根据权利要求8所述的目标识别方法，其特征在于，步骤S2中还包括S210：将输出的特征原子结合目标矩阵m进行OMP重构计算得到相应的系数，将相应的系数每行计算二范数按照大小排列，按照系数大小以及误差设定要求判断前N行有效，则相应判断提取的特征原子对应的前N列作为最终的特征原子输出。The target recognition method according to claim 8, wherein step S2 further comprises S210: performing OMP reconstruction with the output feature atoms and the target matrix m to obtain the corresponding coefficients, computing the 2-norm of each row of the coefficients and sorting them by magnitude, and judging, according to the coefficient magnitudes and the error-setting requirement, that the first N rows are valid; the corresponding first N columns of the extracted feature atoms are then output as the final feature atoms.
  10. 根据权利要求8所述的目标识别方法，其特征在于，步骤S29中正交化处理具体采用史密斯正交化处理；优选地，ξ1≤ξ2。The target recognition method according to claim 8, wherein the orthogonalization in step S29 specifically employs Gram-Schmidt orthogonalization; preferably, ξ1≤ξ2.
PCT/CN2017/098652 2017-07-06 2017-08-23 Target recognition method based on compressed sensing WO2019006835A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710548175.X 2017-07-06
CN201710548175.XA CN107273908B (en) 2017-07-06 2017-07-06 A kind of compressed sensing based target identification method

Publications (1)

Publication Number Publication Date
WO2019006835A1 true WO2019006835A1 (en) 2019-01-10

Family

ID=60073082

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/098652 WO2019006835A1 (en) 2017-07-06 2017-08-23 Target recognition method based on compressed sensing

Country Status (2)

Country Link
CN (1) CN107273908B (en)
WO (1) WO2019006835A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796253A (en) * 2020-09-01 2020-10-20 西安电子科技大学 Radar target constant false alarm detection method based on sparse signal processing
CN112508089A (en) * 2020-12-03 2021-03-16 国网山西省电力公司晋城供电公司 Self-adaptive compressed sensing method for partial discharge signal compression transmission
CN113093164A (en) * 2021-03-31 2021-07-09 西安电子科技大学 Translation-invariant and noise-robust radar image target identification method
CN113670435A (en) * 2021-08-20 2021-11-19 西安石油大学 Underground vibration measuring device based on compressed sensing technology and measuring method thereof
CN113922823A (en) * 2021-10-29 2022-01-11 电子科技大学 Social media information propagation graph data compression method based on constraint sparse representation
CN116541470A (en) * 2023-07-07 2023-08-04 深圳创维智慧科技有限公司 Synchronization method, device, equipment and medium of database read-only standby library
CN117041359A (en) * 2023-10-10 2023-11-10 北京安视华业科技有限责任公司 Efficient compression transmission method for information data

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280818B (en) * 2018-01-19 2020-04-03 清华大学深圳研究生院 Rapid target imaging method and system based on compressed sensing
CN109033963B (en) * 2018-06-22 2021-07-06 王连圭 Multi-camera video cross-region human motion posture target recognition method
CN110472576A (en) * 2019-08-15 2019-11-19 西安邮电大学 A kind of method and device for realizing mobile human body Activity recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844635A (en) * 2016-03-21 2016-08-10 北京工业大学 Sparse representation depth image reconstruction algorithm based on structure dictionary
CN106157232A (en) * 2016-06-30 2016-11-23 广东技术师范学院 A kind of general steganalysis method of digital picture characteristic perception
CN106557784A (en) * 2016-11-23 2017-04-05 上海航天控制技术研究所 Fast target recognition methodss and system based on compressed sensing
CN106815806A (en) * 2016-12-20 2017-06-09 浙江工业大学 Single image SR method for reconstructing based on compressed sensing and SVR

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10524686B2 (en) * 2014-12-01 2020-01-07 The Regents Of The University Of California Diffusion reproducibility evaluation and measurement (DREAM)-MRI imaging methods
CN104574450B (en) * 2014-12-31 2017-06-16 南京邮电大学 A kind of image reconstructing method based on compressed sensing
CN105631478A (en) * 2015-12-25 2016-06-01 天津科技大学 Plant classification method based on sparse expression dictionary learning
CN106203374B (en) * 2016-07-18 2018-08-24 清华大学深圳研究生院 A kind of characteristic recognition method and its system based on compressed sensing
CN106203453B (en) * 2016-07-18 2019-05-28 清华大学深圳研究生院 A kind of compressed sensing based biological and abiotic target identification method and its system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844635A (en) * 2016-03-21 2016-08-10 北京工业大学 Sparse representation depth image reconstruction algorithm based on structure dictionary
CN106157232A (en) * 2016-06-30 2016-11-23 广东技术师范学院 A kind of general steganalysis method of digital picture characteristic perception
CN106557784A (en) * 2016-11-23 2017-04-05 上海航天控制技术研究所 Fast target recognition methodss and system based on compressed sensing
CN106815806A (en) * 2016-12-20 2017-06-09 浙江工业大学 Single image SR method for reconstructing based on compressed sensing and SVR

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796253B (en) * 2020-09-01 2022-12-02 西安电子科技大学 Radar target constant false alarm detection method based on sparse signal processing
CN111796253A (en) * 2020-09-01 2020-10-20 西安电子科技大学 Radar target constant false alarm detection method based on sparse signal processing
CN112508089A (en) * 2020-12-03 2021-03-16 国网山西省电力公司晋城供电公司 Self-adaptive compressed sensing method for partial discharge signal compression transmission
CN112508089B (en) * 2020-12-03 2023-10-31 国网山西省电力公司晋城供电公司 Self-adaptive compressed sensing method for partial discharge signal compressed transmission
CN113093164A (en) * 2021-03-31 2021-07-09 西安电子科技大学 Translation-invariant and noise-robust radar image target identification method
CN113670435A (en) * 2021-08-20 2021-11-19 西安石油大学 Underground vibration measuring device based on compressed sensing technology and measuring method thereof
CN113670435B (en) * 2021-08-20 2023-06-23 西安石油大学 Underground vibration measuring device and method based on compressed sensing technology
CN113922823B (en) * 2021-10-29 2023-04-21 电子科技大学 Social media information propagation graph data compression method based on constraint sparse representation
CN113922823A (en) * 2021-10-29 2022-01-11 电子科技大学 Social media information propagation graph data compression method based on constraint sparse representation
CN116541470A (en) * 2023-07-07 2023-08-04 深圳创维智慧科技有限公司 Synchronization method, device, equipment and medium of database read-only standby library
CN116541470B (en) * 2023-07-07 2024-02-13 深圳创维智慧科技有限公司 Synchronization method, device, equipment and medium of database read-only standby library
CN117041359A (en) * 2023-10-10 2023-11-10 北京安视华业科技有限责任公司 Efficient compression transmission method for information data
CN117041359B (en) * 2023-10-10 2023-12-22 北京安视华业科技有限责任公司 Efficient compression transmission method for information data

Also Published As

Publication number Publication date
CN107273908A (en) 2017-10-20
CN107273908B (en) 2019-09-06

Similar Documents

Publication Publication Date Title
WO2019006835A1 (en) Target recognition method based on compressed sensing
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN108446601B (en) Face recognition method based on dynamic and static feature fusion
CN107784293B (en) A kind of Human bodys&#39; response method classified based on global characteristics and rarefaction representation
WO2020125216A1 (en) Pedestrian re-identification method, device, electronic device and computer-readable storage medium
CN108228915B (en) Video retrieval method based on deep learning
WO2020147257A1 (en) Face recognition method and apparatus
WO2019134327A1 (en) Facial expression recognition feature extraction method employing edge detection and sift
Paisitkriangkrai et al. Strengthening the effectiveness of pedestrian detection with spatially pooled features
CN107944427B (en) Dynamic face recognition method and computer readable storage medium
US11263435B2 (en) Method for recognizing face from monitoring video data
CN109145745B (en) Face recognition method under shielding condition
CN107092884B (en) Rapid coarse-fine cascade pedestrian detection method
CN111126307B (en) Small sample face recognition method combining sparse representation neural network
CN104715266B (en) The image characteristic extracting method being combined based on SRC DP with LDA
CN108710836B (en) Lip detection and reading method based on cascade feature extraction
Song et al. Feature extraction and target recognition of moving image sequences
CN109002771A (en) A kind of Classifying Method in Remote Sensing Image based on recurrent neural network
CN113723188B (en) Dressing uniform personnel identity verification method combining human face and gait characteristics
CN109359530B (en) Intelligent video monitoring method and device
CN110163489B (en) Method for evaluating rehabilitation exercise effect
CN116311403A (en) Finger vein recognition method of lightweight convolutional neural network based on FECAGhostNet
CN115578778A (en) Human face image feature extraction method based on trace transformation and LBP (local binary pattern)
CN114359786A (en) Lip language identification method based on improved space-time convolutional network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17917087

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17917087

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/08/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17917087

Country of ref document: EP

Kind code of ref document: A1