CN101777059A - Method for extracting landmark scene abstract - Google Patents
- Publication number
- CN101777059A CN101777059A CN200910242751A CN200910242751A CN101777059A CN 101777059 A CN101777059 A CN 101777059A CN 200910242751 A CN200910242751 A CN 200910242751A CN 200910242751 A CN200910242751 A CN 200910242751A CN 101777059 A CN101777059 A CN 101777059A
- Authority
- CN
- China
- Prior art keywords
- image
- landmark
- conspicuousness
- landmark scene
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method for extracting a landmark scene abstract, comprising the following steps: for each landmark scene image, extracting color moments and wavelet texture as its global features, and extracting SIFT descriptors as its local features; performing initial clustering of the landmark scene image library using the two-dimensional global features; for each cluster, selecting several representative images closest to the cluster center and matching them pairwise using the local features for saliency-based geometric verification; after geometric verification, extracting an iconic image from each cluster; and, through inter-cluster geometric verification, performing geometric matching and screening on the iconic images with respect to the user's regions of interest in the selected landmark scene images, grouping iconic images with identical or similar viewpoints into one class, thereby merging similar clusters and extracting the landmark scene abstract.
Description
Technical field
The present invention relates to image content analysis methods, saliency detection methods, network multimedia analysis methods, and the like.
Background technology
In recent years, with the rapid development of science and technology, and in particular the progress and popularization of multimedia and computer network technology, humanity has entered an era of advanced informatization. Multimedia information of all kinds on the network keeps expanding, and digital images in particular are growing at an astonishing rate. Faced with massive image collections, how to manage and use them effectively has become a problem demanding a prompt solution.
Given a specific query, existing image retrieval technology can easily obtain a set of images associated with the query from a large image library. However, because the number of query results is often still quite large, how to present the results to the user in a brief and concise way, so that the user can grasp the overall picture of the results without traversing all images, is a current research topic of important practical value. The extraction of iconic images addresses this problem well. An iconic image set is a subset of the original image collection that is much smaller in scale yet summarizes the full content of the original collection at a high level. By browsing a limited number of iconic images, the user can easily understand the overall picture of the original collection.
Existing methods for extracting iconic images from massive image libraries fall roughly into three categories. (1) Methods that consider only two-dimensional image features, combined with feature matching, clustering, and the like. However, 2D features provide no effective geometric constraint, so these methods are often ineffective against the occlusion, viewpoint change, and illumination variation common in landmark scene images. (2) In recent years, researchers have proposed modeling landmark scenes with three-dimensional structure information and obtained good results. However, because such methods must match the complex 3D structure of every pair of images in the database, their computational complexity is very high, so they are applicable only to small image libraries. (3) To generalize the above methods to large-scale image libraries and extract iconic images efficiently from massive image collections, combining 2D and 3D image information has proved a fairly ideal choice. The main idea is first to perform initial clustering with 2D image features, and then to progressively refine the clusters under 3D structural constraints. The limitation of these methods is that they select iconic images only for their representativeness within the image cluster, without considering the perception of the user browsing the images or the properties of the images themselves, so the selected iconic images often deviate from the user's browsing needs.
Summary of the invention
To solve the problems of the prior art, the objective of the invention is to present query results to the user in a brief and concise way, allowing the user to grasp the overall picture of the results without traversing all images. To this end, the invention provides a method for extracting a landmark scene abstract.
To achieve this purpose, the technical scheme of the proposed method for extracting a landmark scene abstract is as follows:
Step 1: For each landmark scene image, extract color moments and wavelet texture as its global features, and extract scale-invariant feature transform (SIFT) descriptors as its local features;
Step 2: Perform initial clustering of the landmark scene image library using the two-dimensional global features of the images;
Step 3: From each cluster, select several representative images closest to the cluster center, and use the local features to perform pairwise saliency-based geometric verification on the representative images;
Step 4: After geometric verification, extract an iconic image from each cluster;
Step 5: Using inter-cluster geometric verification, perform geometric matching and screening on the iconic images with respect to the user's regions of interest in the selected landmark scene images, and group iconic images with identical or similar viewpoints into one class, thereby merging similar clusters and extracting the landmark scene abstract.
The saliency-based geometric verification steps are as follows:
Step 21: Apply a saliency analysis method to each representative image to compute its contrast distribution or information density distribution, obtain the saliency value at each position of the representative image, and thus obtain a saliency map;
Step 22: Use the saliency maps as weighting templates to perform 3D geometric matching between two representative images, taking the saliency values at the matched points as the weights of the matching score, thereby realizing saliency-based geometric verification.
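The contrast-based saliency map of Step 21 can be illustrated with a toy sketch. The deviation-from-mean measure below is only an illustrative assumption; the patent leaves the concrete contrast or information-density measure open:

```python
import numpy as np

def contrast_saliency_map(gray):
    """Toy contrast-based saliency map (Step 21).

    Assigns each pixel its absolute deviation from the global mean
    intensity, normalized to [0, 1]. Real systems use richer contrast
    or information-density measures; this only illustrates how a
    per-position saliency value yields a saliency map.
    """
    gray = gray.astype(np.float64)
    sal = np.abs(gray - gray.mean())
    rng = sal.max()
    return sal / rng if rng > 0 else np.zeros_like(sal)
```

A bright object on a dark background receives the highest saliency values, which later serve as matching weights in Step 22.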
The steps for obtaining the iconic image are as follows:
Step 31: Within each cluster, compute for each representative image the sum of its saliency-weighted matching scores with all other representative images;
Step 32: Select the several images with the highest summed saliency-weighted matching scores and perform image quality assessment on them;
Step 33: Choose the image with the highest image quality assessment score as the iconic image of its cluster.
The image quality assessment steps are as follows:
Step 41: Train an image quality classification model on a training set;
Step 42: Use the trained image quality classification model to perform quality classification on each representative image to be assessed; the classification result serves as the image quality assessment score.
Beneficial effects of the invention: iconic image extraction is an effective means of summarizing large-scale image libraries. Existing methods focus only on the representativeness of the 2D and 3D image features within the image collection, ignoring the browser's subjective perception and the properties of the image itself. The invention identifies saliency and high quality as the two key elements an iconic image must possess, and accordingly introduces image saliency analysis and quality assessment into the iconic image extraction process. We use the saliency information of an image as the weight of feature matching, concentrating matching between images in the regions the user attends to, thereby realizing saliency-based matching. We obtain an image quality classification model from a training image set to evaluate image quality, and then select iconic images by combining the representativeness and quality factors of the images. Iconic images obtained by the method of the invention agree better with the browsing user's subjective perception, so the final generated landmark scene abstract is significantly improved.
Description of drawings
Fig. 1 is the overall flowchart of the method for extracting a landmark scene abstract of the present invention.
Fig. 2 is a schematic diagram of the method for extracting a landmark scene abstract of the present invention.
Fig. 3 shows saliency-based image matching in the present invention: (1) computing the saliency map; (2) saliency-weighted matching.
Fig. 4 is the flowchart of image quality assessment in the present invention.
Embodiment
The detailed problems involved in the technical solution of the present invention are described below with reference to the accompanying drawings. It should be noted that the described embodiments are intended only to facilitate understanding of the invention and do not limit it in any way.
An iconic image set is a subset of the original image collection that is much smaller in scale yet summarizes the full content of the original collection at a high level. By browsing a limited number of iconic images, the user can easily grasp the overall picture of the original collection.
The present invention focuses on landmark scene images, i.e. images of a certain famous landmark scene. Such images are widespread on the Internet and form a fairly common class of images. To fully understand a landmark scene, a user cannot browse all associated images one by one, so it is necessary to summarize the landmark scene image library. The invention proposes a method for effectively organizing and managing a massive landmark scene image database and, on this basis, extracting iconic images to generate a landmark scene abstract.
On the basis of jointly exploiting 2D and 3D image features, the invention further introduces two key elements: image saliency analysis and quality assessment. Saliency analysis makes the extracted iconic images conform better to human visual perception, while quality assessment ensures that the extracted images are of high quality. Introducing these two elements effectively strengthens the user's acceptance of the iconic images and thus improves the quality of the generated landmark scene abstract.
The iconic images selected as the summary of an image set must be highly representative, summarizing the content of the entire image collection with a small number of images. The invention holds that, in addition to representativeness, iconic images should also possess the two key elements of saliency and high quality:
First, iconic images are extracted to let the user browse a large-scale image set conveniently, so when they are presented to the user, the user's visual characteristics must be considered and the images should agree with the user's perception as much as possible. Image saliency analysis can locate the user's regions of interest in an image; the main targets or objects in a selected iconic image should lie in the salient regions of that image rather than in minor, inconspicuous ones.
Second, an iconic image must itself have high image quality. On the one hand, this helps the user browse better; on the other hand, it provides accurate and reliable image information for subsequent computer processing (such as 3D reconstruction).
Based on the above two key elements, the invention proposes landmark scene abstraction based on image saliency analysis and quality assessment. The method is divided into the following five steps (as shown in Fig. 1 and Fig. 2):
1. Feature extraction
For each image, extract color moments and wavelet texture as global features, and extract SIFT descriptors as local features;
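Color moments are the standard mean, standard deviation, and skewness statistics per color channel. A minimal sketch, assuming images are H x W x 3 numpy arrays (the wavelet-texture and SIFT extraction steps are not shown):

```python
import numpy as np

def color_moments(img):
    """First three color moments (mean, std, skewness) per channel.

    `img` is assumed to be an H x W x 3 array; the 9-dimensional
    result serves as a compact global color feature.
    """
    feats = []
    for c in range(img.shape[2]):
        ch = img[:, :, c].astype(np.float64).ravel()
        mean = ch.mean()
        std = ch.std()
        # signed cube root keeps the skewness moment real-valued
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)
```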
2. Initial clustering based on 2D features
Perform initial clustering according to the 2D features of the image set. This is based on the following consideration: most landmark scene images that are similar in 2D features often also share a similar 3D viewpoint. We can therefore adopt comparatively simple 2D features and perform the initial clustering with k-means in the 2D feature space. To keep the 3D information within each cluster as consistent as possible, only images with sufficiently similar 2D features are grouped into one cluster, so the number of initial clusters obtained is quite large.
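The initial clustering step can be sketched with a plain k-means over the global feature vectors. This is a from-scratch illustration; a real system would use an optimized implementation and tune k to be large, so that only images with very similar 2D features share a cluster:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means over global feature vectors (one row per image)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each image to its nearest cluster center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers; keep the old center if a cluster emptied
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

For well-separated feature groups the assignment converges in a few iterations regardless of which data points seed the centers.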
3. Geometric verification based on 3D structure
To ensure that images in the same cluster are similar not only in 2D features but also in 3D structure, we perform 3D geometric feature matching for each cluster. For efficiency, instead of matching all images in each cluster pairwise, we select from each cluster several representative images closest to the cluster center and match only these pairwise, which greatly reduces computation. The invention adopts saliency-based geometric verification; the concrete method is shown in Fig. 3. Part (1) of Fig. 3: for each representative image, a saliency analysis method (computing the contrast distribution or the information density distribution) yields a saliency value at each position and hence a saliency map. Part (2) of Fig. 3: when two representative images are matched geometrically in 3D, the corresponding saliency maps serve as weighting templates, i.e. the saliency values at the matched points are used as the weights of the matching score, realizing saliency-based geometric verification. Under this matching principle, the matching score of two images depends not only on their similarity but also on the saliency of the corresponding matched points (if a matched point lies in a non-salient region, i.e. a black position in the saliency map, its match is assigned a low weight as a penalty), so matching between images is naturally concentrated in the salient regions, i.e. the regions the human eye attends to.
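The weighting principle of part (2) can be sketched as follows. The product weighting of the two endpoint saliencies is an assumption, since the text only states that matched-point saliency values act as weights on the matching score:

```python
import numpy as np

def saliency_weighted_score(matches, sal_a, sal_b):
    """Saliency-weighted matching score between two images.

    `matches` is a list of ((ya, xa), (yb, xb)) matched keypoint
    coordinates (e.g. from geometrically verified SIFT matches), and
    `sal_a`, `sal_b` are saliency maps in [0, 1]. Each match is
    weighted by the saliency at both endpoints, so matches falling in
    non-salient (dark) regions contribute little to the score.
    """
    score = 0.0
    for (ya, xa), (yb, xb) in matches:
        score += sal_a[ya, xa] * sal_b[yb, xb]
    return score
```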
4. Iconic image selection
After geometric verification, the representative images whose summed saliency-weighted matching scores with all other representative images in the cluster exceed a certain threshold are selected for image quality assessment. The concrete method is shown in Fig. 4: from a prepared training set (containing high-quality and low-quality images judged by professionals, serving as positive and negative samples respectively), first perform feature extraction, extracting features such as edge distribution, color distribution, and number of hues, to obtain an image feature set. An image quality classifier is trained on this feature set. For each selected representative image, extract the same features in the same way, then use the trained classifier to assess the quality of each representative image. The representative image with the best quality assessment result is finally selected as the iconic image of the cluster.
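As a hedged illustration of this train-then-classify procedure, the sketch below uses a nearest-centroid classifier on hand-crafted quality features. The patent does not commit to a particular classifier family, and the feature values in the usage example are hypothetical:

```python
import numpy as np

class NearestCentroidQuality:
    """Minimal stand-in for the image quality classifier.

    Trains one centroid per class (1 = high quality, 0 = low quality)
    over hand-crafted features such as edge distribution, color
    distribution, and hue count; nearest-centroid is used here purely
    for illustration.
    """

    def fit(self, feats, labels):
        feats, labels = np.asarray(feats, float), np.asarray(labels)
        self.centroids = {c: feats[labels == c].mean(axis=0)
                          for c in np.unique(labels)}
        return self

    def predict(self, feat):
        feat = np.asarray(feat, float)
        # the label of the closest class centroid is the quality score
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(feat - self.centroids[c]))
```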
5. Iconic image screening
Because the initial clustering criterion is set rather strictly, the number of clusters obtained is often very large, so the number of iconic images representing the clusters is also very large, and browsing them one by one still wastes time and effort; it is therefore necessary to further simplify the iconic image set. (1) Considering that images not grouped into one cluster at initial clustering may still share the same 3D viewpoint, we apply inter-cluster geometric verification, i.e. we geometrically match the obtained iconic images and group those with identical or similar viewpoints into one class, thereby merging similar clusters. To avoid the high computational complexity of pairwise matching, for each iconic image we first obtain its k nearest neighbors using 2D features, and then perform saliency-based 3D geometric matching between the iconic image and its 2D neighbors (the method is as in step 3). (2) On this basis we obtain an undirected graph in which each node corresponds to an iconic image and the weight of each edge depends on the saliency-weighted matching score of the pair of iconic images it connects. Using graph partitioning, we place similar iconic images in the same subgraph, thereby merging similar viewpoints. (3) For the images in each subgraph, we rank them by jointly considering their matching scores with the other images in the subgraph and their own quality assessment scores; the top-ranked image serves as the iconic image of the subgraph.
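The fusion of sub-step (2) can be illustrated with connected components over a thresholded score graph, a simple stand-in for the unspecified graph-partitioning algorithm:

```python
import numpy as np

def merge_similar_clusters(scores, threshold):
    """Group iconic images whose pairwise saliency-weighted matching
    score exceeds a threshold.

    `scores` is a symmetric matrix of matching scores between iconic
    images. Connected components over the thresholded graph, found via
    union-find, stand in for the graph-partitioning step.
    """
    n = len(scores)
    labels = list(range(n))          # union-find parent array

    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]   # path compression
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if scores[i][j] > threshold:
                labels[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Images sharing a group label would then be treated as one viewpoint class, within which the final ranking of sub-step (3) picks the iconic image.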
The above are only embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person familiar with this technology can readily conceive within the technical scope disclosed by the present invention shall be encompassed within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (4)
1. A method for extracting a landmark scene abstract, characterized in that the method comprises the following steps:
Step 1: For each landmark scene image, extract color moments and wavelet texture as global features of the landmark scene image, and extract scale-invariant feature transform (SIFT) descriptors as local features of the landmark scene image;
Step 2: Perform initial clustering of the landmark scene image library using the two-dimensional global features of the images;
Step 3: From each cluster, select several representative images closest to the cluster center, and use the local features to perform pairwise saliency-based geometric verification on the representative images;
Step 4: After geometric verification, extract an iconic image from each cluster;
Step 5: Using inter-cluster geometric verification, perform geometric matching and screening on the iconic images with respect to the user's regions of interest in the selected landmark scene images, and group iconic images with identical or similar viewpoints into one class, thereby merging similar clusters and extracting the landmark scene abstract.
2. according to the method for the described extraction landmark scene abstract of claim 1, it is characterized in that: described how much verification steps based on conspicuousness are as follows:
Step 21: by the significance analysis method each width of cloth presentation graphics is calculated contrast distribution or information density distribution, obtain the conspicuousness value of each position in the presentation graphics, and then obtain conspicuousness figure;
Step 2 utilizes conspicuousness figure as the weighting template two width of cloth presentation graphicses to be carried out the three-dimensional geometry coupling, utilizes the weight of the pairing conspicuousness value of match point as matching degree, realizes how much checkings based on conspicuousness.
3. according to the method for the described extraction landmark scene abstract of claim 1, it is characterized in that: the step of obtaining significant image is as follows:
Step 31: in every class, calculate the conspicuousness weighted registration value summation of each width of cloth presentation graphics and all other presentation graphicses;
Step 32: choose the highest some width of cloth images of conspicuousness weighted registration value summation and carry out image quality measure;
Step 3 is chosen the significant image of the highest image of image quality measure score value as the place class.
4. according to the method for the described extraction landmark scene abstract of claim 3, it is characterized in that: the step of image quality measure is as follows:
Step 41: from training set, train the picture quality disaggregated model;
Step 42: utilize the picture quality disaggregated model that trains that each presentation graphics to be assessed is carried out the picture quality classification, the picture quality classification results is as the image quality measure score value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN2009102427513A (CN101777059B) | 2009-12-16 | 2009-12-16 | Method for extracting landmark scene abstract
Publications (2)
Publication Number | Publication Date |
---|---|
CN101777059A true CN101777059A (en) | 2010-07-14 |
CN101777059B CN101777059B (en) | 2011-12-07 |
Family
ID=42513522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009102427513A Expired - Fee Related CN101777059B (en) | 2009-12-16 | 2009-12-16 | Method for extracting landmark scene abstract |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101777059B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104615642B (en) * | 2014-12-17 | 2017-09-29 | 吉林大学 | The erroneous matching detection method of the space checking constrained based on local neighborhood |
- 2009-12-16: CN application CN2009102427513A, granted as patent CN101777059B; status: not active (Expired - Fee Related)
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034267A (en) * | 2010-11-30 | 2011-04-27 | 中国科学院自动化研究所 | Three-dimensional reconstruction method of target based on attention |
CN103390165A (en) * | 2012-05-10 | 2013-11-13 | 北京百度网讯科技有限公司 | Picture clustering method and device |
CN103390165B (en) * | 2012-05-10 | 2017-08-22 | 北京百度网讯科技有限公司 | A kind of method and device of picture cluster |
CN103324677A (en) * | 2013-05-24 | 2013-09-25 | 西安交通大学 | Hierarchical fast image global positioning system (GPS) position estimation method |
CN103324677B (en) * | 2013-05-24 | 2017-02-01 | 西安交通大学 | Hierarchical fast image global positioning system (GPS) position estimation method |
CN103995864B (en) * | 2014-05-19 | 2017-10-27 | 深圳先进技术研究院 | A kind of image search method and device |
CN103995864A (en) * | 2014-05-19 | 2014-08-20 | 深圳先进技术研究院 | Image retrieval method and device |
CN104717400A (en) * | 2015-02-03 | 2015-06-17 | 北京理工大学深圳研究院 | Real-time defogging method of monitoring video |
CN104966290A (en) * | 2015-06-12 | 2015-10-07 | 天津大学 | Self-adaptive weight three-dimensional matching method based on SIFT descriptor |
CN104966290B (en) * | 2015-06-12 | 2017-12-08 | 天津大学 | A kind of adaptive weighting solid matching method based on SIFT description |
US10032076B2 (en) | 2015-07-28 | 2018-07-24 | Xiaomi Inc. | Method and device for displaying image |
EP3125158A3 (en) * | 2015-07-28 | 2017-03-08 | Xiaomi Inc. | Method and device for displaying images |
KR20180035869A (en) * | 2015-08-03 | 2018-04-06 | 바이두 온라인 네트웍 테크놀러지 (베이징) 캄파니 리미티드 | Method, device, terminal device and storage medium |
KR102033262B1 (en) | 2015-08-03 | 2019-10-16 | 바이두 온라인 네트웍 테크놀러지 (베이징) 캄파니 리미티드 | Canonical reconstruction method, apparatus, terminal device and storage medium |
WO2017020467A1 (en) * | 2015-08-03 | 2017-02-09 | 百度在线网络技术(北京)有限公司 | Scenario reconstruction method and apparatus, terminal device, and storage medium |
US10467800B2 (en) | 2015-08-03 | 2019-11-05 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for reconstructing scene, terminal device, and storage medium |
CN107133261A (en) * | 2017-03-22 | 2017-09-05 | 新奥特(北京)视频技术有限公司 | Input method and device for landmark information |
CN107133260A (en) * | 2017-03-22 | 2017-09-05 | 新奥特(北京)视频技术有限公司 | Matching and recognition method and device for landmark images |
CN107563366A (en) * | 2017-07-26 | 2018-01-09 | 安徽讯飞爱途旅游电子商务有限公司 | Localization method and device, and electronic equipment |
CN109597907A (en) * | 2017-12-07 | 2019-04-09 | 深圳市商汤科技有限公司 | Dress ornament management method and device, electronic equipment, storage medium |
CN108122231A (en) * | 2018-01-10 | 2018-06-05 | 山东华软金盾软件股份有限公司 | Image quality evaluation method based on ROI Laplacian algorithm under surveillance video |
CN108122231B (en) * | 2018-01-10 | 2021-09-24 | 山东华软金盾软件股份有限公司 | Image quality evaluation method based on ROI Laplacian algorithm under monitoring video |
CN109614998A (en) * | 2018-11-29 | 2019-04-12 | 北京航天自动控制研究所 | Landmark database preparation method based on deep learning |
CN109657083A (en) * | 2018-12-27 | 2019-04-19 | 广州华迅网络科技有限公司 | Method and device for establishing textile picture feature library |
CN109657083B (en) * | 2018-12-27 | 2020-07-14 | 广州华迅网络科技有限公司 | Method and device for establishing textile picture feature library |
CN110516618A (en) * | 2019-08-29 | 2019-11-29 | 苏州大学 | Assembly robot and assembly method and system based on vision and force-position hybrid control |
CN111522986A (en) * | 2020-04-23 | 2020-08-11 | 北京百度网讯科技有限公司 | Image retrieval method, apparatus, device and medium |
CN111522986B (en) * | 2020-04-23 | 2023-10-10 | 北京百度网讯科技有限公司 | Image retrieval method, device, equipment and medium |
US11836186B2 (en) | 2020-04-23 | 2023-12-05 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for retrieving image, device, and medium |
CN113159039A (en) * | 2021-02-09 | 2021-07-23 | 北京市商汤科技开发有限公司 | Image recognition method and device, electronic equipment and storage medium |
CN113139468A (en) * | 2021-04-24 | 2021-07-20 | 西安交通大学 | Video abstract generation method fusing local target features and global features |
CN113139468B (en) * | 2021-04-24 | 2023-04-11 | 西安交通大学 | Video abstract generation method fusing local target features and global features |
WO2024039300A1 (en) * | 2022-08-19 | 2024-02-22 | Grabtaxi Holdings Pte. Ltd. | Location-specific image collection |
Also Published As
Publication number | Publication date |
---|---|
CN101777059B (en) | 2011-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101777059B (en) | Method for extracting landmark scene abstract | |
Tang et al. | Fast and robust dynamic hand gesture recognition via key frames extraction and feature fusion | |
Gao et al. | The deep features and attention mechanism-based method to dish healthcare under social IoT systems: An empirical study with a hand-deep local–global net | |
Yan et al. | Exploiting multi-grain ranking constraints for precisely searching visually-similar vehicles | |
Zhu et al. | Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery | |
Torralba et al. | Labelme: Online image annotation and applications | |
CN101315663B (en) | Natural scene image classification method based on regional latent semantic features | |
CN101162470B (en) | Video advertisement recognition method based on hierarchical matching | |
CN108389251A (en) | Projection-based fully convolutional network 3D model segmentation method fusing multi-view features | |
CN103324677B (en) | Hierarchical fast image global positioning system (GPS) position estimation method | |
CN102129568B (en) | Method for detecting image-based spam email by utilizing improved gauss hybrid model classifier | |
CN105574063A (en) | Image retrieval method based on visual saliency | |
CN101599179A (en) | Method for automatically generating highlights of field-sports scenes | |
Liu et al. | Subtler mixed attention network on fine-grained image classification | |
CN103631932A (en) | Method for detecting repeated video | |
CN109408655A (en) | Freehand sketch retrieval method incorporating dilated convolution and multi-scale perception network | |
Sun et al. | A multi-level convolution pyramid semantic fusion framework for high-resolution remote sensing image scene classification and annotation | |
CN116152494A (en) | Building foot point identification segmentation method based on two-stage 3D point cloud semantic segmentation | |
CN106066993A (en) | Crowd semantic segmentation method and system | |
Kim et al. | Classification and indexing scheme of large-scale image repository for spatio-temporal landmark recognition | |
Luo | Social image aesthetic classification and optimization algorithm in machine learning | |
CN101894267A (en) | Three-dimensional object characteristic view selection method | |
Sheng et al. | Style-based classification of Chinese ink and wash paintings | |
CN104965928A (en) | Chinese character image retrieval method based on shape matching | |
CN107818319A (en) | Method for automatically evaluating facial beauty | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20111207 Termination date: 20211216 |