WO2018014828A1 - Method for identifying the position of a two-dimensional code and system therefor - Google Patents

Method for identifying the position of a two-dimensional code and system therefor

Info

Publication number
WO2018014828A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional code
position information
positioning block
location information
point position
Prior art date
Application number
PCT/CN2017/093370
Other languages
English (en)
French (fr)
Inventor
刘欢
刘文荣
屠寅海
Original Assignee
阿里巴巴集团控股有限公司
刘欢
刘文荣
屠寅海
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited), 刘欢, 刘文荣, 屠寅海
Priority to SG11201900444XA priority Critical patent/SG11201900444XA/en
Priority to KR1020197005252A priority patent/KR102104219B1/ko
Priority to JP2019503329A priority patent/JP6936306B2/ja
Priority to EP17830462.2A priority patent/EP3489856B1/en
Priority to MYPI2019000141A priority patent/MY193939A/en
Publication of WO2018014828A1 publication Critical patent/WO2018014828A1/zh
Priority to US16/252,138 priority patent/US10685201B2/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1452Methods for optical code recognition including a method step for retrieval of the optical code detecting bar code edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes

Definitions

  • the present application relates to the field of computer technology, and in particular, to a method for identifying a location of a two-dimensional code and a system thereof.
  • Augmented Reality (AR) is a technology that "seamlessly" integrates real-world information with virtual-world information: entity information that is otherwise difficult to experience within a given time and space in the real world (visual information, sound, taste, touch, and so on) is simulated by computer and then superimposed onto the real world, where it is perceived by the human senses, thereby achieving a sensory experience that goes beyond reality.
  • Prior-art augmented reality techniques that use a two-dimensional code as a marker (Marker) mainly follow one of two implementation schemes:
  • The first scheme uses the two-dimensional code contour as the feature point set: the system presets one two-dimensional code contour feature point set and then matches that preset set against every other captured two-dimensional code.
  • The main disadvantage of this scheme is that the pattern of a two-dimensional code differs with its code value, including the pattern size and the density of black and white blocks, so the contour of a two-dimensional code does not provide stable features; the tracking accuracy is therefore unstable (high when contours are similar, low when contours differ greatly).
  • The second scheme first decodes the two-dimensional code to obtain its code value string, regenerates a standard two-dimensional code picture identical to the captured one, and then performs feature point extraction on the newly generated picture; the resulting feature point set is used as the system's preset feature point set.
  • The main disadvantage of this scheme is that for every new two-dimensional code the system must repeat the above steps to generate a new preset feature point set, which is time consuming and slows down the processing speed of the entire system.
  • The main purpose of the present application is to provide a method for identifying the position of a two-dimensional code, and a system therefor, to solve the problems of slow recognition speed and low tracking accuracy in prior-art augmented reality schemes that use a two-dimensional code as a marker.
  • A method for identifying the position of a two-dimensional code includes: acquiring a two-dimensional code in an image; performing feature detection according to the primary positioning blocks of the two-dimensional code to identify position information in the two-dimensional code; and determining spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code.
  • the method further comprises: tracking location information in the two-dimensional code.
  • The step of performing feature detection according to the primary positioning blocks of the two-dimensional code includes: determining one or more primary positioning blocks of the two-dimensional code; acquiring, for each primary positioning block, the position information of its center point and of a plurality of corner points; and performing feature detection using the acquired center point and corner point position information as the feature point set.
  • Alternatively, the step of performing feature detection according to the primary positioning blocks of the two-dimensional code includes: determining one or more primary positioning blocks of the two-dimensional code; acquiring, for each primary positioning block, the position information of its center point and of a plurality of black-white pixel boundary edge center points; and performing feature detection using the acquired center point and boundary edge center point position information as the feature point set.
  • The step of determining the spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code includes: acquiring preset standard position information; and matching the standard position information with the position information in the two-dimensional code to obtain the spatial position information.
  • The two-dimensional code is a Quick Response (QR) code.
  • the method further includes: acquiring virtual application data corresponding to the two-dimensional code; and determining a spatial location of the virtual application data according to the spatial location information.
  • A system for identifying the position of a two-dimensional code includes: an acquisition module configured to acquire a two-dimensional code in an image; an identification module configured to perform feature detection according to the primary positioning blocks of the two-dimensional code and identify position information in the two-dimensional code; and a spatial position determining module configured to determine spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code.
  • the system further includes: a tracking module, configured to track location information in the two-dimensional code.
  • The identification module includes: a first determining module configured to determine one or more primary positioning blocks of the two-dimensional code; a first acquiring module configured to acquire, for each primary positioning block, the position information of its center point and of a plurality of corner points; and a first detecting module configured to perform feature detection using the acquired center point and corner point position information as the feature point set.
  • The first acquiring module acquires the position information of 12, 8, or 4 corner points of each primary positioning block.
  • Alternatively, the identification module includes: a second determining module configured to determine one or more primary positioning blocks of the two-dimensional code; a second acquiring module configured to acquire, for each primary positioning block, the position information of its center point and of a plurality of black-white pixel boundary edge center points; and a second detecting module configured to perform feature detection using the acquired center point and boundary edge center point position information as the feature point set.
  • The second acquiring module acquires the position information of 12, 8, or 4 black-white pixel boundary edge center points of each primary positioning block.
  • The spatial position determining module is configured to: acquire preset standard position information, and match the standard position information with the position information in the two-dimensional code to obtain the spatial position information.
  • The two-dimensional code is a Quick Response (QR) code.
  • The system further includes: a virtual application data acquiring module configured to acquire virtual application data corresponding to the two-dimensional code; and a position updating module configured to update the spatial position of the virtual application data according to the spatial position information.
  • In summary, the present application identifies the position of a two-dimensional code by using the two-dimensional code as the marker and performing feature detection at preset positions of its primary positioning blocks.
  • The extracted feature point set of the two-dimensional code has fixed relative positions, is unique and not easily confused, and yields a good tracking effect.
  • FIG. 1 is a flow chart of a method of identifying a location of a two-dimensional code in accordance with one embodiment of the present application
  • FIGS. 2A and 2B are diagrams showing the composition of a main positioning block of a QR code according to an embodiment of the present application
  • 3A and 3B are schematic diagrams of feature point extraction according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of feature point extraction according to another embodiment of the present application.
  • FIG. 5 is a flowchart of a method of identifying a location of a two-dimensional code according to another embodiment of the present application.
  • FIG. 6 is a structural block diagram of a system for identifying a location of a two-dimensional code according to an embodiment of the present application
  • FIG. 7A and 7B are structural block diagrams of an identification module according to an embodiment of the present application.
  • FIG. 8 is a structural block diagram of a system for recognizing a position of a two-dimensional code for augmented reality technology according to another embodiment of the present application.
  • FIG. 1 is a flowchart of a method for identifying a location of a two-dimensional code according to an embodiment of the present application. As shown in FIG. 1, the method includes:
  • Step S102 acquiring a two-dimensional code in the image.
  • A real scene image containing the two-dimensional code is captured by the camera of a terminal device; the terminal device may be a smart phone, a tablet computer, a digital camera, or the like. The input two-dimensional code image is then preprocessed, which specifically includes converting the image to grayscale to obtain a grayscale image, and then binarizing the grayscale image.
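The preprocessing step described above (grayscale conversion followed by binarization) can be sketched as follows. This is an illustrative, dependency-free sketch, not code from the patent; Otsu's method is one common, assumed choice of binarization threshold, and all function names are hypothetical.

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b)) to grayscale
    using the standard luminance weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]


def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance."""
    hist = [0] * 256
    for row in gray:
        for p in row:
            hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]           # pixels at or below t (background class)
        if w0 == 0:
            continue
        w1 = total - w0         # pixels above t (foreground class)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t


def binarize(gray):
    """Black-and-white image: 0 for pixels at or below the Otsu threshold."""
    t = otsu_threshold(gray)
    return [[0 if p <= t else 255 for p in row] for row in gray]
```

On a toy grayscale "image" with one dark and one bright cluster, `binarize` maps the dark cluster to 0 and the bright cluster to 255, which is the black/white separation the later finder-pattern detection relies on.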
  • the two-dimensional code may be a QR code.
  • Step S104 Perform feature detection according to the primary positioning block of the two-dimensional code, and identify location information in the two-dimensional code.
  • In the present application, the position information in the two-dimensional code includes position information such as the center point positions, corner point positions, and black-white pixel boundary edge center point positions of the primary positioning blocks of the two-dimensional code.
  • The position of the two-dimensional code on the plane can be determined from the above position information; that is, spatial position information such as the direction, rotation angle, and inclination angle of the two-dimensional code is determined.
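As one illustration of recovering the in-plane rotation angle mentioned above from the three primary positioning-block centers: the top-left block is the one that sees the other two at roughly a right angle, and the vector from it to the midpoint of the other two lies on the code's diagonal. The patent does not spell out this computation; the sketch below is an assumed approach with hypothetical names, using a y-up coordinate convention (in y-down image coordinates the sign of the angle flips).

```python
import math


def corner_block(centers):
    """Index of the center whose vectors to the other two centers are most
    nearly perpendicular: the top-left primary positioning block."""
    best_i, best_dot = 0, float("inf")
    for i in range(3):
        c = centers[i]
        a = centers[(i + 1) % 3]
        b = centers[(i + 2) % 3]
        va = (a[0] - c[0], a[1] - c[1])
        vb = (b[0] - c[0], b[1] - c[1])
        dot = abs(va[0] * vb[0] + va[1] * vb[1])
        if dot < best_dot:
            best_dot, best_i = dot, i
    return best_i


def rotation_deg(centers):
    """In-plane rotation of the code: the vector from the top-left block to
    the midpoint of the other two lies on the diagonal (45 deg when upright)."""
    i = corner_block(centers)
    c = centers[i]
    a = centers[(i + 1) % 3]
    b = centers[(i + 2) % 3]
    mx = (a[0] + b[0]) / 2.0
    my = (a[1] + b[1]) / 2.0
    return math.degrees(math.atan2(my - c[1], mx - c[0])) - 45.0
```

For an upright code with centers at (0, 0), (10, 0), and (0, 10) the rotation is 0 degrees; rotating all three centers by 90 degrees yields 90.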
  • FIG. 2A is a structural diagram of a main positioning block of a QR code.
  • The positioning patterns of a QR code are of two kinds, primary positioning blocks and auxiliary positioning blocks: the primary positioning blocks are the three large square-ring ("回"-shaped) areas at the upper left corner 201, the upper right corner 202, and the lower left corner 203, while the auxiliary positioning block is the small black-bordered box in the middle.
  • The primary positioning block is characterized in that the lengths of the alternating black and white pixel runs across it are in the ratio 1:1:3:1:1, which allows feature extraction and positioning to be performed on the two-dimensional code. Since the primary positioning blocks are common to all two-dimensional codes, and their shape and position in the pattern are fixed, they can serve as a universal identifier of the two-dimensional code pattern; that is, the primary positioning blocks can be used as a common feature point set of two-dimensional code patterns.
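The 1:1:3:1:1 run-length property described above is what makes a primary positioning block detectable on a single scanline. A minimal, assumed sketch (the patent does not give this code; the 50% tolerance and all names are illustrative):

```python
def run_lengths(row):
    """Collapse a binary scanline (0 = black, 255 = white) into
    (value, length) runs."""
    runs = []
    for p in row:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)
        else:
            runs.append((p, 1))
    return runs


def is_finder_candidate(lengths, tolerance=0.5):
    """True if five consecutive run lengths fit the 1:1:3:1:1 ratio."""
    if len(lengths) != 5:
        return False
    module = sum(lengths) / 7.0  # the pattern spans 7 modules
    expected = [1, 1, 3, 1, 1]
    return all(abs(l - e * module) <= e * module * tolerance
               for l, e in zip(lengths, expected))


def find_candidates(row):
    """Start indices of 1:1:3:1:1 patterns that begin with a black run."""
    runs = run_lengths(row)
    hits, pos = [], 0
    for i in range(len(runs) - 4):
        window = runs[i:i + 5]
        if window[0][0] == 0 and is_finder_candidate([l for _, l in window]):
            hits.append(pos)
        pos += runs[i][1]
    return hits
```

A full detector would confirm each candidate along the perpendicular direction as well; this sketch shows only the one-dimensional ratio test.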
  • the feature extraction process of the primary positioning block is described in detail below.
  • a center point and a corner point of the main positioning block may be employed as a feature point set.
  • A corner point is a point at a right angle of the primary positioning block; it is also a point where the pixel values in the image change sharply.
  • Taking the center point 301 of a primary positioning block as the starting point and detecting the corner points around it, there is a first group of 4 corner points (311, 312, 313, and 314) closest to the center point 301, a second group of 4 corner points (321, 322, 323, and 324) outside the first group, and a third group of 4 corner points (331, 332, 333, and 334) outside the second group.
  • The center point 301 together with any one group (4 corner points), any two groups (8 corner points), or all three groups (12 corner points) may be selected as the feature point set; that is, the feature point set of one primary positioning block can contain 5, 9, or 13 feature points.
  • A two-dimensional code has three primary positioning blocks. If each primary positioning block contributes 5 feature points, there are 15 feature points in total; if each contributes 9, there are 27; if each contributes 13, there are 39. It should be noted that the same number of feature points is selected for each primary positioning block here; in other embodiments, a different number may be selected for each primary positioning block, and details are not repeated.
  • The number of feature points can be chosen according to the actual situation: the more feature points selected, the more accurate the calculation result but the larger the amount of calculation; the fewer selected, the smaller the amount of calculation but the more the result may deviate.
  • FIG. 3B is a schematic diagram of selecting the largest number of feature point sets. Referring to FIG. 3B, 39 feature points are obtained in the entire two-dimensional code image, and the relative positions of the 39 feature points are fixed, that is, the 39 feature points can be uniquely determined.
  • a center point of the main positioning block and a black and white pixel boundary edge center point may also be adopted as the feature point set.
  • The following takes one primary positioning block as an example; 401 is its center point.
  • Taking the center point 401 as the starting point, the black-white pixel boundary edge center points around it are detected, beginning with the group closest to the center point 401.
  • One primary positioning block can thus yield 4, 8, or 12 black-white pixel boundary edge center points, which together with the center point give a feature point set of 5, 9, or 13 feature points.
  • the three main positioning blocks of a two-dimensional code can obtain 15, 27 or 39 feature points.
  • the two-dimensional code is used as the Marker, and the position of the two-dimensional code can be effectively recognized by performing feature detection on the preset position of the main positioning block of the two-dimensional code.
  • Step S106 determining spatial position information of the two-dimensional code in the image according to position information in the two-dimensional code.
  • Specifically, preset standard position information is acquired, and the standard position information is matched with the position information in the two-dimensional code to obtain the spatial position information of the two-dimensional code in the image (including spatial position parameters and rotation parameters).
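One way to realize the matching step above is to estimate the planar homography mapping the preset standard positions onto the detected positions; spatial position and rotation parameters can then be decomposed from it. The patent does not name a specific estimator; below is a plain direct linear transform (DLT) sketch with hypothetical names, while a production system would use more points and a robust solver.

```python
import numpy as np


def homography_from_points(std_pts, img_pts):
    """Solve the 3x3 homography H (with h33 = 1) mapping standard points
    to image points, from at least 4 correspondences."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(std_pts, img_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        rhs.append(v)
    h = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float),
                        rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)


def project(H, pt):
    """Apply a homography to a 2D point (homogeneous divide included)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

For a standard unit square detected at twice the scale and shifted by (2, 2), the recovered H is a pure scale-and-translate matrix, and interior points project consistently.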
  • the spatial position of the two-dimensional code can be continuously tracked.
  • The obtained spatial position of the two-dimensional code can be applied to augmented reality technology, which is described in detail below with reference to FIG. 5.
  • FIG. 5 is a flowchart of a two-dimensional code identification method according to another embodiment of the present application. As shown in FIG. 5, the method includes:
  • Step S502 starting a camera device of the terminal device (for example, including a smart phone, a tablet computer, a digital camera, etc.), and capturing a real scene image containing the two-dimensional code.
  • Step S504: scanning the two-dimensional code to extract its feature point set; for the specific extraction step, refer to the description of FIG. 3 or FIG. 4. If feature point set extraction fails, the process returns to step S502.
  • Step S506: acquiring internal parameters of the camera device, such as the focal length and the image center.
  • Step S508: point cloud registration; if successful, step S510 is performed, otherwise the process returns to step S502.
  • Specifically, preset standard position information is acquired; for example, the position at which the camera captures a front view of the two-dimensional code may be taken as the standard position.
  • The first point cloud data (such as a 3D point cloud) of the feature point set at the standard position can be obtained by combining the standard position information with the pre-acquired internal parameters of the camera device (for example, the focal length and the image center).
  • Similarly, the second point cloud data (such as a 3D point cloud) of the feature point set of the captured two-dimensional code can be obtained.
  • Point cloud registration is performed on the first point cloud data and the second point cloud data, and the positional relationship between them is calculated, thereby obtaining the spatial position information of the two-dimensional code in the image (that is, the spatial position of the camera).
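The registration step above can be sketched with the Kabsch algorithm, which recovers the rigid rotation and translation aligning the first point cloud with the second in the least-squares sense. The patent does not name a specific registration algorithm; this is an assumed choice, with illustrative names.

```python
import numpy as np


def register_rigid(src, dst):
    """Return (R, t) such that dst ~= src @ R.T + t for two (N, 3) clouds."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered clouds.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Applying a known 90-degree rotation and translation to a sample cloud and registering it back recovers that exact transform, which is the "positional relationship" the text refers to.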
  • Step S510: updating the spatial position of the virtual object using the spatial position of the camera, which completes the full augmented reality process.
  • the virtual object includes any one or a combination of text, picture, video, three-dimensional model, animation, sound, and geographic location information.
  • FIG. 6 is a structural block diagram of a system for identifying the position of a two-dimensional code according to an embodiment of the present application. As shown in FIG. 6, the system includes:
  • An obtaining module 610 configured to acquire a two-dimensional code in an image
  • the identification module 620 is configured to perform feature detection according to the primary positioning block of the two-dimensional code, and identify location information in the two-dimensional code;
  • the spatial location determining module 630 is configured to determine spatial location information of the two-dimensional code in the image according to location information in the two-dimensional code. Further, the spatial location determining module 630 is configured to: acquire preset standard location information, and match the standard location information with location information in the two-dimensional code to obtain the spatial location information.
  • The tracking module 640 is configured to track the position information in the two-dimensional code, and to determine the spatial position information of the two-dimensional code in the image according to the newly tracked position information.
  • the identification module 620 further includes:
  • a first determining module 621 configured to determine one or more primary positioning blocks of the two-dimensional code
  • The first obtaining module 622 is configured to acquire, for each primary positioning block, the position information of its center point and of a plurality of corner points; the first obtaining module acquires the position information of 12, 8, or 4 corner points of each primary positioning block.
  • the first detecting module 623 is configured to perform feature detection by using the center point position information and the corner point position information of the acquired main positioning block as feature point sets.
  • the identification module 620 further includes:
  • a second determining module 626 configured to determine one or more primary positioning blocks of the two-dimensional code
  • The second obtaining module 627 is configured to acquire, for each primary positioning block, the position information of its center point and of a plurality of black-white pixel boundary edge center points; the second obtaining module acquires the position information of 12, 8, or 4 black-white pixel boundary edge center points of each primary positioning block.
  • the second detecting module 628 is configured to perform feature detection on the acquired center point position information of the main positioning block and the black and white pixel boundary edge center point position information as a feature point set.
  • FIG. 8 is a structural block diagram of a system for identifying the position of a two-dimensional code for augmented reality technology according to an embodiment of the present application. As shown in FIG. 8, the system includes:
  • the obtaining module 810 is configured to acquire a two-dimensional code in the image
  • the identification module 820 is configured to perform feature detection according to the primary positioning block of the two-dimensional code, and identify location information in the two-dimensional code;
  • the spatial location determining module 830 is configured to calculate spatial location information of the two-dimensional code in the image according to location information in the two-dimensional code.
  • a virtual application data obtaining module 840 configured to acquire virtual application data corresponding to the two-dimensional code
  • The virtual application data position update module 850 is configured to update the spatial position of the virtual application data according to the spatial position information.
  • The operation steps of the method of the present application correspond to the structural features of the system and can be cross-referenced; they are not repeated one by one here.
  • the present application recognizes the position of the two-dimensional code by using the two-dimensional code as the Marker and performing feature detection on the preset position of the main positioning block of the two-dimensional code.
  • the feature point set of the extracted two-dimensional code has a fixed relative position, is unique, is not easy to be confused, and has a good tracking effect.
  • The technical solution provided by the present application applies to all QR codes without regenerating the preset feature point set each time.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • The present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media include both persistent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridges, magnetic tape storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer readable media do not include transitory media such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Toxicology (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Navigation (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

A method for identifying the position of a two-dimensional code, and a system therefor. The method comprises: acquiring a two-dimensional code in an image (S102); performing feature detection according to the primary positioning blocks of the two-dimensional code to identify position information in the two-dimensional code (S104); and determining spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code (S106). An augmented reality scheme that uses this method with a two-dimensional code as a marker achieves a good tracking effect.

Description

Method for identifying the position of a two-dimensional code and system therefor

Technical Field

The present application relates to the field of computer technology, and in particular to a method for identifying the position of a two-dimensional code and a system therefor.

Background

Augmented Reality (AR) is a technology that "seamlessly" integrates real-world information with virtual-world information: entity information that is otherwise difficult to experience within a given time and space in the real world (visual information, sound, taste, touch, and so on) is simulated by computer and then superimposed onto the real world, where it is perceived by the human senses, thereby achieving a sensory experience that goes beyond reality.

Prior-art augmented reality techniques that use a two-dimensional code as a marker (Marker) mainly follow one of two implementation schemes:

(1) Two-dimensional code contour method.

This scheme uses the contour of a two-dimensional code as the feature point set: the system presets one two-dimensional code contour feature point set and then matches that preset set against every other captured two-dimensional code. Its main disadvantage is that the pattern of a two-dimensional code differs with its code value, including the pattern size and the density of black and white blocks, so the contour does not provide stable features; the tracking accuracy is therefore unstable (high when contours are similar, low when contours differ greatly).

(2) Regeneration method.

This scheme first decodes the two-dimensional code to obtain its code value string, regenerates a standard two-dimensional code picture identical to the captured one, and then performs feature point extraction on the newly generated picture; the resulting feature point set is used as the system's preset feature point set. Its main disadvantage is that for every new two-dimensional code the system must repeat the above steps to generate a new preset feature point set, a process that is relatively time consuming and slows down the processing speed of the entire system.

In summary, it can be seen that prior-art augmented reality techniques using a two-dimensional code as the marker suffer from slow recognition speed and low tracking accuracy.

Summary of the Invention

The main purpose of the present application is to provide a method for identifying the position of a two-dimensional code, and a system therefor, to solve the problems of slow recognition speed and low tracking accuracy in prior-art augmented reality schemes that use a two-dimensional code as a marker.
To solve the above problems, an embodiment of the present application provides a method for identifying the position of a two-dimensional code, including: acquiring a two-dimensional code in an image; performing feature detection according to the primary positioning blocks of the two-dimensional code to identify position information in the two-dimensional code; and determining spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code.

Wherein, the method further includes: tracking the position information in the two-dimensional code.

Wherein, the step of performing feature detection according to the primary positioning blocks of the two-dimensional code includes: determining one or more primary positioning blocks of the two-dimensional code; acquiring, for each primary positioning block, the position information of its center point and of a plurality of corner points; and performing feature detection using the acquired center point and corner point position information as the feature point set.

Wherein, the position information of 12, 8, or 4 corner points of each primary positioning block is acquired.

Wherein, the step of performing feature detection according to the primary positioning blocks of the two-dimensional code includes: determining one or more primary positioning blocks of the two-dimensional code; acquiring, for each primary positioning block, the position information of its center point and of a plurality of black-white pixel boundary edge center points; and performing feature detection using the acquired center point and boundary edge center point position information as the feature point set.

Wherein, the position information of 12, 8, or 4 black-white boundary edge center points of each primary positioning block is acquired.

Wherein, the step of determining the spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code includes: acquiring preset standard position information; and matching the standard position information with the position information in the two-dimensional code to obtain the spatial position information.

Wherein, the two-dimensional code is a Quick Response (QR) code.

Wherein, the method further includes: acquiring virtual application data corresponding to the two-dimensional code; and determining the spatial position of the virtual application data according to the spatial position information.

An embodiment of the present application further provides a system for identifying the position of a two-dimensional code, including: an acquisition module configured to acquire a two-dimensional code in an image; an identification module configured to perform feature detection according to the primary positioning blocks of the two-dimensional code and identify position information in the two-dimensional code; and a spatial position determining module configured to determine spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code.

Wherein, the system further includes: a tracking module configured to track the position information in the two-dimensional code.

Wherein, the identification module includes: a first determining module configured to determine one or more primary positioning blocks of the two-dimensional code; a first acquiring module configured to acquire, for each primary positioning block, the position information of its center point and of a plurality of corner points; and a first detecting module configured to perform feature detection using the acquired center point and corner point position information as the feature point set.

Wherein, the first acquiring module acquires the position information of 12, 8, or 4 corner points of each primary positioning block.

Wherein, the identification module includes: a second determining module configured to determine one or more primary positioning blocks of the two-dimensional code; a second acquiring module configured to acquire, for each primary positioning block, the position information of its center point and of a plurality of black-white pixel boundary edge center points; and a second detecting module configured to perform feature detection using the acquired center point and boundary edge center point position information as the feature point set.

Wherein, the second acquiring module acquires the position information of 12, 8, or 4 black-white pixel boundary edge center points of each primary positioning block.

Wherein, the spatial position determining module is configured to: acquire preset standard position information, and match the standard position information with the position information in the two-dimensional code to obtain the spatial position information.

Wherein, the two-dimensional code is a Quick Response (QR) code.

Wherein, the system further includes: a virtual application data acquiring module configured to acquire virtual application data corresponding to the two-dimensional code; and a position updating module configured to update the spatial position of the virtual application data according to the spatial position information.

In summary, the present application identifies the position of a two-dimensional code by using the two-dimensional code as the marker and performing feature detection at preset positions of its primary positioning blocks. The extracted feature point set of the two-dimensional code has fixed relative positions, is unique and not easily confused, and yields a good tracking effect.
Brief Description of the Drawings
The drawings described herein are provided for further understanding of the present application and constitute a part of it. The illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an undue limitation on it. In the drawings:
FIG. 1 is a flowchart of a method for recognizing the position of a two-dimensional code according to an embodiment of the present application;
FIGS. 2A and 2B are structural diagrams of the main positioning blocks of a QR code according to an embodiment of the present application;
FIGS. 3A and 3B are schematic diagrams of feature point extraction according to an embodiment of the present application;
FIG. 4 is a schematic diagram of feature point extraction according to another embodiment of the present application;
FIG. 5 is a flowchart of a method for recognizing the position of a two-dimensional code according to another embodiment of the present application;
FIG. 6 is a structural block diagram of a system for recognizing the position of a two-dimensional code according to an embodiment of the present application;
FIGS. 7A and 7B are structural block diagrams of the recognition module according to embodiments of the present application;
FIG. 8 is a structural block diagram of a system for recognizing the position of a two-dimensional code for augmented reality according to another embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the technical solutions of the present application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
FIG. 1 is a flowchart of a method for recognizing the position of a two-dimensional code according to an embodiment of the present application. As shown in FIG. 1, the method comprises:
Step S102: acquire a two-dimensional code in an image.
An image of a real scene containing a two-dimensional code is captured by the camera of a terminal device, which may be a smartphone, a tablet computer, a digital camera or another similar device. The input two-dimensional code image is then preprocessed; specifically, the image is converted to a grayscale image, and the grayscale image is binarized. In embodiments of the present application, the two-dimensional code may be a QR code.
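The preprocessing above (grayscale conversion followed by binarization) can be sketched in pure Python as follows. This is a minimal illustration with an assumed fixed threshold; practical decoders typically use adaptive thresholding instead, and all function names here are illustrative rather than from the patent.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale
    using the common luminance weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Binarize a grayscale image: 1 for dark (code) pixels, 0 for light."""
    return [[1 if px < threshold else 0 for px in row] for row in gray_image]

# Example: a 1x2 image with one dark and one light pixel.
img = [[(10, 10, 10), (250, 250, 250)]]
binary = binarize(to_grayscale(img))
```

The fixed threshold of 128 works for evenly lit captures; under uneven lighting a local (adaptive) threshold is the usual choice.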
Step S104: perform feature detection according to the main positioning blocks of the two-dimensional code to recognize position information in the two-dimensional code.
In the present application, the position information in the two-dimensional code includes position information such as the center points, corner points and black-white pixel boundary edge midpoints of the main positioning blocks of the two-dimensional code. From this position information, the position of the two-dimensional code on the plane can be determined, i.e., its spatial position information such as orientation, rotation angle and tilt angle.
Referring to FIG. 2A, a structural diagram of the main positioning blocks of a QR code, the positioning patterns of a QR code fall into two classes: main positioning blocks and auxiliary positioning blocks. The main positioning blocks are the three large square-ring regions at the upper left 201, upper right 202 and lower left 203, while the auxiliary positioning blocks are the small black-bordered squares in the middle. There are exactly three main positioning blocks, whereas the number of auxiliary positioning blocks increases with the density of the two-dimensional code. Referring to FIG. 2B, a main positioning block is characterized by segments of black and white pixels with a length ratio of 1:1:3:1:1, and this property can be used to extract features from the two-dimensional code and locate its position. Because the main positioning blocks are common to all two-dimensional codes and their pattern shape and position are fixed, they can serve as a universal identifier of the two-dimensional code pattern, that is, as a universal feature point set. The feature extraction process for the main positioning blocks is described in detail below.
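The 1:1:3:1:1 run-length test described above can be sketched as follows. The helper names and the 50% tolerance are assumptions for illustration; real detectors scan many rows and columns and cross-check candidates.

```python
def run_lengths(scanline):
    """Collapse a binary scanline (1 = black, 0 = white) into
    [value, run_length] pairs."""
    runs = []
    for px in scanline:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return runs

def is_finder_candidate(runs, tolerance=0.5):
    """Check five consecutive runs (black-white-black-white-black) against
    the 1:1:3:1:1 ratio that characterizes a main positioning block."""
    if len(runs) != 5 or [v for v, _ in runs] != [1, 0, 1, 0, 1]:
        return False
    unit = sum(n for _, n in runs) / 7.0  # 1+1+3+1+1 = 7 units total
    expected = [1, 1, 3, 1, 1]
    return all(abs(n - e * unit) <= e * unit * tolerance
               for (_, n), e in zip(runs, expected))

# A scanline crossing the middle of a finder pattern (module size 2 pixels):
line = [1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1]
```

Because the test uses ratios rather than absolute pixel counts, it is insensitive to the scale of the code in the image.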
In one embodiment of the present application, the center point and corner points of a main positioning block may be used as the feature point set. As shown in FIG. 3A, the corner points are the points at the right angles of the main positioning block, and they are also points where the pixel values change sharply. Starting from the center point 301 of the main positioning block, corner points are detected around it. As can be seen, closest to the center point 301 is a first group of 4 corner points (311, 312, 313 and 314); outside the first group is a second group of 4 corner points (321, 322, 323 and 324); and outside the second group is a third group of 4 corner points (331, 332, 333 and 334). In the present application, the center point 301 together with any one of the three groups (4 corner points), any two groups (8 corner points), or all three groups (12 corner points) may be selected as the feature point set; that is, the feature point set of one main positioning block may contain 5, 9 or 13 feature points. By analogy, a two-dimensional code has 3 main positioning blocks in total: if 5 feature points are selected for each, there are 15 in total; if 9 for each, 27 in total; and if 13 for each, 39 in total. It should be noted that the above selects the same number of feature points for each main positioning block; in other embodiments, different numbers may be selected for different main positioning blocks, which is not repeated here.
In a specific application, the number of feature points can be chosen according to the actual situation. The more feature points are selected, the more accurate the result but the greater the computational load; the fewer are selected, the smaller the computational load but the more likely the result is biased. FIG. 3B is a schematic diagram of the largest selection: referring to FIG. 3B, 39 feature points can be obtained in the whole two-dimensional code image, and their relative positions are fixed, i.e., each of the 39 feature points can be uniquely determined.
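Under the standard QR finder-pattern geometry (7 modules wide: a 3×3 black square, a 1-module white ring and a 1-module black border), the three groups of corner points lie at ±1.5, ±2.5 and ±3.5 modules from the center. A sketch of generating the 5-, 9- or 13-point feature set for one ideal, axis-aligned main positioning block (the function name and ring-to-group mapping are assumptions for illustration):

```python
def finder_corner_points(center, module_size, groups=3):
    """Generate the corner-point feature set of one main positioning block.

    The finder pattern is 7 modules wide: a 3x3 black square (corners at
    +/-1.5 modules from the center), a white ring (corners at +/-2.5) and a
    black border (corners at +/-3.5).  `groups` selects 1, 2 or 3 rings,
    giving 5, 9 or 13 feature points including the center.
    """
    cx, cy = center
    points = [(cx, cy)]
    for half in [1.5, 2.5, 3.5][:groups]:
        d = half * module_size
        points += [(cx - d, cy - d), (cx + d, cy - d),
                   (cx + d, cy + d), (cx - d, cy + d)]
    return points

# One finder pattern centered at (35, 35) with 10-pixel modules:
pts = finder_corner_points((35, 35), 10)
```

Applying this to all three main positioning blocks yields the 15-, 27- or 39-point sets described above.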
In another embodiment of the present application, the center point and the black-white pixel boundary edge midpoints of a main positioning block may be used as the feature point set. Taking one main positioning block as an example, referring to FIG. 4, 401 is the center point of the main positioning block. Starting from the center point 401, black-white pixel boundary edge midpoints are detected around it: closest to the center point 401 is a first group of 4 edge midpoints (411, 412, 413 and 414); outside the first group is a second group of 4 (not labeled); and outside the second group is a third group of 4 (not labeled). Similarly to the corner-point method, 4, 8 or 12 black-white pixel boundary edge midpoints can be obtained for one main positioning block; together with the center point, the feature point set contains 5, 9 or 13 feature points, and the 3 main positioning blocks of a two-dimensional code yield 15, 27 or 39 feature points.
In the present application, the two-dimensional code serves as the marker, and feature detection at preset positions of its main positioning blocks effectively recognizes the position of the two-dimensional code.
Step S106: determine the spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code.
Specifically, preset standard position information is acquired and matched with the position information in the two-dimensional code to obtain the spatial position information of the two-dimensional code in the image (including spatial position parameters and rotation parameters). In practice, by continuously capturing real-scene images containing the two-dimensional code, its spatial position can be tracked continuously. The obtained spatial position of the two-dimensional code can be applied to augmented reality, as described in detail below with reference to FIG. 5.
FIG. 5 is a flowchart of a two-dimensional code recognition method according to another embodiment of the present application. As shown in FIG. 5, the method comprises:
Step S502: start the camera of a terminal device (e.g., a smartphone, a tablet computer, a digital camera, etc.) and capture a real-scene image containing a two-dimensional code.
Step S504: scan the two-dimensional code and extract its feature point set; for the specific extraction steps, refer to the description of FIG. 3 or FIG. 4. If extraction fails, return to step S502.
Step S506: acquire the intrinsic parameters of the camera, such as the focal length and image center.
Step S508: perform point cloud registration; if successful, proceed to step S510, otherwise return to step S502.
Specifically, preset standard position information is acquired; for example, the position of a front view of the two-dimensional code captured by the camera can be used as the standard position. Combining the standard position information with the previously acquired intrinsic parameters of the camera (e.g., focal length and image center) yields first point cloud data (e.g., a 3D point cloud) for the feature point set at the standard position. Likewise, combining the extracted feature point set of the two-dimensional code with the camera intrinsics yields second point cloud data (e.g., a 3D point cloud) for the feature point set of the two-dimensional code. Point cloud registration is then performed on the first and second point cloud data to compute the positional relationship between them, thereby obtaining the spatial position information of the two-dimensional code in the image (i.e., the spatial position of the camera).
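The registration in step S508 operates on 3D point clouds derived from the camera intrinsics. As a simplified planar stand-in, the sketch below fits a 2D similarity transform (scale, rotation, translation) between a preset standard point set and an observed one using complex-number least squares; it only illustrates the matching idea, not the patent's actual 3D computation, and all names are illustrative.

```python
import cmath

def fit_similarity(standard_pts, observed_pts):
    """Least-squares 2D similarity transform (scale, rotation, translation)
    mapping standard_pts onto observed_pts.  Points are (x, y) tuples.

    The recovered rotation and translation play the role of the spatial
    position and rotation parameters obtained by registration.
    """
    a = [complex(x, y) for x, y in standard_pts]
    b = [complex(x, y) for x, y in observed_pts]
    ca = sum(a) / len(a)  # centroid of the standard points
    cb = sum(b) / len(b)  # centroid of the observed points
    a0 = [p - ca for p in a]
    b0 = [p - cb for p in b]
    # Optimal scale+rotation as a single complex coefficient s*e^{i*theta}.
    m = sum(q * p.conjugate() for p, q in zip(a0, b0)) / \
        sum(abs(p) ** 2 for p in a0)
    scale, angle = abs(m), cmath.phase(m)
    translation = cb - m * ca
    return scale, angle, translation

# Observed points are the standard square scaled x2 and rotated 90 degrees:
std = [(1, 0), (0, 1), (-1, 0), (0, -1)]
obs = [(0, 2), (-2, 0), (0, -2), (2, 0)]
scale, angle, t = fit_similarity(std, obs)
```

In the full pipeline the same correspondence idea is solved in 3D with the camera intrinsics, which is why the result directly gives the camera's spatial position.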
Step S510: update the spatial position of the virtual object using the spatial position of the camera, thereby completing the full augmented reality pipeline. The virtual object includes any one or a combination of text, pictures, video, three-dimensional models, animation, sound, and geographic location information.
FIG. 6 is a structural block diagram of a system for recognizing the position of a two-dimensional code according to an embodiment of the present application. As shown in FIG. 6, it comprises:
an acquisition module 610 configured to acquire a two-dimensional code in an image;
a recognition module 620 configured to perform feature detection according to the main positioning blocks of the two-dimensional code to recognize position information in the two-dimensional code;
a spatial position determination module 630 configured to determine the spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code, and further configured to acquire preset standard position information and match it with the position information in the two-dimensional code to obtain the spatial position information; and
a tracking module 640 configured to track the position information in the two-dimensional code and determine the spatial position information of the two-dimensional code in the image according to the new position information in the two-dimensional code.
In one embodiment of the present application, referring to FIG. 7A, the recognition module 620 further comprises:
a first determination module 621 configured to determine one or more main positioning blocks of the two-dimensional code;
a first acquisition module 622 configured to acquire, for each main positioning block, position information of a center point and position information of a plurality of corner points, the first acquisition module acquiring position information of 12, 8 or 4 corner points for each main positioning block; and
a first detection module 623 configured to perform feature detection using the acquired center point position information and corner point position information of the main positioning blocks as a feature point set.
In another embodiment of the present application, referring to FIG. 7B, the recognition module 620 further comprises:
a second determination module 626 configured to determine one or more main positioning blocks of the two-dimensional code;
a second acquisition module 627 configured to acquire, for each main positioning block, position information of a center point and position information of a plurality of black-white pixel boundary edge midpoints, the second acquisition module acquiring position information of 12, 8 or 4 black-white pixel boundary edge midpoints for each main positioning block; and
a second detection module 628 configured to perform feature detection using the acquired center point position information and black-white pixel boundary edge midpoint position information of the main positioning blocks as a feature point set.
Referring to FIG. 8, a structural block diagram of a system for recognizing the position of a two-dimensional code for augmented reality according to an embodiment of the present application, the system comprises:
an acquisition module 810 configured to acquire a two-dimensional code in an image;
a recognition module 820 configured to perform feature detection according to the main positioning blocks of the two-dimensional code to recognize position information in the two-dimensional code;
a spatial position determination module 830 configured to calculate the spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code;
a virtual application data acquisition module 840 configured to acquire virtual application data corresponding to the two-dimensional code; and
a virtual application data position update module 850 configured to determine the spatial position of the virtual application data according to the spatial position information.
The operational steps of the method of the present application correspond to the structural features of the system; they may be cross-referenced and are not repeated one by one.
In summary, the present application uses a two-dimensional code as a marker and recognizes its position by performing feature detection at preset positions of the main positioning blocks of the two-dimensional code. The extracted feature point set of the two-dimensional code has fixed relative positions and good uniqueness, is not easily confused, and yields a good tracking effect. The technical solution provided by the present application applies to all QR codes, with no need to regenerate a preset feature point set each time.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
The memory may include non-persistent storage in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device comprising the element.
The above are merely embodiments of the present application and are not intended to limit it. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (18)

  1. A method for recognizing the position of a two-dimensional code, comprising:
    acquiring a two-dimensional code in an image;
    performing feature detection according to main positioning blocks of the two-dimensional code to recognize position information in the two-dimensional code; and
    determining spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code.
  2. The method according to claim 1, further comprising:
    tracking the position information in the two-dimensional code.
  3. The method according to claim 1, wherein the step of performing feature detection according to the main positioning blocks of the two-dimensional code comprises:
    determining one or more main positioning blocks of the two-dimensional code;
    acquiring, for each main positioning block, position information of a center point and position information of a plurality of corner points; and
    performing feature detection using the acquired center point position information and corner point position information of the main positioning blocks as a feature point set.
  4. The method according to claim 3, wherein position information of 12, 8 or 4 corner points is acquired for each main positioning block.
  5. The method according to claim 1, wherein the step of performing feature detection according to the main positioning blocks of the two-dimensional code comprises:
    determining one or more main positioning blocks of the two-dimensional code;
    acquiring, for each main positioning block, position information of a center point and position information of a plurality of black-white pixel boundary edge midpoints; and
    performing feature detection using the acquired center point position information and black-white pixel boundary edge midpoint position information of the main positioning blocks as a feature point set.
  6. The method according to claim 5, wherein position information of 12, 8 or 4 black-white boundary edge midpoints is acquired for each main positioning block.
  7. The method according to claim 1, wherein the step of determining the spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code comprises:
    acquiring preset standard position information; and
    matching the standard position information with the position information in the two-dimensional code to obtain the spatial position information.
  8. The method according to claim 1, wherein the two-dimensional code is a Quick Response (QR) code.
  9. The method according to any one of claims 1 to 8, further comprising:
    acquiring virtual application data corresponding to the two-dimensional code; and
    determining a spatial position of the virtual application data according to the spatial position information.
  10. A system for recognizing the position of a two-dimensional code, comprising:
    an acquisition module configured to acquire a two-dimensional code in an image;
    a recognition module configured to perform feature detection according to main positioning blocks of the two-dimensional code to recognize position information in the two-dimensional code; and
    a spatial position determination module configured to determine spatial position information of the two-dimensional code in the image according to the position information in the two-dimensional code.
  11. The system according to claim 10, further comprising:
    a tracking module configured to track the position information in the two-dimensional code.
  12. The system according to claim 10, wherein the recognition module comprises:
    a first determination module configured to determine one or more main positioning blocks of the two-dimensional code;
    a first acquisition module configured to acquire, for each main positioning block, position information of a center point and position information of a plurality of corner points; and
    a first detection module configured to perform feature detection using the acquired center point position information and corner point position information of the main positioning blocks as a feature point set.
  13. The system according to claim 12, wherein the first acquisition module acquires position information of 12, 8 or 4 corner points for each main positioning block.
  14. The system according to claim 10, wherein the recognition module comprises:
    a second determination module configured to determine one or more main positioning blocks of the two-dimensional code;
    a second acquisition module configured to acquire, for each main positioning block, position information of a center point and position information of a plurality of black-white pixel boundary edge midpoints; and
    a second detection module configured to perform feature detection using the acquired center point position information and black-white pixel boundary edge midpoint position information of the main positioning blocks as a feature point set.
  15. The system according to claim 14, wherein the second acquisition module acquires position information of 12, 8 or 4 black-white pixel boundary edge midpoints for each main positioning block.
  16. The system according to claim 10, wherein the spatial position determination module is configured to: acquire preset standard position information, and match the standard position information with the position information in the two-dimensional code to obtain the spatial position information.
  17. The system according to claim 10, wherein the two-dimensional code is a Quick Response (QR) code.
  18. The system according to any one of claims 10 to 17, further comprising:
    a virtual application data acquisition module configured to acquire virtual application data corresponding to the two-dimensional code; and
    a virtual application data position determination module configured to determine a spatial position of the virtual application data according to the spatial position information.
PCT/CN2017/093370 2016-07-22 2017-07-18 Method and system for recognizing the position of a two-dimensional code WO2018014828A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
SG11201900444XA SG11201900444XA (en) 2016-07-22 2017-07-18 Method and system for recognizing location information in two-dimensional code
KR1020197005252A KR102104219B1 (ko) 2016-07-22 2017-07-18 2차원 코드 내 위치 정보를 인지하기 위한 방법 및 시스템
JP2019503329A JP6936306B2 (ja) 2016-07-22 2017-07-18 2次元コードの位置情報を認識する方法及びシステム
EP17830462.2A EP3489856B1 (en) 2016-07-22 2017-07-18 Method and system for recognizing location information in two-dimensional code
MYPI2019000141A MY193939A (en) 2016-07-22 2017-07-18 Method and system for recognizing location information in two-dimensional code
US16/252,138 US10685201B2 (en) 2016-07-22 2019-01-18 Method and system for recognizing location information in two-dimensional code

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610584889.1A CN106897648B (zh) 2016-07-22 2016-07-22 识别二维码位置的方法及其***
CN201610584889.1 2016-07-22

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/252,138 Continuation US10685201B2 (en) 2016-07-22 2019-01-18 Method and system for recognizing location information in two-dimensional code

Publications (1)

Publication Number Publication Date
WO2018014828A1 true WO2018014828A1 (zh) 2018-01-25

Family

ID=59190965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/093370 WO2018014828A1 (zh) 2016-07-22 2017-07-18 Method and system for recognizing the position of a two-dimensional code

Country Status (9)

Country Link
US (1) US10685201B2 (zh)
EP (1) EP3489856B1 (zh)
JP (1) JP6936306B2 (zh)
KR (1) KR102104219B1 (zh)
CN (2) CN106897648B (zh)
MY (1) MY193939A (zh)
SG (1) SG11201900444XA (zh)
TW (1) TWI683257B (zh)
WO (1) WO2018014828A1 (zh)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897648B (zh) 2016-07-22 2020-01-31 阿里巴巴集团控股有限公司 识别二维码位置的方法及其***
CN107689061A (zh) * 2017-07-11 2018-02-13 西北工业大学 用于室内移动机器人定位的规则图形码及定位方法
CN112861560B (zh) * 2017-09-27 2023-12-22 创新先进技术有限公司 二维码定位方法及装置
CN107895138B (zh) * 2017-10-13 2020-06-23 西安艾润物联网技术服务有限责任公司 空间障碍物检测方法、装置及计算机可读存储介质
CN108629220A (zh) * 2018-03-23 2018-10-09 阿里巴巴集团控股有限公司 一种二维码识读方法、装置及设备
CN108960384B (zh) * 2018-06-07 2020-04-28 阿里巴巴集团控股有限公司 一种图形码的解码方法及客户端
CN111507119B (zh) * 2019-01-31 2024-02-06 北京骑胜科技有限公司 标识码识别方法、装置、电子设备及计算机可读存储介质
US10854016B1 (en) * 2019-06-20 2020-12-01 Procore Technologies, Inc. Computer system and method for creating an augmented environment using QR tape
CN110481602B (zh) * 2019-07-15 2021-06-25 广西柳钢东信科技有限公司 一种轨道运输设备的实时定位方法及装置
CN110539307A (zh) * 2019-09-09 2019-12-06 北京极智嘉科技有限公司 机器人、机器人定位方法、定位导航***及定位标记
CN110852132B (zh) * 2019-11-15 2023-10-03 北京金山数字娱乐科技有限公司 一种二维码空间位置确认方法及装置
WO2022032680A1 (zh) * 2020-08-14 2022-02-17 深圳传音控股股份有限公司 操作方法、终端及计算机存储介质
CN112560606B (zh) * 2020-12-02 2024-04-16 北京经纬恒润科技股份有限公司 挂车角度识别方法及装置
CN113384361B (zh) * 2021-05-21 2022-10-28 中山大学 一种视觉定位方法、***、装置及存储介质
CN113377351B (zh) * 2021-07-05 2022-05-17 重庆市规划和自然资源信息中心 用于大规模政务业务的模型构建工作***
CN113935909B (zh) * 2021-09-22 2024-07-05 南方电网数字平台科技(广东)有限公司 一种二维码校正识别方法及装置
CN115578606B (zh) * 2022-12-07 2023-03-31 深圳思谋信息科技有限公司 二维码识别方法、装置、计算机设备及可读存储介质
CN115630663A (zh) * 2022-12-19 2023-01-20 成都爱旗科技有限公司 一种二维码识别方法、装置及电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049728A (zh) * 2012-12-30 2013-04-17 成都理想境界科技有限公司 基于二维码的增强现实方法、***及终端
CN103971079A (zh) * 2013-01-28 2014-08-06 腾讯科技(深圳)有限公司 一种二维码的增强现实实现方法和装置
CN104809422A (zh) * 2015-04-27 2015-07-29 江苏中科贯微自动化科技有限公司 基于图像处理的qr码识别方法
US20150278573A1 (en) * 2012-07-23 2015-10-01 Korea Advanced Institute Of Science And Technology Method of recognizing qr code in image data and apparatus and method for converting qr code in content data into touchable object
CN106897648A (zh) * 2016-07-22 2017-06-27 阿里巴巴集团控股有限公司 识别二维码位置的方法及其***

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2835274B2 (ja) 1994-02-24 1998-12-14 株式会社テック 画像認識装置
US6073846A (en) 1994-08-17 2000-06-13 Metrologic Instruments, Inc. Holographic laser scanning system and process and apparatus and method
US6416154B1 (en) 1997-07-12 2002-07-09 Silverbrook Research Pty Ltd Printing cartridge with two dimensional code identification
US6377710B1 (en) * 1998-11-25 2002-04-23 Xerox Corporation Method and apparatus for extracting the skeleton of a binary figure by contour-based erosion
JP3458737B2 (ja) 1998-11-27 2003-10-20 株式会社デンソー 2次元コードの読取方法及び記録媒体
DE60118051T2 (de) 2000-04-06 2006-08-31 Seiko Epson Corp. Verfahren und Vorrichtung zum Lesen von einem zwei-dimensionalen Strichkode und Datenspeichermedium
US6959866B2 (en) * 2002-05-30 2005-11-01 Ricoh Company, Ltd. 2-Dimensional code pattern, 2-dimensional code pattern supporting medium, 2-dimensional code pattern generating method, and 2-dimensional code reading apparatus and method
JP3516144B1 (ja) 2002-06-18 2004-04-05 オムロン株式会社 光学情報コードの読取方法および光学情報コード読取装置
JP4301775B2 (ja) 2002-07-18 2009-07-22 シャープ株式会社 2次元コード読み取り装置,2次元コード読み取り方法,2次元コード読み取りプログラム及び該プログラムの記録媒体
JP3996520B2 (ja) 2003-01-30 2007-10-24 株式会社デンソーウェーブ 二次元情報コードおよびその生成方法
JP4180497B2 (ja) 2003-12-05 2008-11-12 富士通株式会社 コード種類判別方法、およびコード境界検出方法
US7751629B2 (en) 2004-11-05 2010-07-06 Colorzip Media, Inc. Method and apparatus for decoding mixed code
JP4810918B2 (ja) 2005-08-01 2011-11-09 富士ゼロックス株式会社 コードパターン画像生成装置及び方法、コードパターン画像読取装置及び方法、及びコードパターン画像媒体
KR100828539B1 (ko) * 2005-09-20 2008-05-13 후지제롯쿠스 가부시끼가이샤 이차원 코드의 검출 방법, 검출 장치, 및 검출 프로그램을기억한 기억 매체
JP2007090448A (ja) * 2005-09-27 2007-04-12 Honda Motor Co Ltd 二次元コード検出装置及びそのプログラム、並びに、ロボット制御情報生成装置及びロボット
JP4911340B2 (ja) 2006-02-10 2012-04-04 富士ゼロックス株式会社 二次元コード検出システムおよび二次元コード検出プログラム
US8532299B2 (en) * 2007-05-29 2013-09-10 Denso Wave Incorporated Method for producing two-dimensional code and reader for reading the two-dimensional code
JP4956375B2 (ja) 2007-10-30 2012-06-20 キヤノン株式会社 画像処理装置、画像処理方法
CN101615259B (zh) 2008-08-01 2013-04-03 凌通科技股份有限公司 一种二维光学辨识码的识别***
US20110290882A1 (en) * 2010-05-28 2011-12-01 Microsoft Corporation Qr code detection
CN101908125B (zh) * 2010-06-01 2014-07-02 福建新大陆电脑股份有限公司 Qr码条码解码芯片及其解码方法
KR101334049B1 (ko) * 2011-03-16 2013-11-28 성준형 증강 현실 기반 사용자 인터페이스 제공 장치 및 방법
CN103582893B (zh) * 2011-06-08 2017-10-13 英派尔科技开发有限公司 用于增强现实表示的二维图像获取
CN103049729B (zh) * 2012-12-30 2015-12-23 成都理想境界科技有限公司 基于二维码的增强现实方法、***及终端
US20140210857A1 (en) * 2013-01-28 2014-07-31 Tencent Technology (Shenzhen) Company Limited Realization method and device for two-dimensional code augmented reality
JP5967000B2 (ja) * 2013-03-29 2016-08-10 株式会社デンソーウェーブ 情報コード読取システム、情報コード読取装置、情報コード
CN104517108B (zh) * 2013-09-29 2017-12-22 北大方正集团有限公司 一种确定qr码二值化图像边缘线的方法及***
CN103632384B (zh) * 2013-10-25 2016-06-01 大连理工大学 组合式标记点及标记点中心的快速提取方法
WO2015067725A1 (en) * 2013-11-07 2015-05-14 Scantrust Sa Two dimensional barcode and method of authentication of such barcode
JP2015191531A (ja) * 2014-03-28 2015-11-02 株式会社トッパンTdkレーベル 2次元コードの空間位置の決定方法及びそのための装置
CN104008359B (zh) * 2014-04-18 2017-04-12 杭州晟元数据安全技术股份有限公司 一种用于qr码识别的精确网格采样方法
WO2015174191A1 (ja) * 2014-05-14 2015-11-19 共同印刷株式会社 二次元コード、二次元コードの解析システム
KR101770540B1 (ko) * 2014-05-14 2017-08-22 교도 인사쯔 가부시키가이샤 이차원 코드, 이차원 코드의 해석 시스템 및 이차원 코드의 작성 시스템
CN104268498B (zh) * 2014-09-29 2017-09-19 杭州华为数字技术有限公司 一种二维码的识别方法及终端
CN104951726B (zh) * 2015-06-25 2017-12-08 福建联迪商用设备有限公司 用于qr二维码位置探测的方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150278573A1 (en) * 2012-07-23 2015-10-01 Korea Advanced Institute Of Science And Technology Method of recognizing qr code in image data and apparatus and method for converting qr code in content data into touchable object
CN103049728A (zh) * 2012-12-30 2013-04-17 成都理想境界科技有限公司 基于二维码的增强现实方法、***及终端
CN103971079A (zh) * 2013-01-28 2014-08-06 腾讯科技(深圳)有限公司 一种二维码的增强现实实现方法和装置
CN104809422A (zh) * 2015-04-27 2015-07-29 江苏中科贯微自动化科技有限公司 基于图像处理的qr码识别方法
CN106897648A (zh) * 2016-07-22 2017-06-27 阿里巴巴集团控股有限公司 识别二维码位置的方法及其***

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3489856A4 *

Also Published As

Publication number Publication date
JP6936306B2 (ja) 2021-09-15
TWI683257B (zh) 2020-01-21
US10685201B2 (en) 2020-06-16
SG11201900444XA (en) 2019-02-27
CN111291584A (zh) 2020-06-16
KR20190032501A (ko) 2019-03-27
EP3489856A4 (en) 2020-03-04
TW201804373A (zh) 2018-02-01
CN106897648A (zh) 2017-06-27
EP3489856B1 (en) 2022-04-06
EP3489856A1 (en) 2019-05-29
MY193939A (en) 2022-11-02
KR102104219B1 (ko) 2020-04-24
CN111291584B (zh) 2023-05-02
CN106897648B (zh) 2020-01-31
JP2019523500A (ja) 2019-08-22
US20190156092A1 (en) 2019-05-23

Similar Documents

Publication Publication Date Title
WO2018014828A1 (zh) Method and system for recognizing the position of a two-dimensional code
US20200279121A1 (en) Method and system for determining at least one property related to at least part of a real environment
WO2019218824A1 (zh) 一种移动轨迹获取方法及其设备、存储介质、终端
CN107392958B (zh) 一种基于双目立体摄像机确定物体体积的方法及装置
EP2711670B1 (en) Visual localisation
Chen et al. City-scale landmark identification on mobile devices
JP5950973B2 (ja) フレームを選択する方法、装置、及びシステム
Gao et al. Robust RGB-D simultaneous localization and mapping using planar point features
WO2018210047A1 (zh) 数据处理方法、数据处理装置、电子设备及存储介质
CN106485186B (zh) 图像特征提取方法、装置、终端设备及***
CN112435338B (zh) 电子地图的兴趣点的位置获取方法、装置及电子设备
CN110363179B (zh) 地图获取方法、装置、电子设备以及存储介质
CN112102404B (zh) 物体检测追踪方法、装置及头戴显示设备
CN110443228B (zh) 一种行人匹配方法、装置、电子设备及存储介质
Tomono Loop detection for 3D LiDAR SLAM using segment-group matching
JP2022519398A (ja) 画像処理方法、装置及び電子機器
CN113298871B (zh) 地图生成方法、定位方法及其***、计算机可读存储介质
Bae et al. Fast and scalable 3D cyber-physical modeling for high-precision mobile augmented reality systems
JP2018120320A (ja) 画像処理装置,画像処理方法,画像処理プログラム
Pereira et al. Mirar: Mobile image recognition based augmented reality framework
Amato et al. Technologies for visual localization and augmented reality in smart cities
CN105930813B (zh) 一种在任意自然场景下检测行文本的方法
Choi et al. Smart Booklet: Tour guide system with mobile augmented reality
Park et al. Real‐time robust 3D object tracking and estimation for surveillance system
US20220414998A1 (en) Augmenting a first image with a second image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17830462

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019503329

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20197005252

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017830462

Country of ref document: EP

Effective date: 20190222