CN114283065B - ORB feature point matching system and method based on hardware acceleration - Google Patents

ORB feature point matching system and method based on hardware acceleration

Info

Publication number
CN114283065B
CN114283065B (application CN202111618882.4A)
Authority
CN
China
Prior art keywords: signature, data, calculation module, value, module
Legal status: Active
Application number: CN202111618882.4A
Other languages: Chinese (zh)
Other versions: CN114283065A
Inventors: 张延军, 黄百铖, 卢继华
Current Assignee: Beijing Institute of Technology (BIT)
Original Assignee: Beijing Institute of Technology (BIT)
Application filed by Beijing Institute of Technology (BIT)
Priority to CN202111618882.4A
Publication of CN114283065A
Application granted
Publication of CN114283065B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an ORB feature point matching system and method based on hardware acceleration, and belongs to the technical fields of hardware acceleration, simultaneous localization and mapping, and image stitching. The system comprises a camera module, a PS end and a PL end. The PS end comprises an image storage module, a direction vector calculation module and a pose calculation module; the PL end comprises a FAST algorithm execution module, an address calculation module, a rotation invariance descriptor and signature calculation module, a Hamming distance calculation module, a rotation invariance descriptor storage unit, a signature storage unit and a matching result storage unit. The camera module is connected with the PS end, and the PS end is connected with the PL end. The method performs low-complexity dimension reduction on the pixels surrounding each feature point to construct a low-bit-width signature value, coarsely screens the feature points to be matched during matching, and then calculates and compares the Hamming distances of the BRIEF descriptors of the remaining feature points. The method saves memory resources and improves the accuracy and speed of ORB feature point matching.

Description

ORB feature point matching system and method based on hardware acceleration
Technical Field
The invention relates to an ORB feature point matching system and method based on hardware acceleration, and belongs to the technical fields of hardware acceleration, simultaneous localization and mapping, and image stitching.
Background
The extraction and matching of feature points has long been an important research topic in computer vision, because it forms a key link in algorithms such as image stitching and fusion, three-dimensional reconstruction, vision-based simultaneous localization and mapping (SLAM) and image tracking. In visual SLAM, regardless of the type of camera used, feature points must be extracted and matched between two adjacent frames, after which pose estimation is completed from the matched feature point information. Feature point matching is often the most time-consuming link in these computer vision algorithms, because the number of feature points per image is large and, during matching, the Euclidean or Hamming distance between the current frame and the previous frame (or the global feature point set) is computed by traversal. Among current feature point algorithms, Oriented FAST and Rotated BRIEF (ORB) is considered the best trade-off between speed and accuracy, because it detects feature points quickly with the Features from Accelerated Segment Test (FAST) corner detector while constructing descriptors with a simple, rotation-invariant variant of Binary Robust Independent Elementary Features (BRIEF). The ORB algorithm therefore performs well in SLAM systems. In embedded applications, a common way of accelerating the feature point matching algorithm is to design a custom circuit on a field programmable gate array (FPGA). In the traditional FPGA implementation of ORB feature point matching, the system reads the descriptors of the previous frame one by one from the storage module and calculates the Hamming distance between each of them and a single descriptor of the current frame. The Hamming distance is obtained by counting the bits in which the two descriptors differ, and the two feature points with the smallest Hamming distance are considered a matched pair. The number of feature points extracted by a SLAM system varies with the environment and the image size, and is typically in the thousands, so comparing Hamming distances one by one makes ORB feature point matching time-consuming.
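For reference, the brute-force matching step described above can be expressed as the following software sketch (Python; 256-bit BRIEF descriptors are assumed to be stored as integers, and all names are illustrative rather than part of the invention):

    # Minimal sketch of brute-force ORB matching by Hamming distance (software model).
    # Assumes 256-bit BRIEF descriptors stored as Python integers; names are illustrative.

    def hamming_distance(d1: int, d2: int) -> int:
        """Count the bits in which two descriptors differ."""
        return bin(d1 ^ d2).count("1")

    def brute_force_match(curr_descriptors, prev_descriptors):
        """For every current-frame descriptor, scan all previous-frame descriptors
        and keep the one with the smallest Hamming distance."""
        matches = []
        for i, d_curr in enumerate(curr_descriptors):
            best_j, best_dist = -1, 257        # 257 exceeds any 256-bit distance
            for j, d_prev in enumerate(prev_descriptors):
                dist = hamming_distance(d_curr, d_prev)
                if dist < best_dist:
                    best_j, best_dist = j, dist
            matches.append((i, best_j, best_dist))
        return matches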
An existing hardware accelerator patent completes image downsampling, feature extraction and feature matching in FPGA hardware. Several Hamming distance calculation units and comparators are designed into the matching unit, which raises the matching parallelism. However, the Hamming distance between the current frame descriptor and every descriptor of the previous frame (or of the global set) still has to be calculated during comparison, so a large amount of computation remains.
Another patent discloses a panoramic image stitching method. In its FPGA architecture, two frames of images are transferred, under the control of an Advanced RISC Machine (ARM) microprocessor, through a Video Direct Memory Access (VDMA) IP core to the IP core generated for feature detection and descriptor computation, while feature matching and image fusion are completed by a Xilinx MicroBlaze embedded soft-core processor. Having both an ARM hard core and a MicroBlaze soft core in the architecture is a large waste of resources and power for an algorithm as small as image stitching. The design uses the MicroBlaze soft core for the feature matching task, which is computationally simple but has a large workload; this neither exploits the microprocessor's advantage in handling complex computation nor brings any improvement in matching speed.
In a video stitching display method and system, feature point matching is completed in a registration unit of an FPGA chip using a ratio matching method. The system obtains matched feature points by calculating the distance between feature points and comparing it with a threshold 1, calculating the ratio between the Hamming distances from the closest and second-closest feature points of the target image to the sample feature point, and judging whether a feature point is a match according to the relation between this ratio and a threshold 2. Although the threshold comparison performs a certain amount of screening, the method cannot guarantee accuracy. The application gives no method for selecting the thresholds, and the distances between the target image feature points and the sample feature points are still calculated by traversal before the threshold comparison. The application does not address, at the circuit design level, the time consumed by comparing the target image feature points with the threshold one by one, and the divider needed to calculate the ratio brings additional resource consumption.
Existing feature point matching methods based on hardware acceleration therefore have the following problems:
1) The feature points to be matched are not screened at the algorithm level; the usual approach only increases matching parallelism so that the Hamming distances of several descriptor pairs are calculated at the same time, which increases the hardware resource cost;
2) The time consumed by operating on the sample feature points and the target image feature points one by one is not addressed at the circuit design level.
Disclosure of Invention
The invention aims to solve the problem that existing hardware-accelerated ORB feature point matching methods find it difficult to strike a good balance between resource utilization and matching speed, and provides an ORB feature point matching system and matching method based on hardware acceleration.
The core idea of the invention is as follows: during ORB feature point matching, low-complexity dimension reduction is applied to the pixels surrounding each feature point to construct a low-bit-width signature value for the feature point; at matching time, the feature points to be matched are coarsely screened by comparing the signature values of several feature points at once, and the Hamming distances of the BRIEF descriptors are then calculated and compared only for the feature points that survive the coarse screening.
In order to achieve the above purpose, the present invention adopts the following technical scheme.
The ORB characteristic point matching system based on hardware acceleration comprises a camera module, a PS end and a PL end; the camera module is connected with the PS end, and the PS end is connected with the PL end.
Wherein PS denotes the processing system side of the device, i.e. the processor subsystem (PS is an abbreviation of processing system); PL denotes the programmable logic side of the device, i.e. the FPGA fabric (PL is an abbreviation of programmable logic);
The PS end comprises an image storage module, a direction vector calculation module and a pose calculation module; the PL end comprises a FAST algorithm execution module, an address calculation module, a rotation invariance descriptor and signature calculation module, a Hamming distance calculation module, a rotation invariance descriptor storage unit, a signature storage unit and a matching result storage unit.
The rotation invariance descriptor storage unit comprises a first rotation invariance descriptor storage unit and a second rotation invariance descriptor storage unit; the signature storage unit comprises a first signature storage unit and a second signature storage unit;
the address calculation module comprises a single signature register, a multi-signature register, comparators, a comparison result register, non-zero value judging units, first-in first-out (FIFO) memories A, a data transfer unit, left shift units, FIFO memories B and an address calculation unit;
wherein the comparators comprise a first comparator to an m-th comparator, the non-zero value judging units comprise a first non-zero value judging unit to a k-th non-zero value judging unit, the FIFO memories A comprise FIFO memory A1 to FIFO memory Ak, the left shift units comprise a first left shift unit to an i-th left shift unit, and the FIFO memories B comprise FIFO memory B1 to FIFO memory Bi;
the single signature register is connected with the rotation invariance descriptor and signature calculation module, and the multi-signature register is connected with the signature storage unit storing the signatures of the previous frame image; the multi-signature register and the single signature register are connected with the comparators, the comparators are connected with the comparison result register, the comparison result register is connected with the first non-zero value judging unit, the first non-zero value judging unit is connected with FIFO memory A1, FIFO memory A1 is connected with the second non-zero value judging unit, the second non-zero value judging unit is connected with FIFO memory A2, and so on up to FIFO memory Ak.
The connection mode of each module in the characteristic point matching system is as follows:
The camera module is connected with the image storage module, the image storage module is respectively connected with the FAST algorithm execution module and the direction vector calculation module, the FAST algorithm execution module is connected with the direction vector calculation module, the direction vector calculation module is connected with the rotation invariance descriptor and the signature calculation module, the rotation invariance descriptor and the signature calculation module are connected with the address calculation module, the Hamming distance calculation module, the signature storage unit and the rotation invariance descriptor storage unit, the signature storage unit is connected with the address calculation module, the address calculation module and the rotation invariance descriptor storage unit are connected with the Hamming distance calculation module, the Hamming distance calculation module is connected with the matching result storage unit, and the matching result storage unit is connected with the pose calculation module.
The camera module collects pictures and transmits the pictures to the image storage module;
The image storage module performs window scanning operation on the acquired pictures to obtain image blocks, and transmits the image blocks to the FAST algorithm execution module;
The FAST algorithm execution module screens out characteristic points according to the relation between the central pixel point of the image block and the pixel values of the neighborhood circle pixel points of the central pixel point, calculates the moment of the image block, and transmits the coordinates and the moment of the characteristic points to the direction vector calculation module;
The direction vector calculation module calculates the rotation angle of the feature points, extracts pixel values of a plurality of point pairs for calculating BRIEF descriptors and signature values from the image storage module according to the feature point coordinates and the rotation angle, and transmits the feature point coordinates and the pixel values of the plurality of point pairs to the rotation invariance descriptors and the signature calculation module;
the rotation invariance descriptor and signature calculation module calculates the rotation invariance descriptor and the signature, and transmits each single signature value to the address calculation module; meanwhile, the rotation invariance descriptors and the concatenation of several signature values are written, alternately according to the order of picture frame processing, into one pair of rotation invariance descriptor storage unit and signature storage unit, so that the rotation invariance descriptors and signature information calculated for the previous frame remain in the other pair of rotation invariance descriptor storage unit and signature storage unit;
the address calculation module takes the signature values of the feature points of the current frame and of the previous frame from the rotation invariance descriptor and signature calculation module and from the signature storage unit respectively, discards the feature points whose signature values differ, calculates the addresses of the storage units holding the rotation invariance descriptors of the feature points whose signature values are the same, and transmits these addresses to the Hamming distance calculation module;
the operation of each module in the address calculation module is specifically as follows:
The value output by the rotation invariance descriptor and signature calculation module into the single signature register is compared with each of the m parts into which the value output by the signature storage unit into the multi-signature register is cut, yielding an m-bit comparison result; specifically, if the value of the single signature register is the same as a given part of the multi-signature register, the corresponding output bit is 1, and if it differs, the corresponding output bit is 0;
the multi-signature register is formed by concatenating the signature values of m feature points of the previous frame, so that when it is cut into m parts each part is the signature value of one feature point of the previous frame;
The comparison result register sends the m-bit comparison result to the first non-zero value judging unit; the first non-zero value judging unit counts how many comparison results have been received and passes this count on as level information; meanwhile, the first non-zero value judging unit splits the m-bit data into a front half and a back half and judges whether each half is all-zero data; all-zero data are discarded; non-zero data, carrying the level information and the front/back identification information of this stage, are transferred to FIFO memory A1;
S10: when FIFO memory A1 is not empty, the second non-zero value judging unit reads the data from FIFO memory A1, further splits the data produced by the first non-zero value judging unit into a front half and a back half, and judges whether each half is all-zero data; all-zero data are discarded; non-zero data, carrying the level information, the front/back identification information of the previous stage and the front/back identification information of this stage, are transferred to FIFO memory A2;
The second through k-th non-zero value judging units each perform the operation of S10, so that the system operates as a pipeline and data with value 0 are screened out; when FIFO memory Ak is not empty, the data transfer unit reads the data from FIFO memory Ak;
wherein the data in FIFO memory Ak contain the complete level information and the front/back identification information of every stage;
the data transfer unit sends the data it obtains, in round-robin fashion, to the first through i-th left shift units so that the data are processed as a pipeline; each left shift unit shifts the non-information bits of the data it receives to the left and, when the highest bit is non-zero, transfers the number of left shifts together with the information bits to the corresponding FIFO memory B;
the information bits comprise the level information and the front/back identification information of every stage; when the number of left shifts equals the bit width of the non-information bits, the processing of one datum is complete and the unit waits for the next datum from the data transfer unit;
When FIFO memories B1 to Bi are not empty, the address calculation unit reads the data from the FIFOs;
these data comprise the level information, the front/back identification information of every stage and the left shift count; the three kinds of information are multiplied by the corresponding weight coefficients and accumulated, which yields the address, in the rotation invariance descriptor storage unit, of the feature point to be matched, and this address is output to the Hamming distance calculation module;
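For clarity, the screening performed by the address calculation module can be summarized by the following behavioral software sketch. It is a simplification under stated assumptions (m signatures packed per word, halving continued down to single bits), not a register-level description of the circuit, and all names are illustrative:

    # Behavioral sketch of the address calculation module (not a register-level model).
    # Assumes each word of the signature storage unit packs m previous-frame signatures;
    # the hardware stops halving at small groups and uses left shifts, whereas this
    # sketch simply halves down to single bits.

    def compare_signatures(single_signature, packed_signatures):
        """m-bit comparison result: bit i is 1 when previous-frame signature i equals
        the signature of the current feature point."""
        return [1 if s == single_signature else 0 for s in packed_signatures]

    def nonzero_screen(bits, offset=0):
        """Recursively split the comparison result in half, discard all-zero halves,
        and return the 0-indexed positions of the remaining 1 bits."""
        if not any(bits):
            return []                          # all-zero data are dropped immediately
        if len(bits) == 1:
            return [offset]
        half = len(bits) // 2
        return (nonzero_screen(bits[:half], offset) +
                nonzero_screen(bits[half:], offset + half))

    def candidate_addresses(level, single_signature, packed_signatures, m=80):
        """Addresses (1-based) in the descriptor storage unit of previous-frame feature
        points whose signature equals the current one; 'level' counts words from 1."""
        bits = compare_signatures(single_signature, packed_signatures)
        return [(level - 1) * m + pos + 1 for pos in nonzero_screen(bits)]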
The Hamming distance calculation module takes the corresponding rotation invariance descriptors from the first and second rotation invariance descriptor storage units according to the addresses transmitted by the address calculation module, completes the Hamming distance calculation and the comparison of the distances, and outputs the feature point pair with the smallest Hamming distance to the matching result storage unit as the matching result; the pose calculation module then reads the data of the matching result storage unit and completes pose calculation from the matching relation of the image feature points.
The ORB feature point matching method based on hardware acceleration comprises the following steps:
Step A, the camera module of the ORB feature point matching system captures a frame of RGB image;
Step B, detecting feature points in the image by using a FAST algorithm, and calculating ORB descriptors corresponding to the feature points;
Step C, calculating the signature values of the feature points, specifically as follows:
Step C.1, selecting j point pairs within the P×P window in which the feature point is located, to be used for calculating the BRIEF descriptor and the score values required for the signature value;
wherein k score values are calculated from the point pairs, and the signature value is formed by quantizing the k score values;
Step C.2, the point-pair pixel values selected in step C.1 are input simultaneously to k signature calculation modules, each signature calculation module outputs 1 score value, and k score values are output in total, specifically:
in each signature calculation module, an F function is first applied to the pixel values of each point pair, the result is multiplied by an operation coefficient, and the products are accumulated to form a score value;
wherein the same F function is used for all point pairs within one score value calculation, and different F functions are used for different score value calculations; within one score value calculation, the operation coefficients of different point pairs may be the same or different;
Step C.3, uniformly dividing a preset interval from a value min to a value max into 2^n sub-intervals, and replacing the score value with the number of the sub-interval in which it falls, which completes the quantization of one score value;
wherein the 2^n sub-intervals are numbered 0, 1, 2, ..., 2^n − 1, and n is the quantization bit width;
Step C.4, concatenating the k quantized score values to form the signature value of the feature point;
Step D, extracting the signature value s1 and the descriptor d1 of the next feature point of the current frame image; this feature point serves as the first point to be matched;
Step E, inputting the signature value s2 of the next feature point of the previous frame image; this feature point serves as the second point to be matched;
Step F, judging whether the signature value s1 and the signature value s2 are the same; if they differ, returning to step E; if they are the same, proceeding to step G;
Step G, inputting the descriptor d2 of the second point to be matched;
Step H, calculating the Hamming distance between the descriptor d1 and the descriptor d2;
Step I, judging whether all feature points of the previous frame have been traversed; if not, returning to step E; if so, the feature point of the previous frame image that matches the first point to be matched is obtained from the minimum Hamming distance, and the method proceeds to step J;
Step J, judging whether all feature points of the current frame have been traversed; if not, returning to step D; if so, proceeding to step K;
Step K, outputting the coordinate pairs of all matched feature points of the current frame image and the previous frame image.
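The flow of steps D to K can be summarized by the following software sketch (Python; it assumes the signature values, descriptors and coordinates of both frames have already been calculated, and it selects the match by the minimum Hamming distance as in the embodiments; all names are illustrative):

    # Sketch of the matching loop of steps D to K.
    # curr_points / prev_points: lists of (signature, descriptor, coordinates) tuples.

    def match_frames(curr_points, prev_points):
        matched_pairs = []
        for sig1, d1, xy1 in curr_points:                   # steps D and J
            best_xy, best_dist = None, None
            for sig2, d2, xy2 in prev_points:               # steps E and I
                if sig1 != sig2:                            # step F: coarse screening
                    continue
                dist = bin(d1 ^ d2).count("1")              # steps G and H
                if best_dist is None or dist < best_dist:
                    best_xy, best_dist = xy2, dist
            if best_xy is not None:
                matched_pairs.append((xy1, best_xy))
        return matched_pairs                                # step K: coordinate pairs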
Advantageous effects
Compared with the existing feature point matching system and matching method based on FPGA, the ORB feature point matching system and matching method based on hardware acceleration have the following beneficial effects:
1. The method adopts a software/hardware co-design approach: pictures are stored in the double data rate synchronous dynamic random access memory (DDR SDRAM, DDR) on the PS end, which can be read by the PS end through the operating system and by the PL end through the AXI bus; this saves block random access memory resources, simplifies scheduling, improves memory access efficiency and allows larger pictures to be processed;
2. The method performs the rotation angle calculation on the PS end; compared with the coordinate rotation digital computer (CORDIC) method used on the PL end in traditional FPGA implementations of feature point matching, this effectively reduces FPGA resource usage;
3. In the process of calculating the signature values, the score value information fully distills the image environment information of the feature points; the difference between the score values of two feature points is positively correlated with the sum of absolute differences (SAD) of the 27×27 windows in which they are located, so comparing signatures resembles template matching, which ultimately reduces feature point mismatching;
4. The method applies simple function operations to the pixel point pairs within the 27×27 window centered on the feature point, giving them numerical information that the BRIEF descriptor, which is obtained only by comparing magnitudes in the traditional method, does not possess; a data structure over all feature points of one frame is thereby constructed, which facilitates screening when the next frame is matched;
5. The method quantizes the pixel point operation results for dimension reduction, reducing the storage space of the feature point signature values, while the particular operation scheme preserves the information after dimension reduction, achieving an effect similar to principal component analysis (PCA) dimension reduction; the fast matching scheme of the invention therefore attains an accuracy close to that of traditional ORB feature point matching;
6. The address calculation module of the method performs non-zero judgment on multiple signature comparison results in a pipeline, screens out the feature points to be matched within a few clock cycles, and calculates the addresses of the corresponding descriptors; the Hamming distance comparison module then only needs to fetch a small number of descriptors according to these addresses; compared with the traditional feature point matching scheme that merely raises the parallelism of the Hamming distance comparison module and matches all descriptors one by one, the method is at least 5 times faster for the same FPGA resource consumption.
Drawings
FIG. 1 is a diagram of the module connection relationship of an ORB feature point matching system based on hardware acceleration;
FIG. 2 is a flow chart of an ORB feature point matching method based on hardware acceleration;
FIG. 3 is a schematic diagram of a signature calculation method in an ORB feature point matching method based on hardware acceleration;
FIG. 4 is a diagram of an address calculation module circuit implementation in an ORB feature point matching system based on hardware acceleration.
Detailed Description
The invention relates to an ORB feature point matching system and a matching method based on hardware acceleration, which are described in detail below with reference to specific embodiments and drawings.
Example 1
In this implementation, the ORB feature point matching system based on hardware acceleration is shown in fig. 1. The system is built as an ORB feature point matching circuit on the system-on-chip provided by a Xilinx ZCU FPGA development board, and the camera module is an RGBD camera module. The PS end and the PL end of the FPGA development board exchange data through the AXI bus and an interrupt control interface. The PL end is connected to a High Performance (HP) interface of the PS end through the AXI bus for bulk data interaction.
On this interface, the PL end acts as the AXI master and the PS end acts as the AXI slave. The PL end is also connected to a General Purpose (GP) interface of the PS end through the AXI bus for small-scale instruction and configuration information interaction; on that interface, the PL end acts as the AXI slave and the PS end acts as the AXI master. The PL end is connected to the interrupt control interface of the PS end through an interrupt controller IP core, which is used to notify the PS end to execute the corresponding interrupt service routine.
The system adopts a software/hardware co-design that plays to the strengths of both the PS end and the PL end. The camera driver on the PS end allows the camera to be called quickly; the DDR is used to store pictures and can be accessed by the PS end through the operating system and by the PL end through the AXI interface, which saves block random access memory resources, simplifies scheduling and allows larger pictures to be processed. Compared with the traditional feature point matching implementation, in which the rotation angle is calculated by a CORDIC module in the FPGA, calculating the rotation angle on the PS end reduces FPGA resource usage, improves the calculation accuracy, and allows the window pixel coordinates to be extracted from the DDR to be calculated directly from the rotation angle.
The flow of the ORB feature point matching method based on hardware acceleration is shown in fig. 2. The RGBD camera captures depth maps and color maps with a pixel depth of 32 bits at 60 frames per second, and the PS end converts the color map into a gray-scale map and saves it to the DDR. When the system starts running, the PL end notifies the PS end through the interrupt controller to perform a window scanning operation with a window size of 7×7 on the gray-scale image and obtain the pixel addresses required by the FAST operation. The FAST operation relies on the pixel values of the 16 pixels on a circle of radius 3 pixels around the window center, and these pixel values are stored at non-contiguous DDR addresses. The PS end sends these 16 addresses and the address of the window center pixel to the PL end through the GP interface using the AXI LITE protocol. After obtaining the 17 addresses, the PL end issues all 17 addresses at once through the HP interface, using the outstanding-transaction capability of the AXI bus, and reads the corresponding window pixel information from the DDR. After the FAST algorithm execution module obtains the pixel values at the 17 addresses, it judges whether 9 consecutive circle pixel values are all larger than Ip + T or all smaller than Ip − T; if so, the window center is a feature point of the original image. Here Ip is the center pixel value and T is the center pixel value multiplied by 0.2. Once a feature point is determined, the PL end notifies the PS end through the interrupt controller to read the DDR start addresses of the first pixel of each row of the 7×7 window around the feature point. The PS end sends the 7 start addresses to the PL end through the GP interface using the AXI LITE protocol, and the PL end issues all 7 addresses at once through the HP interface, using the outstanding-transaction capability of the AXI bus, to perform burst reads of burst length 8 and obtain all pixel information of the window. The PL end calculates the moments m01 and m10 of the feature point by the formula m_pq = Σ x^p y^q I(x, y), with p, q ∈ {0, 1} and (x, y) ∈ B, and notifies the PS end through the interrupt controller to take the moments m01, m10 of the feature point and its coordinates from the GP interface. The PS end calculates the rotation angle θ of the feature point by θ = arctan(m01 / m10), extracts from the DDR, according to the feature point coordinates, the pixel information of the 27×27 window around the feature point rotated by the angle θ, extracts 256 point pairs from this pixel information, and sends the point-pair pixel values and the feature point coordinates to the PL end through the AXI STREAM protocol.
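For illustration, the FAST test and the orientation calculation described above can be modeled in software as follows (a sketch only; the circle offsets, the 9-pixel contiguity requirement and the threshold T = 0.2·Ip follow the description, while the use of atan2 in place of arctan to resolve the quadrant is an assumption):

    # Software sketch of the FAST segment test and orientation computation.
    import math

    # Offsets of the 16 pixels on a circle of radius 3 around the window centre.
    CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
              (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

    def is_fast_corner(img, x, y, t_ratio=0.2, n_contig=9):
        """True if at least n_contig consecutive circle pixels are all brighter than
        Ip + T or all darker than Ip - T, where T = Ip * t_ratio."""
        ip = img[y][x]
        t = ip * t_ratio
        ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
        ring = ring + ring                                 # wrap around the circle
        for start in range(16):
            seg = ring[start:start + n_contig]
            if all(p > ip + t for p in seg) or all(p < ip - t for p in seg):
                return True
        return False

    def orientation(window):
        """theta = arctan(m01 / m10) with m_pq = sum of x^p * y^q * I(x, y) over the
        window; atan2 is used here to keep the correct quadrant."""
        m01 = sum(y * p for y, row in enumerate(window) for x, p in enumerate(row))
        m10 = sum(x * p for y, row in enumerate(window) for x, p in enumerate(row))
        return math.atan2(m01, m10)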
After the rotation invariance descriptor and signature calculation module on the PL end obtains the point-pair pixel information, it calculates the 256-bit descriptor and the 12-bit signature value of the feature point at the same time, stores the data carrying the feature point address information and the descriptor information into the rotation invariance descriptor storage unit corresponding to the current frame number, and sends the signature value information to the address calculation module. Meanwhile, the 12-bit signature values obtained each time are concatenated, and once 80 signature values have been collected and concatenated, the 960-bit result is stored in the signature storage unit corresponding to the current frame. The address calculation module obtains the signature value information from the rotation invariance descriptor and signature calculation module, compares it with the signature storage unit holding the concatenated signature values of the previous frame, screens out the feature points to be matched, calculates the addresses at which their descriptors are stored, and transmits the addresses to the Hamming distance calculation module. After the Hamming distance calculation module obtains the descriptor of the current feature point from the rotation invariance descriptor and signature calculation module, it takes the descriptors from the rotation invariance descriptor storage unit holding the previous frame's descriptors, according to the addresses of the feature points to be matched provided by the address calculation module, and calculates the Hamming distances; the descriptor with the smallest Hamming distance belongs to the matched feature point, and its address, together with the address of the current frame feature point calculated this time, is written into the matching result storage unit. When all feature points of the current frame have completed the matching calculation, the matching result is sent to the PS end through the AXI STREAM protocol of the HP interface to complete the subsequent pose calculation.
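The concatenation of 80 signature values of 12-bit width into one 960-bit word can be sketched as follows (the placement of the first signature at the low-order bits is an assumption made only for illustration):

    # Sketch of packing 12-bit signature values into 960-bit signature-storage words.
    SIG_BITS = 12
    SIGS_PER_WORD = 80                 # 80 x 12 = 960 bits per storage address

    def pack_signatures(signatures):
        """Concatenate up to 80 signature values into one 960-bit integer."""
        word = 0
        for idx, sig in enumerate(signatures[:SIGS_PER_WORD]):
            word |= (sig & 0xFFF) << (idx * SIG_BITS)
        return word

    def unpack_signatures(word):
        """Split a 960-bit word back into its 80 signature values."""
        return [(word >> (idx * SIG_BITS)) & 0xFFF for idx in range(SIGS_PER_WORD)]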
Example 2
In this implementation, the signature calculation scheme of the ORB feature point matching method based on hardware acceleration is shown in fig. 3. First, 256 point pairs are selected within the 27×27 window (13×2+1 = 27) in which the feature point is located; these are the same point pairs used for the BRIEF descriptor calculation, they are randomly distributed within the window, and the 256 point pairs of every feature point follow the same distribution. During the signature value calculation, the 256 point pairs are input simultaneously to 3 signature calculation modules that perform different operations, finally producing 3 score values: score 1, score 2 and score 3. Because the 3 score values are calculated from the 256 point pairs of the window around the feature point, the environment information of the feature point is fully extracted, so the difference between the score values of two feature points is positively correlated with the sum of absolute differences of their 27×27 windows. Screening the feature points to be matched by the difference of the score values is therefore similar to template matching, and ultimately reduces feature point mismatching.
In the first signature calculation module, the F function is chosen as a + b, and the operation coefficients of all point pairs are +1, where a and b are the pixel values of the two pixels of a point pair. In the second signature calculation module, the F function is chosen as |a − b|, the operation coefficients of the 1st to 32nd point pairs are −1, and the operation coefficients of the 33rd to 256th point pairs are +1. In the third signature calculation module, the F function is chosen as a − b, the operation coefficients of the 1st to 128th point pairs are −1, and the operation coefficients of the 129th to 256th point pairs are +1. Each of the 3 signature calculation modules multiplies the F-function result of each point pair by the corresponding operation coefficient and accumulates the products, yielding the three score values. This score value calculation endows the point pairs with numerical information that the BRIEF descriptor, obtained only by comparing magnitudes, does not possess, so a data structure over all feature points of one frame can be constructed, and the feature points that need to be matched can be found quickly through this data structure when the next frame is matched, rather than being matched one by one as in the traditional method.
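The three score-value calculations of this example can be sketched as follows (software model only; point_pairs is assumed to hold the 256 pairs of pixel values (a, b) taken from the 27×27 window):

    # Sketch of the three score values: F functions and operation coefficients as above.
    def score_values(point_pairs):
        assert len(point_pairs) == 256
        score1 = sum(a + b for a, b in point_pairs)               # F = a + b, all coefficients +1
        score2 = sum((-1 if i < 32 else 1) * abs(a - b)           # F = |a - b|; pairs 1-32: -1,
                     for i, (a, b) in enumerate(point_pairs))     # pairs 33-256: +1
        score3 = sum((-1 if i < 128 else 1) * (a - b)             # F = a - b; pairs 1-128: -1,
                     for i, (a, b) in enumerate(point_pairs))     # pairs 129-256: +1
        return score1, score2, score3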
After the score values are calculated, they are quantized for dimension reduction to reduce storage space. Score value 1 is quantized with 5 bits, i.e. the quantization result is represented with 5 bits; score value 2 is quantized with 4 bits and score value 3 with 3 bits. For score value 1, the quantization interval is chosen as 0 to 65535 and divided evenly into 32 parts: the first part covers 0 to 2047, the second 2048 to 4095, and so on. Score value 1 is compared with the interval boundaries and the number of the part in which it falls is the quantization result. If score value 1 lies outside the quantization interval, the quantization result is the corresponding boundary part. The quantization intervals of score values 2 and 3 are −32768 to +32767, divided evenly into 16 and 8 parts respectively, and the interval judgment works as for score value 1. Verification shows that this quantization scheme does not lose the original score value information after dimension reduction, achieves an effect similar to PCA dimension reduction, and preserves the accuracy of the screening process.
Finally, the 5-bit data quantized by the score value 1, the 4-bit data quantized by the score value 2 and the 3-bit data quantized by the score value 3 are spliced into 12-bit data to form a signature value corresponding to the feature point.
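The quantization and splicing of the three score values into a 12-bit signature can be sketched as follows (placing the quantized score value 1 in the top bits is an assumption made only for illustration):

    # Sketch of quantizing the score values and splicing them into a 12-bit signature.
    def quantize(value, lo, hi, n_bits):
        """Number of the uniform sub-interval of [lo, hi] that contains value,
        clamped to the boundary intervals when value falls outside [lo, hi]."""
        levels = 1 << n_bits
        step = (hi - lo + 1) // levels
        idx = (value - lo) // step
        return max(0, min(levels - 1, idx))

    def signature_from_scores(score1, score2, score3):
        q1 = quantize(score1, 0, 65535, 5)         # 32 intervals of width 2048
        q2 = quantize(score2, -32768, 32767, 4)    # 16 intervals
        q3 = quantize(score3, -32768, 32767, 3)    # 8 intervals
        return (q1 << 7) | (q2 << 3) | q3          # 5 + 4 + 3 = 12-bit signature value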
Example 3
In this implementation, the circuit design of the address calculation module of the ORB feature point matching method based on hardware acceleration is shown in fig. 4. This module is the core of the system: by comparing the signature value of one feature point of the current frame with the signature values of all feature points of the previous frame, it calculates the addresses of the feature points with the same signature value as the addresses of the feature points to be matched. After receiving the addresses of the feature points to be matched, the Hamming distance calculation module takes out the corresponding descriptors and calculates the Hamming distances. Compared with the traditional method of taking out all descriptors and calculating the Hamming distance for each, the address calculation module screens the feature points to be matched and saves a great deal of computation time.
In this implementation, the signature storage unit of the ORB feature point matching system based on hardware acceleration has 13 addresses, the data width at each address is 960 (80×12) bits, and 80 signatures of 12-bit width are concatenated in the order in which their calculation completes. After concatenation, several signature values can be operated on in parallel at the same time, improving screening efficiency. The feature point descriptors are stored in the rotation invariance descriptor storage unit in the same calculation order, which keeps the storage orders of the signature values and of the descriptors consistent. This correspondence allows the address calculation module to calculate the address of a descriptor in the rotation invariance descriptor storage unit from the position of its signature value in the signature storage unit.
The signature storage unit stores the signature value information of 1040 (13×80) feature points of the previous frame image in total, with the signature values of every 80 feature points stored at one address; the signature storage unit therefore has 13 addresses. The address calculation module reads the signature value S1 of one feature point of the current frame image, screens the 1040 feature points for those that may match S1, and calculates the addresses at which the descriptors of these feature points to be matched are stored.
Suppose the 145th (80+65) feature point is a feature point to be matched; its signature value information is then contained in the data at address 2 of the signature storage unit. The address calculation module reads the data at address 2 of the signature storage unit into the multi-signature register, splits the multi-signature register into 80 signature values of 12-bit width, and compares each with the signature value S1; if a comparison is equal it outputs 1, otherwise 0, finally producing an 80-bit comparison result that is stored in the comparison result register. When the 145th feature point is the feature point to be matched, the 65th bit of the comparison result register is 1. The first non-zero value judging unit obtains the contents of the comparison result register, and the corresponding level information is the signature storage unit address "2". The first non-zero value judging unit splits the 80-bit comparison result into left and right halves of 40 bits each. If a half is 0 it is discarded directly; if it is non-zero, the data carrying the level information, the left/right identification information and the 40 non-zero bits are transferred to FIFO memory A1. The left/right identification information is 1 if the data is the right half of the original 80-bit data, and 0 otherwise.
By adopting a non-zero judgment method, the data with the bit width of 40 and the numerical value of 0 are abandoned, which means that 40 feature points corresponding to the data will not participate in subsequent Hamming distance calculation, and the time of integral operation of the system is saved. The nonzero judgment of one data is completed in one clock period, so that the screening efficiency is ensured. The hierarchy information, the left and right identification information will be used in subsequent address calculations.
When the 145th feature point is a feature point to be matched, the first non-zero value judging unit concatenates the information with the 40-bit data into a 45-bit datum and transfers it to FIFO memory A1, where the level information has a bit width of 4 and each remaining information field has a bit width of 1. FIFO memory A1 is then non-empty, so the second non-zero value judging unit reads the datum from FIFO memory A1 and splits the 40-bit data into left and right halves of 20 bits each. If a half is 0 it is discarded directly; if it is non-zero, the data carrying the level information, the left/right identification of the previous stage, the left/right identification of this stage and the 20 non-zero bits are transferred to FIFO memory A2. That is, the second non-zero value judging unit concatenates "0010", "1", "1" and the 20-bit data into a 26-bit datum and transfers it to FIFO memory A2. Proceeding in this way, when the 145th feature point is the feature point to be matched, the data transfer unit obtains from the last FIFO memory A a 12-bit datum formed by concatenating "0010", "1", "1", "0", "0" and "XXXX1". Here "0010" indicates the data at signature storage unit address 2; "1", "1", "0", "0" are the left/right identifications of the non-zero value judging units; and the "1" in "XXXX1" indicates that the signature value of the 145th feature point equals the signature value S1 of the current frame feature point. This 12-bit datum is transferred to a left shift unit, which shifts the non-information bits, namely "XXXX1", to the left and counts the left shifts. When the lowest bit of the shifted data is non-zero, the information bits and the left shift count are transferred to the corresponding FIFO memory B; that is, "0010", "1", "1", "0", "0", "101" are concatenated into an 11-bit datum and transferred to FIFO memory B. The address calculation unit reads the datum from FIFO memory B and calculates the address ADDR of the feature point to be matched in the rotation invariance descriptor storage unit. The calculation formula is as follows:
Addr = (level information − 1) × 80 + first-level left/right flag × 40 + second-level left/right flag × 20 + third-level left/right flag × 10 + fourth-level left/right flag × 5 + number of left shifts;
For the 11-bit datum formed by concatenating "0010", "1", "1", "0", "0", "101", substituting into the formula gives (2 − 1) × 80 + 1 × 40 + 1 × 20 + 0 × 10 + 0 × 5 + 5 = 145, i.e. the address of the 145th feature point in the rotation invariance descriptor storage unit.
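The address formula can be checked against this worked example with the following short sketch (level and flag values as in the text; names are illustrative):

    # Sketch verifying the address formula for the worked example above.
    def descriptor_address(level, flags, shifts):
        """level: signature storage address (1-based); flags: left/right identification
        of the four non-zero judging stages (1 = right half); shifts: left shift count."""
        f1, f2, f3, f4 = flags
        return (level - 1) * 80 + f1 * 40 + f2 * 20 + f3 * 10 + f4 * 5 + shifts

    # "0010" = level 2, flags 1, 1, 0, 0, "101" = 5 left shifts  ->  feature point 145.
    assert descriptor_address(2, (1, 1, 0, 0), 5) == 145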
The address calculation module fetches 960 bits of data from the signature storage unit every clock cycle, and the modules operate as a pipeline to complete the screening, so this process takes little time. Because the number of feature points is large, this screening mechanism greatly reduces the computational load of the system. For the same resource consumption, a traditional feature point matching system cannot reach the speed of the ORB feature point matching system based on hardware acceleration even if the parallelism of the Hamming distance calculation is increased; with the same resource consumption, the ORB feature point matching system based on hardware acceleration is at least 5 times faster than a traditional feature point matching system that raises its computational parallelism.
The above embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention, but any modifications, equivalents, improvements, etc. within the principle of the idea of the present invention should be included in the scope of protection of the present invention.

Claims (7)

1. An ORB feature point matching system based on hardware acceleration, characterized in that: the system comprises a camera module, a PS end and a PL end; the camera module is connected with the PS end, and the PS end is connected with the PL end;
The PS end comprises an image storage module, a direction vector calculation module and a pose calculation module; the PL end comprises a FAST algorithm execution module, an address calculation module, a rotation invariance descriptor and signature calculation module, a Hamming distance calculation module, a rotation invariance descriptor storage unit, a signature storage unit and a matching result storage unit;
The rotation invariance descriptor storage unit comprises a first rotation invariance descriptor storage unit and a second rotation invariance descriptor storage unit; the signature storage unit comprises a first signature storage unit and a second signature storage unit;
the connection mode of each module in the characteristic point matching system is as follows:
The camera module is connected with the image storage module, the image storage module is respectively connected with the FAST algorithm execution module and the direction vector calculation module, the FAST algorithm execution module is connected with the direction vector calculation module, the direction vector calculation module is connected with the rotation invariance descriptor and the signature calculation module, the rotation invariance descriptor and the signature calculation module are connected with the address calculation module, the Hamming distance calculation module, the signature storage unit and the rotation invariance descriptor storage unit, the signature storage unit is connected with the address calculation module, the address calculation module and the rotation invariance descriptor storage unit are connected with the Hamming distance calculation module, the Hamming distance calculation module is connected with the matching result storage unit, and the matching result storage unit is connected with the pose calculation module;
The camera module collects pictures and transmits the pictures to the image storage module;
The image storage module performs window scanning operation on the acquired pictures to obtain image blocks, and transmits the image blocks to the FAST algorithm execution module;
The FAST algorithm execution module screens out characteristic points according to the relation between the central pixel point of the image block and the pixel values of the neighborhood circle pixel points of the central pixel point, calculates the moment of the image block, and transmits the coordinates and the moment of the characteristic points to the direction vector calculation module;
The direction vector calculation module calculates the rotation angle of the feature points, extracts pixel values of a plurality of point pairs for calculating BRIEF descriptors and signature values from the image storage module according to the feature point coordinates and the rotation angle, and transmits the feature point coordinates and the pixel values of the plurality of point pairs to the rotation invariance descriptors and the signature calculation module;
the rotation invariance descriptor and signature calculation module calculates the rotation invariance descriptor and the signature, and transmits each single signature value to the address calculation module; meanwhile, the rotation invariance descriptors and the concatenation of several signature values are written, alternately according to the order of picture frame processing, into one pair of rotation invariance descriptor storage unit and signature storage unit, so that the rotation invariance descriptors and signature information calculated for the previous frame remain in the other pair of rotation invariance descriptor storage unit and signature storage unit;
the address calculation module takes the signature values of the feature points of the current frame and of the previous frame from the rotation invariance descriptor and signature calculation module and from the signature storage unit respectively, discards the feature points whose signature values differ, calculates the addresses of the storage units holding the rotation invariance descriptors of the feature points whose signature values are the same, and transmits these addresses to the Hamming distance calculation module;
The Hamming distance calculation module extracts the corresponding rotation invariance descriptors from the first and second rotation invariance descriptor storage units according to the addresses transmitted by the address calculation module, completes the Hamming distance calculation and the comparison of the distances, and outputs the feature point pair with the smallest Hamming distance to the matching result storage unit as the matching result;
the pose calculation module reads the data of the matching result storage unit and completes pose calculation according to the matching relation of the image feature points;
The address calculation module comprises a single signature register, a multi-signature register, a comparator, a comparison result register, a non-zero value judgment unit, a FIFO memory A, a data transmission unit, a left shift unit, a FIFO memory B and an address calculation unit; the comparator includes a first comparator to an mth comparator, the non-zero value judging unit includes a first non-zero value judging unit to a kth non-zero value judging unit, the FIFO memory A comprises a FIFO memory A1 to a FIFO memory Ak, the left shift unit comprises a first left shift unit to an ith left shift unit, and the FIFO memory B comprises a FIFO memory B1 to a FIFO memory Bi;
the single signature register is connected with the rotation invariance descriptor and the signature calculation module, and the multi-signature register is connected with a signature storage unit for storing the signature of the previous frame of image; the multi-signature register and the single-signature register are connected with a comparator, the comparator is connected with a comparison result register, the comparison result register is connected with a first non-zero value judging unit, the first non-zero value judging unit is connected with a FIFO memory A1, the FIFO memory A1 is connected with a second non-zero value judging unit, the second non-zero value judging unit is connected with a FIFO memory A2;
The address calculation module operates as follows: the value output by the rotation invariance descriptor and signature calculation module into the single signature register is compared with each of the m parts into which the value output by the signature storage unit into the multi-signature register is cut, yielding an m-bit comparison result; specifically, if the value of the single signature register is the same as a given part of the multi-signature register, the corresponding output bit is 1, and if it differs, the corresponding output bit is 0;
The comparison result register sends the m-bit comparison result to a first non-zero value judging unit, and the first non-zero value judging unit counts the times of obtaining the comparison result and transmits the times as level information backwards; meanwhile, the first non-zero value judging unit splits m-bit data into front and back parts and judges whether the m-bit data is all-zero data or not; discarding if the data is all zeros; if the data is not all zero, transmitting the data carrying the level information and the front and rear identification information of the data to a FIFO memory A1;
S10: when the FIFO memory A1 is not empty, the second non-zero value judging unit reads the data from the FIFO memory A1, further splits the data already split by the first non-zero value judging unit into a front part and a rear part, and judges whether each part is all-zero data; a part that is all zeros is discarded, and a part that is not all zero is transmitted, together with the level information, the front/rear identification information of the previous stage and the front/rear identification information of the current stage, to the FIFO memory A2;
the second to the kth non-zero value judging units all perform the same operation as in S10, so that pipelined operation of the system is achieved and data whose value is 0 are screened out;
when the FIFO memory Ak is not empty, the data transmission unit reads the data in the FIFO memory Ak;
the data transmission unit sends the data it obtains to the first to the ith left shift units in turn, so that the data are processed in a pipeline; each left shift unit shifts the non-information bits of the data it receives to the left and, whenever the highest bit is non-zero, transfers the number of left shifts performed together with the information bit data to the corresponding FIFO memory B;
when any one of the FIFO memories B1 to Bi is not empty, the address calculation unit reads the data from that FIFO memory.
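
Purely for illustration, the two Python fragments below give a minimal software model of the address calculation pipeline described above. All names, the signature bit width sig_bits, the parameters m and k, and the front/rear flag encoding (1 for the front/upper part, 0 for the rear/lower part) are assumptions of the sketch, not values fixed by the claim.

    # Hypothetical model of the comparator stage (m comparators working on the
    # single signature register and the multi-signature register).
    def compare_signatures(single_sig: int, multi_sig: int, m: int, sig_bits: int) -> int:
        """Return the m-bit comparison result: bit i is 1 when the i-th signature
        packed into the multi-signature register equals the single signature."""
        mask = (1 << sig_bits) - 1
        result = 0
        for i in range(m):                                # one bit per comparator
            if ((multi_sig >> (i * sig_bits)) & mask) == single_sig:
                result |= 1 << i
        return result

The non-zero screening cascade and the left shift units can then be modelled one comparison word at a time; here the batch argument stands in for the level information counted by the first non-zero value judging unit:

    # Hypothetical model of the k non-zero value judging units and the left shift
    # units: for one m-bit comparison word, recover the information from which the
    # address calculation unit can rebuild the position of every 1-bit.
    def screen_comparison_word(comp_word: int, m: int, k: int, batch: int):
        """Return (batch, flags, shifts) triples: 'flags' holds the front/rear
        decision taken at each of the k splitting layers, and 'shifts' is the left
        shift count reported by a left shift unit."""
        queue = [(comp_word, m, [])]                      # (data, current width, flags so far)
        for _ in range(k):                                # one splitting layer per judging unit
            nxt = []
            for data, width, flags in queue:
                half = width // 2
                hi, lo = data >> half, data & ((1 << half) - 1)
                if hi:                                    # front (upper) part kept if not all zero
                    nxt.append((hi, half, flags + [1]))
                if lo:                                    # rear (lower) part kept if not all zero
                    nxt.append((lo, half, flags + [0]))
            queue = nxt
        results = []
        for data, width, flags in queue:                  # left shift unit behaviour
            shifts = 0
            while data:
                if data & (1 << (width - 1)):             # highest bit is non-zero
                    results.append((batch, flags, shifts))
                data = (data << 1) & ((1 << width) - 1)
                shifts += 1
        return results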
2. The hardware acceleration-based ORB feature point matching system of claim 1, wherein: the multi-signature register is formed by splicing the signature values of m feature points of the previous frame, so that each of the m parts obtained when the register is truncated is the signature value of the corresponding feature point of the previous frame.
3. The hardware acceleration-based ORB feature point matching system of claim 2, wherein: the data in the FIFO memory Ak contain all the level information and the front/rear identification information of each layer.
4. The hardware acceleration-based ORB feature point matching system according to claim 3, wherein: the information bit data comprise all the level information and the front/rear identification information of each layer; and when the number of left shifts equals the bit width of the non-information bit data, the operation on one data item is finished and the left shift unit waits for the next data item transmitted by the data transmission unit.
5. The hardware acceleration-based ORB feature point matching system of claim 4, wherein: when the FIFO memories B1 to Bi are not empty, the address calculation unit reads the data in the FIFO memories, the data comprising all the level information, the front/rear identification information of each layer and the number of left shifts; these three pieces of information are multiplied by their corresponding weight coefficients and accumulated to calculate the address, in the rotation invariance descriptor storage unit, of the feature point to be matched, and this address is output.
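
Continuing the sketch given after claim 1 (same assumed packing order and flag encoding), the weighted accumulation of this claim could take the following form; since the claim only requires that the three pieces of information be multiplied by weight coefficients and accumulated, the specific weights below are an assumption consistent with that earlier model:

    # Hypothetical weight assignment for the address calculation unit.
    def descriptor_address(batch: int, flags: list, shifts: int, m: int, k: int) -> int:
        seg = m >> k                                      # segment width after k halvings
        index = seg - 1 - shifts                          # contribution of the left shift count
        for level, front in enumerate(flags):             # per-layer front/rear flag
            index += front * (m >> (level + 1))           # weight of layer 'level'
        return batch * m + index                          # level information weighted by m

For example, with m = 16 and k = 2, a comparison word whose only set bit is bit 13 yields flags [1, 1] and a shift count of 2 in the earlier sketch, and the formula above recovers index 13; adding batch × 16 then locates the descriptor of that previous-frame feature point, assuming descriptors are stored in the same order in which their signatures were packed.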
6. The matching method of the hardware acceleration-based ORB feature point matching system according to claim 5, wherein the method comprises the following steps:
Step A, the camera module in the ORB feature point matching system collects a frame of RGB image, which the system receives;
Step B, detecting feature points in the image by using a FAST algorithm, and calculating ORB descriptors corresponding to the feature points;
Step C, calculating signature values of the feature points;
Step D, extracting the signature value s1 and the descriptor d1 of the next feature point of the current frame image, this feature point serving as the first point to be matched;
Step E, inputting the signature value s2 of the next feature point of the previous frame image, this feature point serving as the second point to be matched;
Step F, judging whether the signature value s1 is the same as the signature value s2; if they are different, returning to step E; if they are the same, proceeding to step G;
Step G, inputting the descriptor d2 of the second point to be matched;
Step H, calculating the Hamming distance between the descriptor d1 and the descriptor d2;
Step I, judging whether all the feature points of the previous frame have been traversed; if not, returning to step E; if yes, obtaining the feature point of the previous frame image that matches the first point to be matched according to the minimum value of the Hamming distance, and proceeding to step J;
Step J, judging whether all the feature points of the current frame have been traversed; if not, returning to step D; if yes, proceeding to step K;
Step K, outputting the coordinate pairs of all the matched feature points of the current frame image and the previous frame image.
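
For illustration only, the following minimal sketch models the matching loop of steps D to K in software. Feature points are assumed to be (signature, descriptor, coordinate) tuples with the BRIEF descriptor held as an integer; all names are placeholders, and the match is taken at the smallest Hamming distance, as in the conventional ORB criterion.

    # Hypothetical software model of steps D to K (coarse screening by signature,
    # then Hamming matching on the surviving candidates).
    def match_frames(curr_points, prev_points):
        matches = []
        for sig1, d1, xy1 in curr_points:                 # step D / outer loop closed by step J
            best = None
            for sig2, d2, xy2 in prev_points:             # step E / inner loop closed by step I
                if sig1 != sig2:                          # step F: coarse screening
                    continue
                dist = bin(d1 ^ d2).count("1")            # steps G and H: Hamming distance
                if best is None or dist < best[0]:        # keep the smallest distance
                    best = (dist, xy2)
            if best is not None:
                matches.append((xy1, best[1]))
        return matches                                    # step K: matched coordinate pairs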
7. The matching method of the hardware acceleration-based ORB feature point matching system according to claim 6, wherein step C is specifically:
Step C.1, selecting j point pairs from the P×P window where the feature point is located, which are used for calculating the BRIEF descriptor and the score values required for the signature value;
wherein k score values are calculated from the point pairs, and the signature value is formed by quantizing the k score values;
Step C.2, inputting the point pair pixel values selected in step C.1 into k signature calculation modules simultaneously, each signature calculation module correspondingly outputting one score value, so that k score values are finally output; specifically:
in each signature calculation module, an F function is first applied to each pair of feature point pixel values, the operation results are multiplied by operation coefficients, and the products are accumulated to form one score value;
wherein the F functions used for calculating the same score value are the same, and the F functions used for calculating different score values are different;
Step C.3, uniformly dividing a preset interval from the value min to the value max into 2^n sub-intervals, and replacing the score value with the number of the sub-interval in which the calculated score value falls, thereby completing the quantization of one score value;
wherein the 2^n sub-intervals are numbered 0, 1, 2, ..., 2^n-1, and n is the quantization bit width;
Step C.4, splicing the k quantized score values to form the signature value corresponding to the feature point.
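
As an illustrative sketch of steps C.1 to C.4: the claim fixes neither the F functions, the operation coefficients, nor the preset interval [min, max], so the placeholder below accepts them all as arguments; only the accumulate-quantize-splice structure follows the claim.

    # Hypothetical model of one feature point's signature value computation.
    def signature_value(pairs, f_funcs, coeffs, n, lo, hi):
        """pairs   : the j (pixel_a, pixel_b) pairs taken from the P x P window (step C.1)
        f_funcs : the k F functions, one per signature calculation module
        coeffs  : k lists of j operation coefficients
        n       : quantization bit width; [lo, hi] is the preset score interval
        Returns the signature formed by splicing the k quantized score values."""
        num_bins = 1 << n                                 # 2^n quantization sub-intervals
        width = (hi - lo) / num_bins
        sig = 0
        for f, c in zip(f_funcs, coeffs):                 # the k signature calculation modules
            score = sum(w * f(a, b) for w, (a, b) in zip(c, pairs))        # step C.2
            bin_no = min(num_bins - 1, max(0, int((score - lo) / width)))  # step C.3
            sig = (sig << n) | bin_no                     # step C.4: splice the quantized values
        return sig

A typical placeholder choice would be F functions such as f(a, b) = 1 if a > b else -1 with coefficients of ±1, but these are not specified by the claim.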
CN202111618882.4A 2021-12-28 2021-12-28 ORB feature point matching system and method based on hardware acceleration Active CN114283065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111618882.4A CN114283065B (en) 2021-12-28 2021-12-28 ORB feature point matching system and method based on hardware acceleration

Publications (2)

Publication Number Publication Date
CN114283065A CN114283065A (en) 2022-04-05
CN114283065B (en) 2024-06-11

Family

ID=80876568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111618882.4A Active CN114283065B (en) 2021-12-28 2021-12-28 ORB feature point matching system and method based on hardware acceleration

Country Status (1)

Country Link
CN (1) CN114283065B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204660A (en) * 2016-07-26 2016-12-07 华中科技大学 A kind of Ground Target Tracking device of feature based coupling
CN106683046A (en) * 2016-10-27 2017-05-17 山东省科学院情报研究所 Real-time image splicing method for police unmanned aerial vehicle investigation and evidence obtaining

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101511087B1 (en) * 2013-11-13 2015-04-10 한국원자력연구원 Method for rotation invariant feature matching for the real-time processing from extracted features from images
CN109859225A (en) * 2018-12-24 2019-06-07 中国电子科技集团公司第二十研究所 A kind of unmanned plane scene matching aided navigation localization method based on improvement ORB Feature Points Matching
CN109919825B (en) * 2019-01-29 2020-11-27 北京航空航天大学 ORB-SLAM hardware accelerator
CN110414533B (en) * 2019-06-24 2023-09-05 东南大学 Feature extraction and matching method for improving ORB
CN110675437B (en) * 2019-09-24 2023-03-28 重庆邮电大学 Image matching method based on improved GMS-ORB characteristics and storage medium
CN111667506B (en) * 2020-05-14 2023-03-24 电子科技大学 Motion estimation method based on ORB feature points
CN111898428A (en) * 2020-06-23 2020-11-06 东南大学 Unmanned aerial vehicle feature point matching method based on ORB

Similar Documents

Publication Publication Date Title
CN104881666B (en) A kind of real-time bianry image connected component labeling implementation method based on FPGA
US20190303731A1 (en) Target detection method and device, computing device and readable storage medium
CN112380148B (en) Data transmission method and data transmission device
CN103098077A (en) Real-time video frame pre-processing hardware
CN101236601A (en) Image recognition accelerator and MPU chip possessing image recognition accelerator
CN110489428B (en) Multi-dimensional sparse matrix compression method, decompression method, device, equipment and medium
US11704543B2 (en) Neural network hardware acceleration with stochastic adaptive resource allocation
WO2022166258A1 (en) Behavior recognition method and apparatus, terminal device, and computer-readable storage medium
WO2022242122A1 (en) Video optimization method and apparatus, terminal device, and storage medium
Park et al. A 182 mW 94.3 f/s in Full HD Pattern-Matching Based Image Recognition Accelerator for an Embedded Vision System in 0.13-µm CMOS Technology
Yuan et al. A two-stage hog feature extraction processor embedded with SVM for pedestrian detection
CN113327319A (en) Complex scene modeling method and device, server and readable storage medium
CN114283065B (en) ORB feature point matching system and method based on hardware acceleration
WO2022165675A1 (en) Gesture recognition method and apparatus, terminal device, and readable storage medium
WO2023184754A1 (en) Configurable real-time disparity point cloud computing apparatus and method
WO2020107267A1 (en) Image feature point matching method and device
CN111445019B (en) Device and method for realizing channel shuffling operation in packet convolution
US8072451B2 (en) Efficient Z testing
US20080107336A1 (en) Method and device for extracting a subset of data from a set of data
CN113468935B (en) Face recognition method
CN114140303A (en) Image watermark removing method and device, electronic equipment and storage medium
CN114549429B (en) Depth data quality evaluation method and device based on hypergraph structure
CN108052482B (en) Method and system for communication between GPUs
Li et al. Improving PMVS algorithm for 3D scene reconstruction from sparse stereo pairs
Li et al. Stereo Matching Accelerator With Re-Computation Scheme and Data-Reused Pipeline for Autonomous Vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant