CN106767841A - Visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling - Google Patents

Visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling

Info

Publication number
CN106767841A
Authority
CN
China
Prior art keywords
point
model
adaptive
feature
cubature Kalman
Prior art date
Legal status
Pending
Application number
CN201611061729.5A
Other languages
Chinese (zh)
Inventor
刘宗明
曹姝清
张宇
孙俊
卢山
Current Assignee
Shanghai Aerospace Control Technology Institute
Original Assignee
Shanghai Aerospace Control Technology Institute
Priority date
Filing date
Publication date
Application filed by Shanghai Aerospace Control Technology Institute filed Critical Shanghai Aerospace Control Technology Institute
Priority to CN201611061729.5A
Publication of CN106767841A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/24Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for cosmonautical navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling, comprising the following steps: S1, establishing the time-prediction and measurement-update models of the adaptive cubature Kalman filter; S2, establishing the state model and measurement model of the vision system; S3, extracting and describing target features with the ORB feature operator; S4, completing initial feature matching with the minimum-Hamming-distance criterion and fine feature matching with single-point RANSAC, substituting the coordinates of the matched points into the full-state model, and completing the final time prediction and measurement update. The method effectively overcomes severe system nonlinearity and time-varying noise statistics and, while keeping feature extraction real-time, also preserves the rotational and scale invariance of the feature points.

Description

Visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling
Technical field
The present invention relates to autonomous navigation technology for moving bodies, and more particularly to a visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling.
Background technology
The present invention relates to autonomous navigation technology for moving bodies, and in particular to cubature Kalman filtering, suboptimal unbiased maximum a posteriori noise estimation, single-point RANSAC feature matching and vision measurement. It has broad application prospects in fields such as ultra-close-range relative navigation of spacecraft, autonomous navigation of low-altitude unmanned aerial vehicles, real-time autonomous localization of mobile robots, and autonomous navigation of unmanned underwater vehicles.
Among sensors, vision sensors are closest to human perception and offer rich information, abundant features, a short sampling period, convenience, and low cost. In recent years, with progress in image processing technology and the demands of space exploration, spacecraft ultra-close-range relative navigation based on visual information has increasingly become a research focus in the space field. The most important characteristics of computer vision are that image features are easy to extract and numerous, and that feature association between images is easier than with other sensor data. Research on computer vision navigation can therefore not only provide technical support for fields such as rendezvous and docking, on-orbit servicing, space debris removal and planetary rover navigation, but can also be widely popularized and applied to autonomous navigation on a variety of platforms.
Based on the above, the present invention proposes a visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling.
Summary of the invention
It is an object of the present invention to provide a visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling that effectively overcomes severe system nonlinearity and time-varying noise statistics while keeping feature extraction real-time and preserving the rotational and scale invariance of the feature points.
In order to achieve the above object, the present invention is realized by the following technical solution:
A visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling, characterized in that it comprises the following steps:
S1, establishing the time-prediction and measurement-update models of the adaptive cubature Kalman filter;
S2, establishing the state model and measurement model of the vision system;
S3, extracting and describing target features with the ORB feature operator;
S4, completing initial feature matching with the minimum-Hamming-distance criterion and fine feature matching with single-point RANSAC, substituting the coordinates of the matched points into the full-state model, and completing the final time prediction and measurement update.
In said step S1, the time-prediction model, the measurement-update model, the process noise model and the measurement noise model are established using a suboptimal unbiased maximum a posteriori estimator and a cubature Kalman filter.
In said step S2, the full-state motion model and measurement model of the six-degree-of-freedom vision system are established; the full-state information includes the vision system state and the feature point information.
In said step S3, feature extraction and description are carried out on the target using the ORB feature operator.
In said step S4, the initial matching of ORB feature points is completed using the binary minimum-Hamming-distance method, fine matching of the feature points is completed using single-point random sampling, the corresponding point coordinates in the full-state model are replaced and updated, and the final time prediction and measurement update are completed.
Compared with the prior art, the present invention has the following advantages:
1. In filter design, analysis of the camera noise variance matrix shows that camera imaging noise is a class of noise related to the position of the imaging point; that is, the measurement noise variance characteristics change as the imaging point changes. The proposed adaptive CKF algorithm can therefore more effectively overcome severe system nonlinearity and time-varying noise statistics.
2. In feature extraction, target feature points are extracted with the FAST operator and described with the BRIEF operator to form ORB feature points, which ensures real-time feature extraction while preserving the rotational and scale invariance of the feature points.
3. In feature registration, the state prediction obtained by the CKF is fully used as prior knowledge; with the single-point RANSAC technique, only one datum needs to be chosen from the observation set to complete the fine registration iteration and the filter update. These results have important guiding significance for research on vision-based autonomous navigation technology.
Brief description of the drawings
Fig. 1 is a schematic diagram of the perspective projection coordinate system of the vision camera of the present invention;
Fig. 2 is a schematic diagram of stereo vision imaging of the present invention;
Fig. 3 is a flow chart of the visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling of the present invention.
Specific embodiment
As shown in Figs. 1 to 3, the present invention provides a visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling, comprising the following steps:
S1, establishing the time-prediction and measurement-update models of the adaptive cubature Kalman filter;
S2, establishing the state model and measurement model of the vision system;
S3, extracting and describing target features with the ORB feature operator;
S4, completing initial feature matching with the minimum-Hamming-distance criterion and fine feature matching with single-point RANSAC, substituting the coordinates of the matched points into the full-state model, and completing the final time prediction and measurement update.
In said step S1, the time-prediction model, the measurement-update model, the process noise model and the measurement noise model are established using a suboptimal unbiased maximum a posteriori estimator and a cubature Kalman filter. Consider a nonlinear system with additive noise whose state equation and measurement equation are as follows:
x_k = f(x_{k-1}, u_{k-1}) + w_{k-1}   (1)
z_k = h(x_k, u_k) + v_k   (2)
where x_k ∈ R^n is the state of the system at time k, u_k ∈ R^n is the system input, z_k ∈ R^m is the system measurement, and w_{k-1} and v_k are uncorrelated zero-mean Gaussian white noises with covariance matrices Q_{k-1} and R_k respectively. In the design of the CKF filter it is assumed that, at time k, the posterior density of time k-1 is known, p(x_{k-1} | z_{1:k-1}) = N(x̂_{k-1}, P_{k-1}), where z_{1:k-1} denotes the observations from the initial time to time k-1, x̂_{k-1} is the state estimate at time k-1, and P_{k-1} is the state covariance matrix at time k-1. The state estimation procedure for time k is then as follows:
(1) Time prediction
1) Calculate the cubature points
X_{i,k-1} = S_{k-1} ξ_i + x̂_{k-1},  i = 1, ..., 2n
where P_{k-1} = S_{k-1} S_{k-1}^T, ξ_i = √n [1]_i, and [1]_i denotes the i-th column of [I_n, -I_n].
2) Propagate the cubature points
X*_{i,k} = f(X_{i,k-1}, u_{k-1}),  i = 1, ..., 2n
3) Estimate the predicted mean x̂_{k|k-1} and covariance matrix P_{k|k-1}
x̂_{k|k-1} = (1/2n) Σ_{i=1}^{2n} X*_{i,k}
P_{k|k-1} = (1/2n) Σ_{i=1}^{2n} X*_{i,k} (X*_{i,k})^T - x̂_{k|k-1} x̂_{k|k-1}^T + Q_{k-1}
(2) Measurement update
1) Calculate the cubature points
X_{i,k} = S_{k|k-1} ξ_i + x̂_{k|k-1},  i = 1, ..., 2n, where P_{k|k-1} = S_{k|k-1} S_{k|k-1}^T
2) Propagate the cubature points
Z_{i,k} = h(X_{i,k}, u_k),  i = 1, ..., 2n   (9)
3) Calculate the measurement prediction ẑ_{k|k-1}, the innovation covariance P_{zz,k} and the cross-covariance matrix P_{xz,k}
ẑ_{k|k-1} = (1/2n) Σ_{i=1}^{2n} Z_{i,k}
P_{zz,k} = (1/2n) Σ_{i=1}^{2n} Z_{i,k} Z_{i,k}^T - ẑ_{k|k-1} ẑ_{k|k-1}^T + R_k
P_{xz,k} = (1/2n) Σ_{i=1}^{2n} X_{i,k} Z_{i,k}^T - x̂_{k|k-1} ẑ_{k|k-1}^T
4) Calculate the measurement update
x̂_k = x̂_{k|k-1} + K_k (z_k - ẑ_{k|k-1})
P_k = P_{k|k-1} - K_k P_{zz,k} K_k^T
where the filter gain is K_k = P_{xz,k} P_{zz,k}^{-1}.
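For illustration, the following is a minimal numpy sketch of the time-prediction and measurement-update steps above. It is a sketch under stated assumptions, not the patent's implementation: the function and variable names (ckf_predict, ckf_update, f, h) are chosen here, and the control input u is omitted for brevity.

```python
import numpy as np

def ckf_predict(x, P, f, Q):
    """CKF time prediction: x (n,), P (n, n), transition function f, process noise Q."""
    n = x.size
    S = np.linalg.cholesky(P)                                   # P = S S^T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])        # cubature directions xi_i
    Xi = x[:, None] + S @ xi                                    # cubature points X_{i,k-1}
    Xs = np.column_stack([f(Xi[:, i]) for i in range(2 * n)])   # propagated points X*_{i,k}
    x_pred = Xs.mean(axis=1)
    P_pred = Xs @ Xs.T / (2 * n) - np.outer(x_pred, x_pred) + Q
    return x_pred, P_pred

def ckf_update(x_pred, P_pred, z, h, R):
    """CKF measurement update; also returns the innovation and P_zz,
    which the adaptive noise estimator sketched further below reuses."""
    n = x_pred.size
    S = np.linalg.cholesky(P_pred)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    Xi = x_pred[:, None] + S @ xi                               # cubature points X_{i,k}
    Zi = np.column_stack([h(Xi[:, i]) for i in range(2 * n)])   # propagated points Z_{i,k}
    z_pred = Zi.mean(axis=1)
    P_zz = Zi @ Zi.T / (2 * n) - np.outer(z_pred, z_pred) + R   # innovation covariance
    P_xz = Xi @ Zi.T / (2 * n) - np.outer(x_pred, z_pred)       # cross covariance
    K = P_xz @ np.linalg.inv(P_zz)                              # filter gain K_k
    innov = z - z_pred
    x_new = x_pred + K @ innov
    P_new = P_pred - K @ P_zz @ K.T
    return x_new, P_new, K, innov, P_zz
```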
(3) Process noise and measurement noise
When the noise statistics of the system are inaccurate or time-varying, the estimation accuracy of the CKF decreases and the filter may even diverge. The Sage-Husa noise statistics estimator is a suboptimal unbiased maximum a posteriori estimator and is suited to estimating noise whose statistical properties are constant. Here, on the basis of the CKF, the noise is estimated with the Sage-Husa estimator, thereby realizing the adaptive CKF algorithm. The essence of the Sage-Husa estimator is to estimate the noise statistics from white-noise samples: for zero-mean process and measurement noise, the variance estimates are obtained as sample averages over all past epochs.
Writing the innovation as ε_k = z_k - ẑ_{k|k-1} and substituting the filter gain K_k into the sample average, then taking expectations on both sides, shows that the direct estimate of Q_k is biased; removing this bias gives the suboptimal unbiased estimate of Q_k and its recursive form, and the suboptimal unbiased estimate of R_k is obtained in the same way from the innovation sequence.
To track noise whose statistical properties are time-varying, in the manner of the Sage-Husa filter a fading-memory index d_{k-1} is introduced into the recursions:
d_{k-1} = (1 - b) / (1 - b^k)
where the forgetting factor b takes values in the range [0.95, 0.995].
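As an illustration of the fading-memory recursion, here is a minimal sketch of Sage-Husa style noise adaptation in the form commonly combined with a CKF. It does not reproduce the patent's exact equations; the argument names Pzz_no_R and Pxx_pred_no_Q (the predicted innovation and state covariances before R and Q are added) are assumptions made for the sketch.

```python
import numpy as np

def fading_factor(k, b=0.97):
    """Fading-memory weight d_{k-1} = (1 - b) / (1 - b**k), with b in [0.95, 0.995]."""
    return (1.0 - b) / (1.0 - b ** k)

def adapt_R(R_prev, innov, Pzz_no_R, k, b=0.97):
    """Recursive estimate of the measurement noise covariance R_k from the innovation."""
    d = fading_factor(k, b)
    return (1.0 - d) * R_prev + d * (np.outer(innov, innov) - Pzz_no_R)

def adapt_Q(Q_prev, K, innov, P_new, Pxx_pred_no_Q, k, b=0.97):
    """Recursive estimate of the process noise covariance Q_k."""
    d = fading_factor(k, b)
    return (1.0 - d) * Q_prev + d * (K @ np.outer(innov, innov) @ K.T
                                     + P_new - Pxx_pred_no_Q)
```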
In said step S2, the full-state motion model and measurement model of the six-degree-of-freedom vision system are established; the full-state information includes the vision system state and the feature point information.
(1) State model
The motion state of the vision system is equivalently described by the motion state of its optical centre: the change of the spatial position of the optical centre is obtained through translation and rotation, translation and rotation are obtained from the linear velocity and the angular velocity, and a certain amount of motion disturbance is inevitably present during the motion. The present invention describes the motion of the vision system with 13 variables: the position r^w of the optical centre in the world frame, the attitude quaternion q^{wc} from the camera (visual) frame to the world frame, the linear velocity v^w of the optical centre in the world frame, and the angular velocity ω^c of the optical centre. To simplify the model, a uniform-motion (constant-velocity) model is adopted: the unknown linear and angular accelerations are zero-mean, mutually independent Gaussian processes that produce velocity impulses V^w and Ω^c over each time step. The state model of the vision system over a time step Δt is therefore
r^w_{k+1} = r^w_k + (v^w_k + V^w_k) Δt
q^{wc}_{k+1} = q^{wc}_k ⊗ q((ω^c_k + Ω^c_k) Δt)
v^w_{k+1} = v^w_k + V^w_k
ω^c_{k+1} = ω^c_k + Ω^c_k
where k and k+1 denote times k and k+1, and q(·) maps a rotation vector to the corresponding quaternion. During the motion of the vision system, whenever a new feature y_i is detected it is added to the state vector to form the full state space:
x_k = (x_{v,k}  y_{1,k}  y_{2,k}  ...)^T   (26)
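For illustration, a minimal sketch of one step of this constant-velocity motion model follows. The quaternion convention (w, x, y, z) and the helper names are assumptions made for the sketch; in the filter, the impulses V^w and Ω^c are what the process noise model above describes.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_rotvec(phi):
    """Quaternion for a rotation vector phi = axis * angle."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = phi / angle
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def propagate_camera_state(r_w, q_wc, v_w, w_c, V_w, Omega_c, dt):
    """One step of the constant-velocity model; V_w and Omega_c are the
    zero-mean Gaussian velocity impulses accumulated over the interval dt."""
    r_new = r_w + (v_w + V_w) * dt
    q_new = quat_mult(q_wc, quat_from_rotvec((w_c + Omega_c) * dt))
    v_new = v_w + V_w
    w_new = w_c + Omega_c
    return r_new, q_new / np.linalg.norm(q_new), v_new, w_new
```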
(2) Measurement model
Assume that the coordinates of a spatial point P in the world frame are P^w = (x^w  y^w  z^w)^T, that the rotation matrix from the world frame to the camera (visual) frame is R^{cw}, that the translation vector from the camera frame to the world frame, expressed in the world frame, is t, that the coordinates of P in the camera frame are P^c = (x^c  y^c  z^c)^T, and that the coordinates of P in the image frame are h_i = (u  v)^T. Then
P^c = R^{cw} (P^w - t)
(u  v  1)^T ∝ M P^c
where M is the intrinsic matrix of the vision system,
M = [ f/dx  0  u_0 ;  0  f/dy  v_0 ;  0  0  1 ]
f is the focal length, dx and dy are the horizontal and vertical pixel sizes, and u_0 and v_0 are the coordinates of the principal point in the image plane.
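For illustration, a minimal sketch of this measurement model: a world point is transformed into the camera frame and projected through the intrinsic matrix M. The frame conventions follow the equations above; the function and argument names are assumptions.

```python
import numpy as np

def project_point(P_w, R_cw, t_wc, f, dx, dy, u0, v0):
    """Project a world point into the image plane of the vision system.
    R_cw rotates world coordinates into the camera frame; t_wc is the
    position of the optical centre expressed in the world frame."""
    P_c = R_cw @ (P_w - t_wc)                  # point in the camera frame
    M = np.array([[f / dx, 0.0,    u0],
                  [0.0,    f / dy, v0],
                  [0.0,    0.0,    1.0]])      # intrinsic matrix M
    uvw = M @ P_c
    return uvw[:2] / uvw[2]                    # pixel coordinates (u, v)
```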
In said step S3, feature extraction and description are carried out on the target using the ORB feature operator, which improves the speed and accuracy of image feature extraction and provides a certain degree of rotational and scale invariance. ORB uses the FAST operator for feature detection and the BRIEF descriptor for feature description; both algorithms are very fast and are well suited to real-time measurement systems. The principle of FAST feature detection is: if a pixel lies in a different region from a sufficient number of pixels in its surrounding neighbourhood, the pixel may be a corner. For a grayscale image, if the gray value of the point is sufficiently larger or smaller than the gray values of enough points in its surrounding neighbourhood, the point may be a corner. In the present invention a circular region centred on the pixel is selected as its neighbourhood. For a candidate point p, the gray value I(u, v) of p is compared with the gray values of the 16 pixels on the discretized Bresenham circle of radius 3 pixels centred on p; if there are 9 consecutive pixels on the circle whose gray values are all greater than the gray value of p plus a threshold th, or all less than the gray value of p minus th, then p is regarded as a corner.
In order to give the detected corners directional information, the present invention adopts a simple and effective corner orientation method, the gray (intensity) centroid method. The gray centroid method assumes that there is an offset between the gray centroid of the corner neighbourhood and the corner itself, and this vector can be used to represent a direction. The present invention defines the neighbourhood moments as
m_{pq} = Σ_{u,v} u^p v^q I(u, v)
The centroid is
C = ( m_{10}/m_{00} , m_{01}/m_{00} )
The direction of the FAST feature point is defined as the angle between the feature point and the centroid:
θ = arctan(m_{01}, m_{10})   (33)
In order to improve the rotational invariance of the method, u and v must remain within the circular region of radius 3, i.e. u, v ∈ [-3, +3].
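For illustration, a small numpy sketch of the gray centroid orientation described above, computed over the radius-3 disc; the function name and the 7x7 patch convention are assumptions made for the sketch.

```python
import numpy as np

def fast_orientation(patch):
    """Intensity-centroid orientation of a FAST corner.
    `patch` is a 7x7 grayscale window centred on the corner, so that
    u, v are in [-3, +3]; only pixels inside the radius-3 disc contribute."""
    v, u = np.mgrid[-3:4, -3:4]                      # v: rows, u: columns
    I = patch.astype(float) * ((u ** 2 + v ** 2) <= 9)
    m10, m01 = (u * I).sum(), (v * I).sum()
    # centroid C = (m10/m00, m01/m00); the orientation is the angle of the
    # vector from the corner centre to C
    return np.arctan2(m01, m10)
```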
The BRIEF descriptor defines a binary test τ on an image neighbourhood (patch) of size S×S:
τ(p; x, y) = 1 if I(x) < I(y), and 0 otherwise
where I(x) is the gray value of the image patch at x = (u, v) and, similarly, I(y) is the gray value of the image patch at y = (u, v). Selecting n pixel-position pairs (x, y) yields the binary bit string of the BRIEF descriptor:
f_n(p) = Σ_{i=1}^{n} 2^{i-1} τ(p; x_i, y_i)
n can take values such as 128, 256 and 512; different choices affect speed, storage efficiency and recognition rate. To address the noise sensitivity of the ORB operator, the image is pre-processed with a Gaussian kernel filter; to address scale invariance, an image pyramid with 8 scales and a scale factor of 1.2 is built, each layer is Gaussian filtered, and ORB features are extracted on every layer.
Through the above processing, the ORB operator has strong robustness against illumination change and noise, and possesses rotational and scale invariance.
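For illustration, the same FAST + BRIEF (ORB) pipeline can be sketched with OpenCV, using the pyramid settings described above (8 levels, scale factor 1.2) and a Gaussian pre-filter; the image filename, feature count and blur kernel size are assumptions, not values taken from the patent.

```python
import cv2

# Gaussian pre-filtering followed by ORB (FAST detector + BRIEF descriptor)
# on an 8-level pyramid with scale factor 1.2, as described above.
gray = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)       # "target.png" is a placeholder
gray = cv2.GaussianBlur(gray, (5, 5), 0)                     # Gaussian kernel pre-filter
orb = cv2.ORB_create(nfeatures=500, scaleFactor=1.2, nlevels=8)
keypoints, descriptors = orb.detectAndCompute(gray, None)    # 256-bit binary descriptors
```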
In said step S4, initial feature matching is completed using the minimum-Hamming-distance criterion and fine feature matching is completed with single-point RANSAC; the corresponding point coordinates in the full-state model are then replaced and updated, and the final time prediction and measurement update are completed. The Hamming distance is the number of positions at which two equal-length pieces of data differ; we denote by dHamming(a, b) the Hamming distance between data a and b, obtained by XORing the two data and counting the number of 1 bits. Point pairs whose Hamming distance is less than a certain threshold are recorded as initial matching point pairs.
Making use of the state prediction obtained by the CKF as prior knowledge, one point is randomly selected from the initial matching point pairs to perform image point matching. The parameters of the RANSAC algorithm that need attention are mainly: the minimum number of data m needed to build the model, the number of algorithm iterations i_k, the model verification condition con, and the number of data points num for judging whether the model is reasonable. Let w denote the probability that a point chosen from the data set in one iteration is a correct data point; the probability that all m points chosen in one iteration are correct is w^m, and 1 - w^m is the probability that at least one of the m data is abnormal. Therefore the probability that m correct data are never chosen in i_k iterations is 1 - pr = (1 - w^m)^{i_k}, so the number of iterations is
i_k = log(1 - pr) / log(1 - w^m)
Under normal circumstances, pr is a constant given in advance. The verification condition con of the present invention is the Euclidean distance between the predicted feature point and the selected feature point. When enough points satisfy the verification condition, the iteration ends and the finally matched feature points are recorded. After the finally matched feature points are recorded, the corresponding point coordinates in the full-state model are replaced and updated, and the final time prediction and measurement update are completed.
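For illustration, a minimal sketch of the initial Hamming-distance matching and of the iteration count i_k derived above; the threshold value and the function names are assumptions (with single-point RANSAC, m = 1).

```python
import numpy as np
import cv2

def initial_matches(desc1, desc2, max_hamming=50):
    """Initial matching of ORB descriptors by minimum Hamming distance
    (brute force with cross-check); max_hamming is an assumed threshold."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return [m for m in bf.match(desc1, desc2) if m.distance < max_hamming]

def ransac_iterations(w, m=1, pr=0.99):
    """i_k = log(1 - pr) / log(1 - w**m); with single-point RANSAC m = 1,
    so each hypothesis is built from one randomly chosen correspondence."""
    return int(np.ceil(np.log(1.0 - pr) / np.log(1.0 - w ** m)))
```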
In summary, the single-point random sampling visual navigation method based on adaptive cubature Kalman filtering provided by the present invention has the following advantages and beneficial effects compared with the prior art. In filter design, analysis of the camera noise variance matrix shows that camera imaging noise is a class of noise related to the position of the imaging point, i.e. the measurement noise variance characteristics change as the imaging point changes; the proposed adaptive CKF algorithm can therefore more effectively overcome severe system nonlinearity and time-varying noise statistics. In feature extraction, target feature points are extracted with the FAST operator and described with the BRIEF operator to form ORB feature points, ensuring real-time feature extraction while preserving the rotational and scale invariance of the feature points. In feature registration, the state prediction obtained by the CKF is fully used as prior knowledge; with the single-point RANSAC technique only one datum needs to be chosen from the observation set to complete the fine registration iteration and the filter update. These results have important guiding significance for research on vision-based autonomous navigation technology.
Although the content of the present invention has been described in detail through the above preferred embodiments, it should be understood that the above description should not be regarded as limiting the present invention. Various modifications and alternatives will be apparent to those skilled in the art after reading the above content. Therefore, the protection scope of the present invention should be defined by the appended claims.

Claims (5)

1. A visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling, characterized in that it comprises the following steps:
S1, establishing the time-prediction and measurement-update models of the adaptive cubature Kalman filter;
S2, establishing the state model and measurement model of the vision system;
S3, extracting and describing target features with the ORB feature operator;
S4, completing initial feature matching with the minimum-Hamming-distance criterion and fine feature matching with single-point RANSAC, substituting the coordinates of the matched points into the full-state model, and completing the final time prediction and measurement update.
2. The visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling of claim 1, characterized in that, in said step S1, the time-prediction model, the measurement-update model, the process noise model and the measurement noise model are established using a suboptimal unbiased maximum a posteriori estimator and a cubature Kalman filter.
3. The visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling of claim 1, characterized in that, in said step S2, the full-state motion model and measurement model of the six-degree-of-freedom vision system are established, and the full-state information includes the vision system state and the feature point information.
4. The visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling of claim 1, characterized in that, in said step S3, feature extraction and description are carried out on the target using the ORB feature operator.
5. The visual navigation method based on adaptive cubature Kalman filtering and single-point random sampling of claim 1, characterized in that, in said step S4, the initial matching of ORB feature points is completed using the binary minimum-Hamming-distance method, fine matching of the feature points is completed using single-point random sampling, the corresponding point coordinates in the full-state model are replaced and updated, and the final time prediction and measurement update are completed.
CN201611061729.5A 2016-11-25 2016-11-25 Vision navigation method based on self adaptation volume Kalman filtering and single-point random sampling Pending CN106767841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611061729.5A CN106767841A (en) 2016-11-25 2016-11-25 Vision navigation method based on self adaptation volume Kalman filtering and single-point random sampling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611061729.5A CN106767841A (en) 2016-11-25 2016-11-25 Vision navigation method based on self adaptation volume Kalman filtering and single-point random sampling

Publications (1)

Publication Number Publication Date
CN106767841A true CN106767841A (en) 2017-05-31

Family

ID=58910985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611061729.5A Pending CN106767841A (en) 2016-11-25 2016-11-25 Vision navigation method based on self adaptation volume Kalman filtering and single-point random sampling

Country Status (1)

Country Link
CN (1) CN106767841A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959176A (en) * 2018-06-20 2018-12-07 西南交通大学 A kind of distributed vehicle speed estimation method based on adaptive volume Kalman filtering
CN110490933A (en) * 2019-09-18 2019-11-22 郑州轻工业学院 Non-linear state space Central Difference Filter method based on single point R ANSAC
CN112417948A (en) * 2020-09-21 2021-02-26 西北工业大学 Method for accurately guiding lead-in ring of underwater vehicle based on monocular vision
CN113587926A (en) * 2021-07-19 2021-11-02 中国科学院微小卫星创新研究院 Spacecraft space autonomous rendezvous and docking relative navigation method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103217175A (en) * 2013-04-10 2013-07-24 哈尔滨工程大学 Self-adaptive volume Kalman filtering method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103217175A (en) * 2013-04-10 2013-07-24 哈尔滨工程大学 Self-adaptive volume Kalman filtering method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIANG Chao et al., "Real-time visual SLAM algorithm based on extended Kalman filtering", Computer Engineering *
SHI Zhangsong et al., "Theory and Methods of Target Tracking and Data Fusion", National Defense Industry Press, 31 July 2010 *
GE Zhilei et al., 《导弹导引***原理》, National Defense Industry Press, 31 March 2016 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959176A (en) * 2018-06-20 2018-12-07 西南交通大学 A kind of distributed vehicle speed estimation method based on adaptive volume Kalman filtering
CN110490933A (en) * 2019-09-18 2019-11-22 郑州轻工业学院 Non-linear state space Central Difference Filter method based on single point R ANSAC
CN112417948A (en) * 2020-09-21 2021-02-26 西北工业大学 Method for accurately guiding lead-in ring of underwater vehicle based on monocular vision
CN112417948B (en) * 2020-09-21 2024-01-12 西北工业大学 Method for accurately guiding lead-in ring of underwater vehicle based on monocular vision
CN113587926A (en) * 2021-07-19 2021-11-02 中国科学院微小卫星创新研究院 Spacecraft space autonomous rendezvous and docking relative navigation method
CN113587926B (en) * 2021-07-19 2024-02-09 中国科学院微小卫星创新研究院 Spacecraft space autonomous rendezvous and docking relative navigation method

Similar Documents

Publication Publication Date Title
CN111210477B (en) Method and system for positioning moving object
CN110763251B (en) Method and system for optimizing visual inertial odometer
CN103236064B (en) A kind of some cloud autoegistration method based on normal vector
Li et al. A 4-point algorithm for relative pose estimation of a calibrated camera with a known relative rotation angle
Wefelscheid et al. Three-dimensional building reconstruction using images obtained by unmanned aerial vehicles
CN109084746A (en) Monocular mode for the autonomous platform guidance system with aiding sensors
CN106556412A (en) The RGB D visual odometry methods of surface constraints are considered under a kind of indoor environment
CN109631911B (en) Satellite attitude rotation information determination method based on deep learning target recognition algorithm
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
US9761008B2 (en) Methods, systems, and computer readable media for visual odometry using rigid structures identified by antipodal transform
CN116205947B (en) Binocular-inertial fusion pose estimation method based on camera motion state, electronic equipment and storage medium
CN106767841A (en) Vision navigation method based on self adaptation volume Kalman filtering and single-point random sampling
McKinnon et al. Towards automated and in-situ, near-real time 3-D reconstruction of coral reef environments
CN108805987A (en) Combined tracking method and device based on deep learning
CN112837352A (en) Image-based data processing method, device and equipment, automobile and storage medium
CN108182695A (en) Target following model training method and device, electronic equipment and storage medium
CN112154448A (en) Target detection method and device and movable platform
Müller et al. Squeezeposenet: Image based pose regression with small convolutional neural networks for real time uas navigation
Prisacariu et al. Robust 3D hand tracking for human computer interaction
CN104166995B (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
Angelopoulou et al. Vision-based egomotion estimation on FPGA for unmanned aerial vehicle navigation
White et al. GPS-denied navigation using SAR images and neural networks
CN113436313B (en) Three-dimensional reconstruction error active correction method based on unmanned aerial vehicle
Del Pizzo et al. Reliable vessel attitude estimation by wide angle camera
Panahandeh et al. Vision-aided inertial navigation using planar terrain features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170531)