CN111147868A - Free viewpoint video guide system - Google Patents

Free viewpoint video guide system

Info

Publication number
CN111147868A
Authority
CN
China
Prior art keywords
video
scene
viewpoint
viewpoints
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811302152.1A
Other languages
Chinese (zh)
Inventor
叶荣华
刘松
张冲
韦梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Lingpai Technology Co Ltd
Original Assignee
Guangzhou Lingpai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Lingpai Technology Co Ltd filed Critical Guangzhou Lingpai Technology Co Ltd
Priority to CN201811302152.1A
Publication of CN111147868A
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 - Server components or server architectures
    • H04N21/218 - Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 - Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 - Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 - Transmission of management data between client and server
    • H04N21/658 - Transmission by the client directed to the server
    • H04N21/6587 - Control parameters, e.g. trick play commands, viewpoint selection
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 - Monomedia components thereof
    • H04N21/816 - Monomedia components thereof involving special video data, e.g. 3D video
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 - Stereoscopic image analysis
    • H04N2013/0077 - Colour aspects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 - Stereoscopic image analysis
    • H04N2013/0081 - Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a free viewpoint video guide system comprising a sending end and a receiving end that are connected and transmit data using the RTP/RTCP communication protocols. A plurality of scene cameras arranged in a predetermined configuration sample a scene and collect video data. The sending end is responsible for acquiring depth information and for jointly compressing and transmitting each video stream together with its corresponding depth, while the receiving end is responsible for generating new viewpoints from the depth information. Tasks are thereby distributed reasonably, and the amount of transmitted information is kept as small as possible while the computational load of the receiving end is also reduced.

Description

Free viewpoint video guide system
Technical Field
The invention relates to the technical field of video playing, in particular to a free viewpoint video guide system.
Background
Video and audio systems have become the most influential mass media in today's information society. In conventional television and film systems, the viewer can only passively accept the viewing angle or shot switching chosen by the photographer or director, and switching between angles or shots produces jumps in viewpoint position and in the video picture, so the viewer lacks a sense of immersion and presence.
With the rapid development of digital multimedia technology in recent years, people have made higher demands on autonomy and sensory experience during video viewing. Free viewpoint video arose in this context. Through technologies such as multi-camera video synthesis and virtual viewpoint interpolation, free viewpoint video can provide video of a scene from any angle and at any scale, giving the viewing process greater freedom and three-dimensional immersion. Existing free viewpoint video techniques fall into two categories according to their basic principle. The first is based on a three-dimensional model: to obtain an image from any viewpoint in the scene, the scene must first be modeled in three dimensions, and a virtual viewpoint image is then generated by three-dimensional re-projection according to the selected viewpoint position. The second is based on two-dimensional image interpolation: adjacent viewpoint images are selected from the available discrete viewpoint images according to the chosen viewpoint position, and a virtual viewpoint image is synthesized by interpolation. However, such methods transmit too much information and place high demands on the user side, which must have strong computing capability and therefore carries a heavy computational burden.
Therefore, it is desirable to provide a free viewpoint video guide system that solves the above problems.
Disclosure of Invention
The invention aims to provide a free viewpoint video guide system in which the sending end acquires the depth information and jointly compresses and transmits each video stream together with its corresponding depth, while the receiving end uses the depth information to generate new viewpoints. Tasks are thus distributed reasonably: the amount of transmitted information is kept as small as possible while the computational load of the receiving end is reduced, complex operations are handed to the sending end with its strong computing capability, and the user end only needs to carry out the relatively light task of synthesizing new viewpoints. This solves the problems identified in the background above.
In order to achieve this purpose, the invention provides the following technical scheme: a free viewpoint video guide system comprises a sending end and a receiving end, wherein the sending end and the receiving end are connected and transmit data using the RTP/RTCP communication protocols; the sending end comprises a plurality of scene cameras and an encoding end, the encoding end being arranged at the output ends of the scene cameras; the receiving end comprises a decoding end and a user end, the user end being arranged at the output end of the decoding end;
the plurality of scene cameras are arranged in a certain way to sample a scene and collect video data;
the sending end is responsible for acquiring depth information and for jointly compressing and transmitting each video stream together with its corresponding depth, and the receiving end is responsible for generating new viewpoints from the depth information;
after completing the free viewpoint video acquisition of a scene, the plurality of scene cameras send each video stream to the encoding end;
the encoding end performs color correction on the video information and then encodes the video, while also acquiring the scene depth information: it performs geometric correction on the pictures of different viewpoints in the video information, obtains a dense disparity field between the scene viewpoints by stereo matching, converts between disparity and depth through the scene camera calibration data, and thereby obtains the depth information of each viewpoint; the depth information, the scene camera calibration data and the compressed free viewpoint video code stream are then packaged together and transmitted to the receiving end;
the decoding end selects two known viewpoints which are closest to the left and right of the virtual viewpoint from the reference viewpoint code stream as reference viewpoints according to the viewing viewpoints selected by the user, code stream data of the two viewpoints are respectively extracted, decoding is carried out by using a JMVC decoding tool, a video picture and a depth value of the reference viewpoints are respectively obtained, an image of the virtual viewpoint is obtained by using a DIBR drawing method according to information of the two reference viewpoints and camera calibration data, and the image is transmitted to the user end to be played.
Preferably, the geometric correction mainly serves stereo matching: the original images are re-projected to virtual horizontal camera positions using the scene camera calibration data, so that only horizontal disparity remains in the pictures (a sketch of one such re-projection is given after the preferred features below).
Preferably, the color correction is performed in the YUV color space based on a light imaging model of the scene camera.
Preferably, the user end is a common 2D display, a head/viewpoint-tracking stereoscopic display or a 3D stereoscopic display.
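As a hedged illustration of re-projecting views onto virtual horizontal camera positions, the sketch below rectifies a camera pair with OpenCV. The patent only states the goal (purely horizontal disparity, using the calibration data); the specific rectification routine used here is an assumption, not the patent's prescribed method.

```python
import cv2

def rectify_pair(img_left, img_right, K1, d1, K2, d2, R, T, size):
    """Re-project two views onto virtual horizontal cameras so that only
    horizontal disparity remains (epipolar lines become horizontal).

    K1, d1, K2, d2: intrinsics and distortion of the two scene cameras;
    R, T: rotation and translation from camera 1 to camera 2 (calibration data);
    size: (width, height) of the images.
    """
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect_left = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
    rect_right = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
    return rect_left, rect_right, Q  # Q relates disparity to depth after rectification
```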
The invention has the technical effects and advantages that:
1. The invention hands complex operations to the sending end, which has strong computing capability, so the computational burden of the receiving end is reduced and the user end only needs to carry out the relatively light task of synthesizing new viewpoints, improving practicability;
2. The invention uses the sending end to acquire the depth information and to jointly compress and transmit each video stream together with its corresponding depth, while the receiving end is responsible for generating new viewpoints from the depth information; tasks are distributed reasonably, and the amount of transmitted information is kept as small as possible while the computational load of the receiving end is reduced;
3. Color correction based on a light imaging model of the scene camera makes the colors of the pictures acquired by the scene cameras consistent, which improves the efficiency of free viewpoint coding and the visual quality of the virtual viewpoint.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention.
FIG. 2 is a diagram of the structure of the encoding end of the present invention.
FIG. 3 is a diagram illustrating a decoding end structure according to the present invention.
In the figure: the system comprises a sending end 1, a receiving end 2, a scene camera 3, an encoding end 4, a decoding end 5 and a user end 6.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
A free viewpoint video guide system, as shown in FIG. 1, includes a sending end 1 and a receiving end 2, where the sending end 1 and the receiving end 2 are connected and transmit data using the RTP/RTCP communication protocols; the sending end 1 includes a plurality of scene cameras 3 and an encoding end 4, the encoding end 4 being disposed at the output ends of the scene cameras 3; the receiving end 2 includes a decoding end 5 and a user end 6, the user end 6 being disposed at the output end of the decoding end 5;
the plurality of scene cameras 3 are arranged to sample a scene and collect video data, wherein the arrangement of the scene cameras 3 is one of rectangular arrangement, rhombic arrangement or semicircular arrangement.
The sending end 1 is responsible for acquiring depth information and for jointly compressing and transmitting each video stream together with its corresponding depth, and the receiving end 2 is responsible for generating new viewpoints from the depth information;
after the scene cameras 3 finish the free viewpoint video acquisition, each video stream is sent to the encoding end 4.
Example 2
As shown in FIGS. 1-3, in the free viewpoint video guide system, the encoding end 4 performs color correction on the video information and encodes the video while also acquiring the scene depth information: it performs geometric correction on the pictures of different viewpoints in the video information, obtains a dense disparity field between the scene viewpoints by stereo matching, converts between disparity and depth through the calibration data of the scene cameras 3, and thereby acquires the depth information of each viewpoint; the depth information, the calibration data of the scene cameras 3 and the compressed free viewpoint video code stream are packaged together and transmitted to the receiving end 2;
the decoding end 5 selects two known viewpoints which are closest to the virtual viewpoint from the reference viewpoint code stream as reference viewpoints according to the viewing viewpoints selected by the user, respectively extracts code stream data of the two viewpoints, decodes the code stream data by using a JMVC decoding tool to respectively obtain video pictures and depth values of the reference viewpoints, obtains images of the virtual viewpoints by using a DIBR rendering method according to information of the two reference viewpoints and camera calibration data, and transmits the images to the user end 6 for playing.
The geometric correction mainly serves stereo matching: the original images are re-projected to virtual horizontal camera positions using the calibration data of the scene cameras 3, so that only horizontal disparity remains in the pictures and the epipolar lines become horizontal, which greatly simplifies the stereo matching algorithm. The color correction is performed in the YUV color space based on a light imaging model of the scene cameras 3, so that the colors of the pictures acquired by the scene cameras 3 are consistent, which improves the efficiency of free viewpoint coding and the visual quality of the virtual viewpoint. The user end 6 is a common 2D display, a head/viewpoint-tracking stereoscopic display or a 3D stereoscopic display.
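As one deliberately simple reading of color correction in the YUV color space, the sketch below fits a per-channel gain and offset against a reference camera. The actual light imaging model of the scene cameras 3 is not specified here, so this linear model and the function names are assumptions for illustration only.

```python
import numpy as np

def fit_yuv_correction(src_yuv, ref_yuv):
    """Fit per-channel gain/offset so that src matches a reference camera in YUV.

    src_yuv, ref_yuv: float arrays of shape (H, W, 3) holding corresponding views
    of the same scene. A simple linear model standing in for the light imaging
    model mentioned in the text; assumed, not prescribed by the patent.
    """
    params = []
    for c in range(3):
        s, r = src_yuv[..., c].ravel(), ref_yuv[..., c].ravel()
        gain = np.cov(s, r, bias=True)[0, 1] / (np.var(s) + 1e-9)  # least-squares slope
        offset = r.mean() - gain * s.mean()
        params.append((gain, offset))
    return params

def apply_yuv_correction(src_yuv, params):
    """Apply the fitted per-channel correction to a frame."""
    out = np.empty_like(src_yuv, dtype=np.float64)
    for c, (gain, offset) in enumerate(params):
        out[..., c] = gain * src_yuv[..., c] + offset
    return out
```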
By handing complex operations to the sending end 1, which has strong computing capability, the computational burden of the receiving end 2 is reduced; the user end 6 only needs to carry out the relatively light task of synthesizing new viewpoints, which improves practicability.
Example 3
As shown in FIG. 2, in the free viewpoint video guide system, after the plurality of scene cameras 3 complete the free viewpoint video acquisition of a scene, each video stream is sent to the encoding end 4. The encoding end 4 mainly completes two functions. The first is free viewpoint video compression coding, for which the reference software JMVC is adopted as the compression tool. Because each scene camera 3 has individual physical differences, such as the brightness gain of its photoelectric sensor and inconsistencies in the response of each color component, the video colors of the same scene may differ between cameras; this color difference affects both the efficiency of compression coding and the virtual viewpoint picture drawn at the decoding end 5, so color correction between the cameras is required before multi-view video coding.
The second function of the encoding end 4 is to obtain the depth information of the scene. At present, depth estimation is mainly obtained by stereo matching, which yields a dense disparity field between viewpoints; the so-called disparity is the difference between the image coordinates of the same scene point in the pictures of different viewpoints. Disparity and depth can be converted into each other through the calibration data of the scene cameras 3, so they are essentially equivalent. Before stereo matching, geometric correction is needed between the pictures of different viewpoints; its purpose is to make the pictures of different viewpoints differ only in the horizontal direction, which greatly simplifies many stereo matching algorithms. Because the sending end 1 completes the depth estimation of the scene, existing complex stereo matching algorithms can conveniently be run offline.
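To make the offline stereo matching step and the disparity-depth relationship concrete, the sketch below computes a dense disparity field with OpenCV's semi-global matcher and converts it to depth. The patent does not name a particular matcher, so this choice, as well as the model Z = f * B / d with focal length f (in pixels) and baseline B taken from the calibration data, is an illustrative assumption for rectified, horizontally aligned cameras.

```python
import cv2
import numpy as np

def estimate_depth(left_gray, right_gray, focal_px, baseline_m, max_disp=128, block=5):
    """Sender-side, offline depth estimation for one rectified viewpoint pair.

    Semi-global matching is used only as an example matcher; disparity d and
    depth Z are interconverted with Z = focal_px * baseline_m / d.
    """
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=max_disp,      # must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]  # invalid pixels stay at 0
    return disp, depth
```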
In this embodiment, the sending end 1 acquires the depth information and jointly compresses and transmits each video stream together with its corresponding depth, and the receiving end 2 uses the depth information to generate new viewpoints; tasks are distributed reasonably, and the amount of transmitted information is kept as small as possible while the computational load of the receiving end 2 is reduced.
The working principle of the invention is as follows: a plurality of scene cameras 3 are arranged in a predetermined configuration to sample a scene, collect video data and send each video stream to the encoding end 4. The encoding end 4 performs color correction on the video information and encodes the video while also acquiring the scene depth information: it performs geometric correction on the pictures of different viewpoints in the video information, obtains a dense disparity field between the scene viewpoints by stereo matching, converts between disparity and depth through the calibration data of the scene cameras 3, and thereby acquires the depth information of each viewpoint; the depth information, the calibration data of the scene cameras 3 and the compressed free viewpoint video code stream are packaged together and transmitted to the receiving end 2. The decoding end 5 selects, according to the viewing viewpoint chosen by the user, the two known viewpoints nearest to the left and right of the virtual viewpoint from the reference viewpoint code stream as reference viewpoints, extracts the code stream data of these two viewpoints, decodes them with the JMVC decoding tool to obtain the video picture and depth values of each reference viewpoint, obtains the image of the virtual viewpoint with the DIBR rendering method from the information of the two reference viewpoints and the camera calibration data, and transmits the image to the user end 6 for playback.
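A bare-bones sketch of the DIBR step performed at the decoding end 5 is given below: it forward-warps a single reference view into the virtual camera using its depth map and camera calibration data. It omits hole filling and the blending of the second reference view that a practical decoder would add; the world-to-camera convention and function names are assumptions for illustration.

```python
import numpy as np

def dibr_warp(ref_img, ref_depth, K_ref, R_ref, t_ref, K_virt, R_virt, t_virt):
    """Forward-warp one reference view into a virtual camera (x_cam = R @ X_world + t)."""
    h, w = ref_depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1).astype(np.float64)

    # Back-project reference pixels to world coordinates using their depth.
    cam_pts = (np.linalg.inv(K_ref) @ pix) * ref_depth.reshape(1, -1)
    world_pts = R_ref.T @ (cam_pts - t_ref.reshape(3, 1))

    # Re-project into the virtual camera.
    virt_cam = R_virt @ world_pts + t_virt.reshape(3, 1)
    z = virt_cam[2]
    proj = K_virt @ virt_cam

    valid = (z > 1e-6) & (ref_depth.reshape(-1) > 0)
    x = np.full(z.shape, -1, dtype=int)
    y = np.full(z.shape, -1, dtype=int)
    x[valid] = np.round(proj[0, valid] / z[valid]).astype(int)
    y[valid] = np.round(proj[1, valid] / z[valid]).astype(int)
    valid &= (x >= 0) & (x < w) & (y >= 0) & (y < h)

    out = np.zeros_like(ref_img)
    zbuf = np.full((h, w), np.inf)
    src_u, src_v = u.reshape(-1)[valid], v.reshape(-1)[valid]
    for xi, yi, zi, su, sv in zip(x[valid], y[valid], z[valid], src_u, src_v):
        if zi < zbuf[yi, xi]:                 # keep the nearest surface (z-buffer)
            zbuf[yi, xi] = zi
            out[yi, xi] = ref_img[sv, su]
    return out
```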
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments or portions thereof without departing from the spirit and scope of the invention.

Claims (4)

1. A free viewpoint video guide system, comprising a sending end (1) and a receiving end (2), characterized in that: the sending end (1) and the receiving end (2) are connected and transmit data using the RTP/RTCP communication protocols; the sending end (1) comprises a plurality of scene cameras (3) and an encoding end (4), the encoding end (4) being arranged at the output ends of the scene cameras (3); the receiving end (2) comprises a decoding end (5) and a user end (6), the user end (6) being arranged at the output end of the decoding end (5);
the scene cameras (3) are arranged in a certain way to sample the scene and collect video data;
the sending end (1) is responsible for acquiring depth information and for jointly compressing and transmitting each video stream together with its corresponding depth, and the receiving end (2) is responsible for generating new viewpoints from the depth information;
after the scene cameras (3) finish the free viewpoint video acquisition, all paths of video streams are sent to the encoding end (4);
the encoding end (4) performs color correction on the video information and encodes the video while also acquiring the scene depth information: it performs geometric correction on the pictures of different viewpoints in the video information, obtains a dense disparity field between the scene viewpoints by stereo matching, converts between disparity and depth through the calibration data of the scene cameras (3), and thereby acquires the depth information of each viewpoint; the depth information, the calibration data of the scene cameras (3) and the compressed free viewpoint video code stream are packaged together and transmitted to the receiving end (2);
the decoding end (5) selects, according to the viewing viewpoint chosen by the user, the two known viewpoints nearest to the left and right of the virtual viewpoint from the reference viewpoint code stream as reference viewpoints, extracts the code stream data of these two viewpoints, decodes them with the JMVC decoding tool to obtain the video picture and depth values of each reference viewpoint, obtains the image of the virtual viewpoint with the DIBR rendering method from the information of the two reference viewpoints and the camera calibration data, and transmits the image to the user end (6) for playback.
2. The free viewpoint video guide system according to claim 1, wherein: the geometric correction mainly serves stereo matching; the original images are re-projected to virtual horizontal camera positions using the calibration data of the scene cameras (3), so that only horizontal disparity remains in the pictures.
3. The free viewpoint video guide system according to claim 1, wherein: the color correction is performed in the YUV color space based on a light imaging model of the scene cameras (3).
4. The free viewpoint video guide system according to claim 1, wherein: the user end (6) is a common 2D display, a head/viewpoint-tracking stereoscopic display or a 3D stereoscopic display.
CN201811302152.1A 2018-11-02 2018-11-02 Free viewpoint video guide system Pending CN111147868A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811302152.1A CN111147868A (en) 2018-11-02 2018-11-02 Free viewpoint video guide system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811302152.1A CN111147868A (en) 2018-11-02 2018-11-02 Free viewpoint video guide system

Publications (1)

Publication Number Publication Date
CN111147868A (en) 2020-05-12

Family

ID=70515490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811302152.1A Pending CN111147868A (en) 2018-11-02 2018-11-02 Free viewpoint video guide system

Country Status (1)

Country Link
CN (1) CN111147868A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453662A (en) * 2007-12-03 2009-06-10 华为技术有限公司 Stereo video communication terminal, system and method
US20110261050A1 (en) * 2008-10-02 2011-10-27 Smolic Aljosa Intermediate View Synthesis and Multi-View Data Signal Extraction
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11343485B1 (en) * 2020-08-24 2022-05-24 Ambarella International Lp Virtual horizontal stereo camera
CN112738424A (en) * 2021-01-26 2021-04-30 张承斌 Multi-lens playing and presenting method based on video media
CN113905186A (en) * 2021-09-02 2022-01-07 北京大学深圳研究生院 Free viewpoint video picture splicing method, terminal and readable storage medium
WO2023029204A1 (en) * 2021-09-02 2023-03-09 北京大学深圳研究生院 Free viewpoint video screen splicing method, terminal, and readable storage medium
CN113905186B (en) * 2021-09-02 2023-03-10 北京大学深圳研究生院 Free viewpoint video picture splicing method, terminal and readable storage medium
CN114760455A (en) * 2022-03-30 2022-07-15 广东博华超高清创新中心有限公司 Multi-channel video multi-view scene coding and decoding method based on AVS3 coding framework
CN114760455B (en) * 2022-03-30 2023-10-13 广东博华超高清创新中心有限公司 Multi-channel video multi-view scene coding and decoding method based on AVS3 coding framework


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200512