CN112288737A - Super-resolution image-based hull number detection method - Google Patents

Super-resolution image-based hull number detection method

Info

Publication number
CN112288737A
CN112288737A (application CN202011295022.7A)
Authority
CN
China
Prior art keywords
super-resolution
image
hull number detection
hull number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011295022.7A
Other languages
Chinese (zh)
Inventor
王懋
黄宏斌
刘洪江
刘丽华
吴继冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202011295022.7A priority Critical patent/CN112288737A/en
Publication of CN112288737A publication Critical patent/CN112288737A/en
Legal status: Pending

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
              • G06N3/08 Learning methods
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T3/00 Geometric image transformations in the plane of the image
            • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
              • G06T3/4053 Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
          • G06T5/00 Image enhancement or restoration
            • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
          • G06T7/00 Image analysis
            • G06T7/0002 Inspection of images, e.g. flaw detection
            • G06T7/10 Segmentation; Edge detection
              • G06T7/11 Region-based segmentation
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/20 Special algorithmic details
              • G06T2207/20081 Training; Learning
              • G06T2207/20084 Artificial neural networks [ANN]
              • G06T2207/20212 Image combination
                • G06T2207/20221 Image fusion; Image merging
            • G06T2207/30 Subject of image; Context of image processing
              • G06T2207/30168 Image quality inspection
              • G06T2207/30176 Document

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hull number detection method based on super-resolution images, comprising the following steps: inputting an original ship image into an SR-Net model to generate a super-resolution ship image, where SR-Net is a single-image super-resolution network (SISR-Net) based on the ZSSR method; and inputting the super-resolution ship image into a hull number detection network and outputting the hull number detection result. When generating the super-resolution ship image, the original ship image is divided into k × k grids, the grids are super-resolved in parallel using the ZSSR method, and the newly generated super-resolution grids are then assembled into the super-resolution image in their original order. The method makes hull number detection more complete, reduces detection error, and improves detection precision; it reduces the time consumed by super-resolving a single image; and the detected hull number characters can be enclosed in a single word-level bounding box.

Description

Super-resolution image-based hull number detection method
Technical Field
The invention belongs to the technical field of natural scene text detection, relates to a method for detecting a ship's hull number in an image, and particularly relates to a hull number detection method based on super-resolution images.
Background
Since R-CNN introduced deep learning to target detection, deep-learning-based detection methods have achieved remarkable results in practical applications, and related tasks such as text detection, instance segmentation, and text recognition have likewise advanced rapidly.
CRAFT is a text detection method that can accurately locate each character in a natural image; see: Baek Y, Lee B, Han D, et al. Character Region Awareness for Text Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 9365-9374. A hull number is a text-type target, and searching for it on ship images by eye is inefficient, so automatic hull number detection with deep learning has great application value for maritime military, shipping, and related needs. A ship image generally contains no text other than the hull number, which usually takes the form of a number or a combination of numbers and letters. When the hull number combines numbers and letters, the spacing between the letter group and the number group is slightly larger, and the detected letters and numbers can still be enclosed in the same text box by adjusting the affinity threshold between characters. However, external factors such as weather, shooting distance, and shooting angle deform and blur the hull number, and the hull number occupies only a small area of the ship image, so performing hull number detection directly on the original ship image gives poor results.
Image quality affects the accuracy of text detection, and one way to improve text detection accuracy is to apply SR to the original image. Super-Resolution (SR) is the creation of a High-Resolution (HR) image from a Low-Resolution (LR) image. ZSSR is a method that trains on, and generates an SR image from, a single image directly, exploiting the repeatability of information inside the picture; see: Shocher A, Cohen N, Irani M. "Zero-Shot" Super-Resolution Using Deep Internal Learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 3118-3126. ZSSR is the first unsupervised SR algorithm using Convolutional Neural Networks (CNNs); it trains a small CNN directly on the test image using the repeatability of internal information in that single image.
However, hull number detection in the prior art suffers from the excessive time consumed by single-image SR and from detected hull numbers being cut off, among other problems, so its detection results are not ideal. A hull number detection method is therefore needed that makes detection more complete, with smaller errors and higher precision.
Disclosure of Invention
In view of the above, the present invention provides a hull number detection method based on super-resolution images that combines the advantages of the ZSSR and CRAFT methods. The original image is divided into a plurality of grid regions and SR is performed on the regions in parallel, reducing the time consumed by single-image SR; hull number detection is performed on the SR ship image, improving detection precision; and the affinity threshold is adjusted so that all detected characters are enclosed in one word-level bounding box, making hull number detection more complete.
To this end, the hull number detection method based on super-resolution images comprises the following steps:
step 1, inputting an original ship image into an SR-Net model to generate a super-resolution ship image, where SR-Net is a single-image super-resolution network (SISR-Net) based on the ZSSR method;
step 2, inputting the super-resolution ship image into the hull number detection network and outputting the hull number detection result.
In step 1, when generating the super-resolution ship image, the original ship image is divided into k × k grids, the grids are super-resolved in parallel using the ZSSR method, and the newly generated super-resolution grids are then assembled into the super-resolution image in their original order.
The hull number detection network uses the CRAFT method, which encodes the probability of each character center with a Gaussian heat map and then learns the region score and the affinity score from this heat-map representation.
Specifically, to generate the ground-truth region score and affinity score on a synthetic image, the hull number detection network follows three steps: first, prepare a two-dimensional isotropic Gaussian map; second, compute the perspective transform between the Gaussian map region and each character box; third, warp the Gaussian map onto the character box area.
Furthermore, the hull number detection network must generate character bounding boxes from word-level labels. To reflect the reliability of the interim model's predictions, the value of the confidence map over each word box is proportional to the number of detected characters divided by the number of ground-truth characters, and this confidence map weights the learning during training.
Further, the post-processing by which the hull number detection network obtains word-level bounding boxes has three steps: first, initialize a binary map M covering the image; second, label the connected components of M; third, obtain each bounding box by finding the rotated rectangle of minimum area enclosing the connected component of each label.
In particular, the hull number detection network trains its model using synthetic images with character-level annotations and real images with word-level annotations.
Compared with the prior art, the method has the following advantages and beneficial effects: the hull number detection network based on super-resolution images makes hull number detection more complete, reduces detection error, and improves detection precision. Dividing the ship image into multiple grids and super-resolving them in parallel reduces the time consumed by single-image super-resolution. In addition, by lowering the affinity threshold between characters, the detected hull number characters can be enclosed in one word-level bounding box.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of the present invention;
FIG. 3 is a diagram of example hull numbers in an embodiment of the present invention;
FIG. 4 is an exemplary diagram of the different numbers of text bounding boxes output under different affinity thresholds in this embodiment.
Detailed Description
The invention is further described with reference to the accompanying drawings, but the invention is not limited in any way, and any alterations or substitutions based on the teaching of the invention are within the scope of the invention.
As shown in fig. 1, the hull number detection method based on super-resolution images includes the following steps:
step 1, inputting an original ship image into an SR-Net model to generate a super-resolution ship image, where SR-Net is a single-image super-resolution network (SISR-Net) based on the ZSSR method;
step 2, inputting the super-resolution ship image into the hull number detection network and outputting the hull number detection result.
In step 1, when generating the super-resolution ship image, the original ship image is divided into k × k grids, the grids are super-resolved in parallel using the ZSSR method, and the newly generated super-resolution grids are then assembled into the super-resolution image in their original order.
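The grid split, parallel per-grid SR, and reassembly described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `upscale_nearest` is a hypothetical stand-in for the per-grid ZSSR model, and the sequential list comprehension marks where a process pool would run the grids in parallel.

```python
import numpy as np

def split_into_grids(img, k):
    """Split an image into k*k grid cells, listed row-major (original order)."""
    rows = np.array_split(img, k, axis=0)
    return [cell for row in rows for cell in np.array_split(row, k, axis=1)]

def upscale_nearest(cell, s=2):
    """Stand-in for the per-grid ZSSR model: nearest-neighbour upscale by s."""
    return cell.repeat(s, axis=0).repeat(s, axis=1)

def merge_grids(cells, k):
    """Reassemble the k*k super-resolved cells in their original row-major order."""
    rows = [np.concatenate(cells[i * k:(i + 1) * k], axis=1) for i in range(k)]
    return np.concatenate(rows, axis=0)

def grid_super_resolve(img, k=3, s=2):
    cells = split_into_grids(img, k)
    # sequential here; a process pool would run the k*k cells in parallel
    sr_cells = [upscale_nearest(c, s) for c in cells]
    return merge_grids(sr_cells, k)
```

Because `np.array_split` tolerates sizes not divisible by k, the same split/merge pair also works for arbitrary image dimensions.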
The hull number detection network uses the CRAFT method, which encodes the probability of each character center with a Gaussian heat map and then learns the region score and the affinity score from this heat-map representation.
As shown in FIG. 2, the method of the present invention comprises SR-Net and a text detection network (i.e. the hull number detection network). Because performing single-image SR directly on the full original image takes a long time, the original image is divided into multiple grids, SR is performed on the grid regions independently and in parallel based on ZSSR, the new SR image is then assembled from the newly generated SR grid regions in their original order, and finally hull number detection is performed on the assembled SR image. Experiments show that detection precision on the SR image is significantly improved.
ZSSR is the first unsupervised SR algorithm using Convolutional Neural Networks (CNNs); it trains a small CNN directly on the test image using the repeatability of internal information in that single image. Since the training set consists of only the single test image, ZSSR first augments the test image to extract more LR-HR image pairs for training. Augmentation is done by downsampling the test image into many smaller versions, called "HR parents". Each "HR parent" is then downsampled by the desired SR scale factor s to obtain its "LR child". The "HR parent" and "LR child" thus form one training pair, and the resulting training set consists of many image-specific LR-HR pairs on which SR-Net can train. Furthermore, ZSSR enriches the training set by rotating every LR-HR pair to 4 angles (0°, 90°, 180°, 270°) and flipping it, adding ×8 image-specific training instances, and finally takes the median of the 8 corresponding outputs as the final output.
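The single-image training-set construction described above can be sketched as follows. This is a simplified sketch, not ZSSR's code: strided subsampling stands in for the paper's resize kernel, and the eightfold augmentation is shown without the inverse transforms ZSSR applies before taking the median of the 8 outputs.

```python
import numpy as np

def make_lr_hr_pairs(test_img, parent_strides=(1, 2, 3), s=2):
    """Build ZSSR-style LR-HR training pairs from a single test image.
    Each downsampled copy of the test image is an 'HR parent'; downsampling
    it again by the SR scale factor s yields its 'LR child'."""
    pairs = []
    for d in parent_strides:
        parent = test_img[::d, ::d]   # smaller version of the test image ("HR parent")
        child = parent[::s, ::s]      # the parent downscaled by s ("LR child")
        pairs.append((child, parent))
    return pairs

def augment_eightfold(img):
    """The 4 rotations x flip = 8 geometric variants ZSSR uses to enrich
    training (and, at test time, to build the 8-output median ensemble)."""
    variants = []
    for k in range(4):
        r = np.rot90(img, k)
        variants.append(r)
        variants.append(np.fliplr(r))
    return variants
```

Training then treats each (child, parent) pair as an (input, target) example at the desired scale factor s.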
The quality of an SR image can be judged against the HR image corresponding to the original by the Peak Signal-to-Noise Ratio (PSNR) or the Structural Similarity Index (SSIM). A higher PSNR indicates higher SR image quality; generally a PSNR above 20 dB indicates good quality. SSIM lies between 0 and 1, and the closer the value is to 1, the better the SR image quality. From the evaluation of SR image quality (the HR and LR test images are from the Set14 dataset), the PSNR and SSIM values are essentially unchanged as the number of grids increases, but both decrease as the SR scale factor increases.
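The two quality measures can be computed as below. This is a simplified sketch: the SSIM shown uses a single global window, whereas the standard SSIM slides an 11×11 Gaussian window over the image and averages the local scores.

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference HR image and an SR result."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, img, max_val=255.0):
    """Single-window (global) SSIM with the usual stabilizing constants c1, c2."""
    x, y = ref.astype(np.float64), img.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images PSNR is infinite and SSIM is exactly 1; any distortion lowers both.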
A hull number is a text object generally composed of letters and numbers, and the characters of a given ship's hull number are essentially uniform in color and font, as shown in fig. 3. To detect hull numbers on the SR ship image with a natural scene text detection method, this invention selects CRAFT, a text detection method based on character-level annotation and currently among the most advanced natural scene text detectors, whose main aim is to locate each individual character accurately in a natural image. To do this, CRAFT trains a deep neural network to predict character regions and the affinity between characters.
Specifically, to generate the ground-truth region score and affinity score on a synthetic image, the hull number detection network follows three steps: first, prepare a two-dimensional isotropic Gaussian map; second, compute the perspective transform between the Gaussian map region and each character box; third, warp the Gaussian map onto the character box area.
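The three label-generation steps above can be sketched end to end: prepare the isotropic Gaussian, estimate the perspective transform (here by the standard direct-linear-transform construction), and warp the Gaussian onto a character quadrilateral in the score map. This is an illustrative re-implementation under simplifying assumptions, not CRAFT's code; nearest-neighbour sampling keeps it short.

```python
import numpy as np

def isotropic_gaussian(size=64, sigma_ratio=0.25):
    """Step 1: a 2-D isotropic Gaussian on a square canvas, peak value 1 at the centre."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    sigma = size * sigma_ratio
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

def homography(src, dst):
    """Step 2: the 3x3 perspective transform taking 4 src points to 4 dst points (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=np.float64))
    return vt[-1].reshape(3, 3)

def warp_gaussian_to_quad(canvas_shape, quad, size=64):
    """Step 3: warp the square Gaussian onto a character quadrilateral in the score map.
    quad lists the 4 corners (x, y) clockwise from top-left."""
    g = isotropic_gaussian(size)
    square = [(0, 0), (size - 1, 0), (size - 1, size - 1), (0, size - 1)]
    H = homography(quad, square)  # canvas coords -> Gaussian-square coords
    out = np.zeros(canvas_shape)
    ys, xs = np.mgrid[0:canvas_shape[0], 0:canvas_shape[1]]
    pts = np.stack([xs.ravel().astype(float), ys.ravel().astype(float), np.ones(xs.size)])
    mapped = H @ pts
    mx, my = mapped[0] / mapped[2], mapped[1] / mapped[2]
    inside = (mx >= 0) & (mx <= size - 1) & (my >= 0) & (my <= size - 1)
    gi = g[np.clip(np.round(my).astype(int), 0, size - 1),
           np.clip(np.round(mx).astype(int), 0, size - 1)]
    out.ravel()[inside] = gi[inside]
    return out
```

Summing such warped maps, one per character (or per character pair for the affinity score), yields the full ground-truth score map.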
Furthermore, the hull number detection network must generate character bounding boxes from word-level labels. To reflect the reliability of the interim model's predictions, the value of the confidence map over each word box is proportional to the number of detected characters divided by the number of ground-truth characters, and this confidence map weights the learning during training.
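A minimal sketch of that confidence map: each word box gets the ratio of detected to ground-truth characters (clipped to [0, 1]), and pixels outside any word box are set to 1. The ratio below follows the description in the text; the CRAFT paper itself uses the closely related closed form s = (l - min(l, |l - l_c|)) / l.

```python
import numpy as np

def word_confidence(n_detected, n_gt):
    """Per-word confidence as described above: detected character count over
    ground-truth character count, clipped to [0, 1]."""
    if n_gt == 0:
        return 0.0
    return min(n_detected, n_gt) / n_gt

def confidence_map(word_boxes, canvas_shape):
    """Pixel confidence map: each entry ((x0, y0, x1, y1), n_det, n_gt) fills its
    word-box region with the word confidence; all other pixels are set to 1."""
    conf = np.ones(canvas_shape)
    for (x0, y0, x1, y1), n_det, n_gt in word_boxes:
        conf[y0:y1, x0:x1] = word_confidence(n_det, n_gt)
    return conf
```

During training the per-pixel loss is multiplied by this map, so poorly split words contribute less.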
Further, the post-processing by which the hull number detection network obtains word-level bounding boxes has three steps: first, initialize a binary map M covering the image; second, label the connected components of M; third, obtain each bounding box by finding the rotated rectangle of minimum area enclosing the connected component of each label. In addition, it can also generate polygons around the whole character region to handle curved text.
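The three post-processing steps above can be sketched as follows, with the simplifications flagged in comments: connected components are found with a plain BFS, and the returned box is axis-aligned, where CRAFT finds the minimum-area rotated rectangle (e.g. via cv2.minAreaRect). Lowering affinity_thr merges neighbouring characters into one component, which is how a single word-level box per word is obtained.

```python
import numpy as np
from collections import deque

def word_boxes(region_score, affinity_score, region_thr=0.7, affinity_thr=0.4):
    """CRAFT-style post-processing sketch.
    Step 1: binarize the combined score map into M.
    Step 2: label the connected components of M (4-neighbour BFS).
    Step 3: box each component; an axis-aligned (x0, y0, x1, y1) box is returned
    here, whereas CRAFT fits the minimum-area rotated rectangle."""
    M = (region_score >= region_thr) | (affinity_score >= affinity_thr)
    h, w = M.shape
    labels = np.zeros((h, w), dtype=int)
    boxes, current = [], 0
    for i in range(h):
        for j in range(w):
            if M[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                ys, xs = [i], [j]
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and M[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            ys.append(ny)
                            xs.append(nx)
                            q.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

With a high affinity threshold two separated characters yield two boxes; once the affinity response between them clears the (lowered) threshold, the bridge joins them into one component and one box.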
In particular, the hull number detection network trains its model using synthetic images with character-level annotations and real images with word-level annotations.
Using CRAFT to detect hull numbers on the SR ship image requires little modification beyond lowering the affinity threshold between characters: because the gap between the numeric and alphabetic parts of a hull number is somewhat larger, a smaller threshold allows all detected characters to be enclosed in one word-level bounding box, as shown in fig. 4.
The experimental ship image dataset was collected from the Internet, with an average size of about 500 × 800 pixels. Experiments ran under Ubuntu on the following hardware: CPU Intel Core i9-9900K, GPU RTX 2080 Ti, RAM 64 GB. Since the original images are large and SR image quality decreases as the scale factor increases, the SR ratio was set to 2 in our experiments. The per-grid SR time at different k values was measured: changing k from 1 to 2 or from 2 to 3 reduced the time greatly, while for k greater than 3 the time was essentially unchanged, so k was set to 3. The text detection model was trained with CRAFT on the SynthText, ICDAR2013, and ICDAR2017 datasets. Comparing the detection results of CRAFT and SRHND-Net shows that performing hull number detection on the SR ship image makes detection more complete, reduces detection error, and improves detection precision.
The above embodiment is an implementation manner of the method of the present invention, but the implementation manner of the present invention is not limited by the above embodiment, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be regarded as equivalent replacements within the protection scope of the present invention.

Claims (5)

1. A hull number detection method based on super-resolution images, characterized by comprising the following steps:
step 1, inputting an original ship image into an SR-Net model to generate a super-resolution ship image, where SR-Net is a single-image super-resolution network (SISR-Net) based on the ZSSR method;
step 2, inputting the super-resolution ship image into a hull number detection network and outputting the hull number detection result;
in step 1, when the super-resolution ship image is generated, the original ship image is divided into k × k grids, the grids are super-resolved in parallel using the ZSSR method, and the newly generated super-resolution grids are then assembled into the super-resolution image in their original order;
the hull number detection network uses the CRAFT method, which encodes the probability of each character center with a Gaussian heat map and then learns the region score and the affinity score from this heat-map representation.
2. The hull number detection method according to claim 1, wherein, to generate the ground-truth region score and affinity score on a synthetic image, the hull number detection network follows three steps: first, prepare a two-dimensional isotropic Gaussian map; second, compute the perspective transform between the Gaussian map region and each character box; third, warp the Gaussian map onto the character box area.
3. The hull number detection method according to claim 1, wherein the hull number detection network generates character bounding boxes from word-level labels, and, to reflect the reliability of the interim model's predictions, the value of the confidence map over each word box is proportional to the number of detected characters divided by the number of ground-truth characters; this confidence map weights the learning during training.
4. The hull number detection method according to claim 2 or 3, wherein the post-processing by which the hull number detection network obtains word-level bounding boxes has three steps: first, initialize a binary map M covering the image; second, label the connected components of M; third, obtain each bounding box by finding the rotated rectangle of minimum area enclosing the connected component of each label.
5. The hull number detection method according to claim 4, wherein the hull number detection network trains its model using synthetic images with character-level annotations and real images with word-level annotations.
CN202011295022.7A 2020-11-18 2020-11-18 Super-resolution image-based hull number detection method Pending CN112288737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011295022.7A CN112288737A (en) 2020-11-18 2020-11-18 Super-resolution image-based hull number detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011295022.7A CN112288737A (en) 2020-11-18 2020-11-18 Super-resolution image-based hull number detection method

Publications (1)

Publication Number Publication Date
CN112288737A true CN112288737A (en) 2021-01-29

Family

ID=74397974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011295022.7A Pending CN112288737A (en) 2020-11-18 2020-11-18 Super-resolution image-based hull number detection method

Country Status (1)

Country Link
CN (1) CN112288737A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8938118B1 (en) * 2012-12-12 2015-01-20 Rajiv Jain Method of neighbor embedding for OCR enhancement
CN105069825A (en) * 2015-08-14 2015-11-18 厦门大学 Image super resolution reconstruction method based on deep belief network
CN109583451A (en) * 2018-11-28 2019-04-05 上海鹰觉科技有限公司 Automatic identifying method and system based on warship ship side number
CN110415176A (en) * 2019-08-09 2019-11-05 北京大学深圳研究生院 A kind of text image super-resolution method
CN111461134A (en) * 2020-05-18 2020-07-28 南京大学 Low-resolution license plate recognition method based on generation countermeasure network
CN111832556A (en) * 2020-06-04 2020-10-27 National *** South China Sea Survey Technology Center (National *** South China Sea Buoy Center) Ship board character accurate detection method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ASSAF SHOCHER ET AL.: "Zero-Shot Super-Resolution Using Deep Internal Learning", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
BAEK Y ET AL.: "Character Region Awareness for Text Detection", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
NI HAO ET AL.: "Fast Single-Image Super-Resolution Reconstruction Based on Image Block Classification", Application of Electronic Technique *

Similar Documents

Publication Publication Date Title
CN111723585B (en) Style-controllable image text real-time translation and conversion method
CN110738207B (en) Character detection method for fusing character area edge information in character image
Mahmoud Recognition of writer-independent off-line handwritten Arabic (Indian) numerals using hidden Markov models
CN107729865A (en) A kind of handwritten form mathematical formulae identified off-line method and system
CN112580507B (en) Deep learning text character detection method based on image moment correction
CN112070658A (en) Chinese character font style migration method based on deep learning
CN111523622B (en) Method for simulating handwriting by mechanical arm based on characteristic image self-learning
CN111191649A (en) Method and equipment for identifying bent multi-line text image
CN112069900A (en) Bill character recognition method and system based on convolutional neural network
CN112800955A (en) Remote sensing image rotating target detection method and system based on weighted bidirectional feature pyramid
CN113421318B (en) Font style migration method and system based on multitask generation countermeasure network
CN113177503A (en) Arbitrary orientation target twelve parameter detection method based on YOLOV5
Gomez et al. Selective style transfer for text
CN116311310A (en) Universal form identification method and device combining semantic segmentation and sequence prediction
CN114882204A (en) Automatic ship name recognition method
CN110852102B (en) Chinese part-of-speech tagging method and device, storage medium and electronic equipment
Raj et al. Grantha script recognition from ancient palm leaves using histogram of orientation shape context
CN112288737A (en) Super-resolution image-based hull number detection method
Singh et al. A comprehensive survey on Bangla handwritten numeral recognition
Gao et al. Recurrent calibration network for irregular text recognition
Li et al. Generative character inpainting guided by structural information
Assabie et al. Hmm-based handwritten amharic word recognition with feature concatenation
CN113420760A (en) Handwritten Mongolian detection and identification method based on segmentation and deformation LSTM
Zheng et al. A New Strategy for Improving the Accuracy in Scene Text Recognition
Zulkarnain et al. bbocr: An open-source multi-domain ocr pipeline for bengali documents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210129