CN108257175A - A vision-based underwater docking system - Google Patents

A vision-based underwater docking system Download PDF

Info

Publication number
CN108257175A
CN108257175A (application CN201810077728.2A)
Authority
CN
China
Prior art keywords: docking, layer, module, underwater, convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810077728.2A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201810077728.2A priority Critical patent/CN108257175A/en
Publication of CN108257175A publication Critical patent/CN108257175A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a vision-based underwater docking system whose main contents include an underwater vehicle, docking-station position estimation, and docking pose estimation. A multifunctional set of modules is assembled inside a torpedo-shaped tube body; a monocular camera is mounted at the front of the tube, and a central processing unit and storage unit are built into it, which receive the recorded images and train a docking convolutional neural network on them. Under the constraint of a target loss function, the docking position of the underwater vehicle relative to the station is estimated; then, from the two-dimensional image coordinates, a mapping-matrix operation recovers the current position and orientation of the vehicle, so that it can accurately dock with the physical interface of the underwater station. The invention enables an unmanned underwater vehicle to dock with the physical interface of a platform even under poor underwater visibility, provides a deep convolutional network that is trained to recognize image coordinates, and improves the operating accuracy of the docking system.

Description

A vision-based underwater docking system
Technical field
The present invention relates to the field of computer vision, and in particular to a vision-based underwater docking system.
Background technology
The ocean is rich in minerals, living resources, and energy, and is an important source of wealth for human society. With the advance of science, humanity keeps extending its exploration and study of the ocean into the deep sea and open sea, and underwater vehicles, as the sole means by which humans reach the deep and open sea, play an irreplaceable role. The untethered autonomous underwater vehicle, one class of underwater vehicle, is an important underwater operation carrier, widely used in fields such as marine science and technology research, seabed surveying, and oilfield exploration. When working underwater, the vehicle is powered by onboard rechargeable batteries, fuel cells, closed-cycle diesel engines, and the like; because these energy stores are limited, the vehicle must be recovered periodically to replenish energy, read out data, and undergo maintenance. Successful operation of underwater vehicles provides an effective experimental platform for research on the exploration and exploitation of deep-sea energy, and supports large-area scientific surveys of the marine environment, continental-shelf morphology, ocean geography, geology, biology, and minerals. Carrying suitable operation modules, the vehicle can serve underwater work and control centers, perform experimental deep-sea resource mining, operate various submersibles for undersea equipment repair, and carry out salvage and marine archaeology. Computer vision, thanks to the richness of the information it provides, has been widely applied on land, but its underwater application remains a challenging research topic: underwater images are blurred, lack texture, and are unevenly illuminated, while the docking device itself must also handle pose adjustment, buffering, locking, and other functions during docking, all of which makes accurate docking difficult.
The present invention therefore proposes a vision-based underwater docking system. A multifunctional set of modules is assembled inside a torpedo-shaped tube body, a monocular camera is mounted at the front of the tube, and a central processing unit and storage unit are built into the tube for receiving the recorded images and training a docking convolutional neural network on them. Under the constraint of a target loss function, the docking position of the underwater vehicle relative to the station is estimated; then, from the two-dimensional image coordinates, a mapping-matrix operation recovers the current position and orientation of the vehicle so that it can accurately dock with the physical interface of the underwater station. The invention enables an unmanned underwater vehicle to dock with a platform's physical interface even under poor visibility, provides a deep convolutional network trained to recognize image coordinates, and improves the operating accuracy of the docking system.
Invention content
To solve the problem of docking a vehicle with a platform during underwater operation, the object of the present invention is to provide a vision-based underwater docking system. A multifunctional set of modules is assembled inside a torpedo-shaped tube body, a monocular camera is mounted at the front of the tube, and a central processing unit and storage unit are built into the tube for receiving the recorded images and training a docking convolutional neural network on them; under the constraint of a target loss function, the docking position of the underwater vehicle relative to the station is estimated, and from the two-dimensional image coordinates a mapping-matrix operation recovers the current position and orientation of the vehicle so that it can accurately dock with the physical interface of the underwater station.
To this end, the present invention provides a vision-based underwater docking system whose main contents include:
(1) an underwater vehicle;
(2) docking-station position estimation;
(3) docking pose estimation.
The underwater vehicle comprises function modules and an assembly scheme.
The function modules mainly include: 1) a recording module: a color monocular camera mounted at the front, with the frame rate set to 20 frames per second; 2) a CPU module: an embedded 64-bit processor with 8 GB of memory, used to establish communication between the vehicle and the controller; 3) a lighting module: several blue LED lights mounted at the front and rear of the tube body; 4) other function modules: a battery pack, an inertial measurement unit, and a control unit.
In the assembly scheme, the function modules are placed in sequence into the torpedo-shaped tube body according to their physical space requirements.
For docking-station position estimation, the underwater vehicle continuously records two-dimensional images through the recording module and feeds them to the docking neural network; the image sequence is used for training, and a target loss function is designed so that the network learns the features represented by the images, yielding the final predicted docking-station position.
The docking neural network consists of an input layer, several convolution modules, fully connected layers, and an output layer, specifically:
1) input layer: receives the two-dimensional image input, divides the image into a G × G grid, and passes it to the convolution modules, which then produce B candidate bounding boxes indicating position and size;
2) convolution modules: there are 7 convolution modules in total; each of the 1st to 6th modules contains, in order, 1 convolutional layer, 1 activation-function layer, and 1 pooling layer, while the 7th module contains 3 convolutional layers, 1 activation-function layer, and 1 pooling layer; in all modules the activation function is the rectified linear unit and the pooling layers use max pooling with stride 2;
3) convolutional layers: the 1st to 7th convolution modules contain the 1st to 9th convolutional layers in order; all convolution kernels are 3 × 3, and the numbers of output feature maps are 16, 32, 64, 128, 256, 512, 1024, 1024, and 1024 in sequence;
4) fully connected layers: there are 3 fully connected layers, with 256, 4096, and G × G × B × 5 neurons respectively;
5) output layer: the last fully connected layer serves as the output layer and produces the prediction.
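The layer counts above can be sanity-checked with a short sketch. The input resolution (224 × 224) and the grid parameters G = 7, B = 2 are illustrative assumptions; the patent does not fix these values.

```python
# Shape check for the docking network described above (sizes only).
CONV_CHANNELS = [16, 32, 64, 128, 256, 512, 1024, 1024, 1024]  # 9 conv layers
LAYERS_PER_MODULE = [1, 1, 1, 1, 1, 1, 3]  # modules 1-6 have one conv, module 7 has three

def feature_map_size(input_size: int, num_modules: int = 7) -> int:
    """Each module ends in a stride-2 max pool; the 3x3 convs are assumed
    to keep the spatial size (padding 1, which the patent leaves unstated)."""
    size = input_size
    for _ in range(num_modules):
        size //= 2
    return size

G, B = 7, 2
fc_sizes = [256, 4096, G * G * B * 5]  # fully connected layers as listed

print(sum(LAYERS_PER_MODULE), feature_map_size(224), fc_sizes[-1])  # -> 9 1 490
```

With these assumptions, seven stride-2 pools reduce a 224-pixel side to 1, and the final fully connected layer emits G × G × B × 5 = 490 values: five numbers (x, y, w, h, confidence) per candidate box.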
During each training iteration, the target loss function continually reduces the difference between the predicted value and the actual value of each grid cell. This is realized through the target loss function

$$L(\theta)=\lambda_b\,l_b(\theta)+l_d(\theta)+\lambda_d\,l_{\hat d}(\theta) \qquad (1)$$

where $l_b(\theta)$ is a penalty term that pushes the predicted bounding box $(\hat x_{i,b},\hat y_{i,b},\hat w_{i,b},\hat h_{i,b})$ into agreement with the actual bounding box $(x_{i,b},y_{i,b},w_{i,b},h_{i,b})$:

$$l_b(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{obj}}\left[(x_{i,b}-\hat x_{i,b})^2+(y_{i,b}-\hat y_{i,b})^2+\left(\sqrt{w_{i,b}}-\sqrt{\hat w_{i,b}}\right)^2+\left(\sqrt{h_{i,b}}-\sqrt{\hat h_{i,b}}\right)^2\right] \qquad (2)$$

In formula (2), $x_{i,b}$ and $y_{i,b}$ denote the two-dimensional center coordinates of the bounding box, and $w_{i,b}$ and $h_{i,b}$ its width and height.
In addition, $l_d(\theta)$ and $l_{\hat d}(\theta)$ are penalty terms that measure whether the confidence contains docking-station position information, namely

$$l_d(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{obj}}\left(S_{i,b}-\hat S_{i,b}\right)^2,\qquad l_{\hat d}(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{noobj}}\left(S_{i,b}-\hat S_{i,b}\right)^2 \qquad (3)$$

where $S_{i,b}$ denotes the confidence and $\mathbb{1}_{i,b}^{\mathrm{obj}}$ indicates that bounding box $b$ of cell $i$ is responsible for the docking station.
In addition, the weight coefficients are $\lambda_b=5$ and $\lambda_d=0.5$.
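A minimal numpy sketch of this loss follows. The array shapes mirror the G × G grid and B boxes; the concrete numbers are illustrative, and the weight λ_b = 5 is an assumption consistent with YOLO-style losses (the patent text itself only fixes λ_d = 0.5).

```python
import numpy as np

def docking_loss(pred_box, true_box, pred_conf, true_conf, obj_mask,
                 lam_b=5.0, lam_d=0.5):
    """pred_box, true_box: (G*G, B, 4) arrays of (x, y, w, h);
    pred_conf, true_conf, obj_mask: (G*G, B) arrays (obj_mask is 0/1)."""
    m = obj_mask[..., None]
    # l_b: squared error on box centers, and on sqrt of width/height.
    l_b = np.sum(m * (true_box[..., :2] - pred_box[..., :2]) ** 2) \
        + np.sum(m * (np.sqrt(true_box[..., 2:]) - np.sqrt(pred_box[..., 2:])) ** 2)
    # l_d / l_d-hat: confidence error in cells with / without the station.
    l_d = np.sum(obj_mask * (true_conf - pred_conf) ** 2)
    l_dhat = np.sum((1 - obj_mask) * (true_conf - pred_conf) ** 2)
    return lam_b * l_b + l_d + lam_d * l_dhat

# Toy example: 2 cells, 1 box each; the station sits in cell 0.
true_box = np.array([[[0.6, 0.5, 0.25, 0.25]], [[0.0, 0.0, 0.04, 0.04]]])
pred_box = np.array([[[0.5, 0.5, 0.25, 0.25]], [[0.0, 0.0, 0.04, 0.04]]])
true_conf = np.array([[1.0], [0.0]])
pred_conf = np.array([[0.8], [0.2]])
obj_mask = np.array([[1.0], [0.0]])
loss = docking_loss(pred_box, true_box, pred_conf, true_conf, obj_mask)
print(round(loss, 4))  # -> 0.11
```

Here the only box error is a 0.1 shift in x (contributing 5 × 0.01), plus 0.04 from the object confidence and 0.5 × 0.04 from the no-object confidence.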
The final predicted value, i.e. the docking-station position, is characterized by the grid cell of the selected bounding box; the finally predicted cell is

$$g^{*}=\arg\max_{i\in\{1,\dots,G\times G\}}\;\max_{b\in\{1,\dots,B\}}S_{i,b}$$

where $S_{i,b}$ is the confidence predicted for bounding box $b$ of grid cell $i$.
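This read-out step can be sketched directly: among the confidences S[i, b] over the grid cells and their candidate boxes, the predicted station cell is the one whose best box is most confident. G, B and the confidence values below are illustrative assumptions.

```python
import numpy as np

def predicted_cell(S: np.ndarray) -> int:
    """S has shape (G*G, B); return the index i maximizing max_b S[i, b]."""
    return int(np.argmax(S.max(axis=1)))

G, B = 3, 2
S = np.full((G * G, B), 0.1)   # uniform low confidence everywhere...
S[4, 1] = 0.9                  # ...except the center cell's second box
print(predicted_cell(S))  # -> 4
```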
For docking pose estimation, the three-dimensional position and orientation of the underwater vehicle are estimated from the obtained two-dimensional image by a matrix operation, so that the vehicle can dock with the physical interface of the underwater station.
In the matrix operation, the two-dimensional image coordinates are expressed through the camera's three-dimensional coordinates. The transformation is

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=K\begin{bmatrix}X\\Y\\Z\end{bmatrix},\qquad K=\begin{bmatrix}\alpha&\gamma&u_0\\0&\beta&v_0\\0&0&1\end{bmatrix}$$

where $(u,v)$ is the point to be measured in the two-dimensional image, $\gamma$ is the skew (inclination) factor, $(X,Y,Z)$ is the three-dimensional position in the camera frame, $(u_0,v_0)$ is the two-dimensional image coordinate of the principal point, $\alpha$ and $\beta$ are the transformation coefficients that scale the transformed space to pixel units, and $K$ is the camera's intrinsic coefficient matrix.
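The forward direction of this pinhole mapping can be sketched as follows. The intrinsic values (α, β, skew γ, principal point u0, v0) are illustrative assumptions, not calibration data from the patent.

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # alpha, gamma, u0
              [  0.0, 800.0, 240.0],   # beta, v0
              [  0.0,   0.0,   1.0]])

def project(point_3d: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project a 3-D point in the camera frame to 2-D pixel coordinates."""
    uvw = K @ point_3d          # homogeneous image coordinates (s*u, s*v, s)
    return uvw[:2] / uvw[2]     # divide by the scale s = Z

p = project(np.array([0.1, -0.05, 2.0]), K)
print(p)  # -> [360. 220.]
```

A point 10 cm right and 5 cm above the optical axis at 2 m depth lands at pixel (360, 220), i.e. 40 px right of and 20 px above the assumed principal point.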
Description of the drawings
Fig. 1 is the system flow chart of the vision-based underwater docking system of the present invention.
Fig. 2 is the structure chart of the docking convolutional neural network of the vision-based underwater docking system of the present invention.
Fig. 3 is a schematic diagram of the training process of the vision-based underwater docking system of the present invention.
Specific embodiment
It should be noted that, where no conflict arises, the embodiments of this application and the features within them may be combined with one another. The present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the system flow chart of the vision-based underwater docking system of the present invention. The system mainly comprises the underwater vehicle, docking-station position estimation, and docking pose estimation.
The underwater vehicle comprises function modules and an assembly scheme.
The function modules mainly include: 1) a recording module: a color monocular camera mounted at the front, with the frame rate set to 20 frames per second; 2) a CPU module: an embedded 64-bit processor with 8 GB of memory, used to establish communication between the vehicle and the controller; 3) a lighting module: several blue LED lights mounted at the front and rear of the tube body; 4) other function modules: a battery pack, an inertial measurement unit, and a control unit.
In the assembly scheme, the function modules are placed in sequence into the torpedo-shaped tube body according to their physical space requirements.
For docking-station position estimation, the underwater vehicle continuously records two-dimensional images through the recording module and feeds them to the docking neural network; the image sequence is used for training, and a target loss function is designed so that the network learns the features represented by the images, yielding the final predicted docking-station position.
The docking neural network consists of an input layer, several convolution modules, fully connected layers, and an output layer.
During each training iteration, the target loss function continually reduces the difference between the predicted value and the actual value of each grid cell. This is realized through the target loss function

$$L(\theta)=\lambda_b\,l_b(\theta)+l_d(\theta)+\lambda_d\,l_{\hat d}(\theta) \qquad (1)$$

where $l_b(\theta)$ is a penalty term that pushes the predicted bounding box $(\hat x_{i,b},\hat y_{i,b},\hat w_{i,b},\hat h_{i,b})$ into agreement with the actual bounding box $(x_{i,b},y_{i,b},w_{i,b},h_{i,b})$:

$$l_b(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{obj}}\left[(x_{i,b}-\hat x_{i,b})^2+(y_{i,b}-\hat y_{i,b})^2+\left(\sqrt{w_{i,b}}-\sqrt{\hat w_{i,b}}\right)^2+\left(\sqrt{h_{i,b}}-\sqrt{\hat h_{i,b}}\right)^2\right] \qquad (2)$$

In formula (2), $x_{i,b}$ and $y_{i,b}$ denote the two-dimensional center coordinates of the bounding box, and $w_{i,b}$ and $h_{i,b}$ its width and height.
In addition, $l_d(\theta)$ and $l_{\hat d}(\theta)$ are penalty terms that measure whether the confidence contains docking-station position information, namely

$$l_d(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{obj}}\left(S_{i,b}-\hat S_{i,b}\right)^2,\qquad l_{\hat d}(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{noobj}}\left(S_{i,b}-\hat S_{i,b}\right)^2 \qquad (3)$$

where $S_{i,b}$ denotes the confidence.
In addition, the weight coefficients are $\lambda_b=5$ and $\lambda_d=0.5$.
The final predicted value, i.e. the docking-station position, is characterized by the grid cell of the selected bounding box; the finally predicted cell is

$$g^{*}=\arg\max_{i\in\{1,\dots,G\times G\}}\;\max_{b\in\{1,\dots,B\}}S_{i,b}$$

where $S_{i,b}$ is the confidence predicted for bounding box $b$ of grid cell $i$.
For docking pose estimation, the three-dimensional position and orientation of the underwater vehicle are estimated from the obtained two-dimensional image by a matrix operation, so that the vehicle can dock with the physical interface of the underwater station.
The matrix operation expresses the two-dimensional image coordinates through the camera's three-dimensional coordinates; the mapping is

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=K\begin{bmatrix}X\\Y\\Z\end{bmatrix},\qquad K=\begin{bmatrix}\alpha&\gamma&u_0\\0&\beta&v_0\\0&0&1\end{bmatrix}$$

where $(u,v)$ is the point to be measured in the two-dimensional image, $\gamma$ is the skew (inclination) factor, $(X,Y,Z)$ is the three-dimensional position in the camera frame, $(u_0,v_0)$ is the two-dimensional image coordinate of the principal point, $\alpha$ and $\beta$ are the transformation coefficients that scale the transformed space to pixel units, and $K$ is the camera's intrinsic coefficient matrix.
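For pose estimation the mapping must be run in reverse: given a measured pixel and the scale s (the depth along the optical axis), the three-dimensional camera-frame position follows. A minimal sketch, with illustrative intrinsic values that are assumptions rather than calibration data from the patent:

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def back_project(u: float, v: float, s: float, K: np.ndarray) -> np.ndarray:
    """Invert s*[u, v, 1]^T = K*[X, Y, Z]^T for the 3-D point."""
    return s * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

xyz = back_project(360.0, 220.0, 2.0, K)
print(xyz)  # -> [ 0.1  -0.05  2.  ]
```

Note the scale s is not observable from a single pixel alone; in practice it would come from the known geometry of the docking station or an additional range measurement.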
Fig. 2 is the structure chart of the docking convolutional neural network of the vision-based underwater docking system of the present invention. The network consists of an input layer, several convolution modules, fully connected layers, and an output layer, specifically:
1) input layer: receives the two-dimensional image input, divides the image into a G × G grid, and passes it to the convolution modules, which then produce B candidate bounding boxes indicating position and size;
2) convolution modules: there are 7 convolution modules in total; each of the 1st to 6th modules contains, in order, 1 convolutional layer, 1 activation-function layer, and 1 pooling layer, while the 7th module contains 3 convolutional layers, 1 activation-function layer, and 1 pooling layer; in all modules the activation function is the rectified linear unit and the pooling layers use max pooling with stride 2;
3) convolutional layers: the 1st to 7th convolution modules contain the 1st to 9th convolutional layers in order; all convolution kernels are 3 × 3, and the numbers of output feature maps are 16, 32, 64, 128, 256, 512, 1024, 1024, and 1024 in sequence;
4) fully connected layers: there are 3 fully connected layers, with 256, 4096, and G × G × B × 5 neurons respectively;
5) output layer: the last fully connected layer serves as the output layer and produces the prediction.
Fig. 3 is a schematic diagram of the training process of the vision-based underwater docking system of the present invention. As shown, given an input image, i.e. the image currently recorded underwater by the camera, the docking convolutional neural network is trained to progressively determine the grid cell in which the docking station lies, and thus its position coordinates, so that physical-interface docking can be carried out accurately.
For those skilled in the art, the present invention is not limited to the details of the above embodiments, and can be realized in other specific forms without departing from the spirit or scope of the invention. Moreover, those skilled in the art may make various modifications and variations to the invention without departing from its spirit and scope, and these improvements and modifications shall likewise be regarded as falling within the protection scope of the invention. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the invention.

Claims (10)

1. A vision-based underwater docking system, characterized in that it mainly comprises an underwater vehicle (1); docking-station position estimation (2); and docking pose estimation (3).
2. The underwater vehicle (1) according to claim 1, characterized in that it comprises function modules and an assembly scheme.
3. The function modules according to claim 2, characterized in that they mainly include: 1) a recording module: a color monocular camera mounted at the front, with the frame rate set to 20 frames per second; 2) a CPU module: an embedded 64-bit processor with 8 GB of memory, used to establish communication between the vehicle and the controller; 3) a lighting module: several blue LED lights mounted at the front and rear of the tube body; 4) other function modules: a battery pack, an inertial measurement unit, and a control unit.
4. The assembly scheme according to claim 2, characterized in that the function modules are placed in sequence into the torpedo-shaped tube body according to their physical space requirements.
5. The docking-station position estimation (2) according to claim 1, characterized in that the underwater vehicle continuously records two-dimensional images through the recording module and feeds them to the docking neural network; the image sequence is used for training, and a target loss function is designed so that the network learns the features represented by the images, yielding the final predicted docking-station position.
6. The docking neural network according to claim 5, characterized in that it consists of an input layer, several convolution modules, fully connected layers, and an output layer, specifically:
1) input layer: receives the two-dimensional image input, divides the image into a G × G grid, and passes it to the convolution modules, which then produce B candidate bounding boxes indicating position and size;
2) convolution modules: there are 7 convolution modules in total; each of the 1st to 6th modules contains, in order, 1 convolutional layer, 1 activation-function layer, and 1 pooling layer, while the 7th module contains 3 convolutional layers, 1 activation-function layer, and 1 pooling layer; in all modules the activation function is the rectified linear unit and the pooling layers use max pooling with stride 2;
3) convolutional layers: the 1st to 7th convolution modules contain the 1st to 9th convolutional layers in order; all convolution kernels are 3 × 3, and the numbers of output feature maps are 16, 32, 64, 128, 256, 512, 1024, 1024, and 1024 in sequence;
4) fully connected layers: there are 3 fully connected layers, with 256, 4096, and G × G × B × 5 neurons respectively;
5) output layer: the last fully connected layer serves as the output layer and produces the prediction.
7. The target loss function according to claim 5, characterized in that during each training iteration it continually reduces the difference between the predicted value and the actual value of each grid cell, realized through the target loss function

$$L(\theta)=\lambda_b\,l_b(\theta)+l_d(\theta)+\lambda_d\,l_{\hat d}(\theta) \qquad (1)$$

where $l_b(\theta)$ is a penalty term that pushes the predicted bounding box $(\hat x_{i,b},\hat y_{i,b},\hat w_{i,b},\hat h_{i,b})$ into agreement with the actual bounding box $(x_{i,b},y_{i,b},w_{i,b},h_{i,b})$:

$$l_b(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{obj}}\left[(x_{i,b}-\hat x_{i,b})^2+(y_{i,b}-\hat y_{i,b})^2+\left(\sqrt{w_{i,b}}-\sqrt{\hat w_{i,b}}\right)^2+\left(\sqrt{h_{i,b}}-\sqrt{\hat h_{i,b}}\right)^2\right] \qquad (2)$$

In formula (2), $x_{i,b}$ and $y_{i,b}$ denote the two-dimensional center coordinates of the bounding box, and $w_{i,b}$ and $h_{i,b}$ its width and height.
In addition, $l_d(\theta)$ and $l_{\hat d}(\theta)$ are penalty terms that measure whether the confidence contains docking-station position information, namely

$$l_d(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{obj}}\left(S_{i,b}-\hat S_{i,b}\right)^2,\qquad l_{\hat d}(\theta)=\sum_{i=1}^{G\times G}\sum_{b=1}^{B}\mathbb{1}_{i,b}^{\mathrm{noobj}}\left(S_{i,b}-\hat S_{i,b}\right)^2 \qquad (3)$$

where $S_{i,b}$ denotes the confidence.
In addition, the weight coefficients are $\lambda_b=5$ and $\lambda_d=0.5$.
8. The final predicted value according to claim 5, characterized in that the docking-station position is characterized by the grid cell of the selected bounding box, the finally predicted cell being

$$g^{*}=\arg\max_{i\in\{1,\dots,G\times G\}}\;\max_{b\in\{1,\dots,B\}}S_{i,b}$$

where $S_{i,b}$ is the confidence predicted for bounding box $b$ of grid cell $i$.
9. The docking pose estimation (3) according to claim 1, characterized in that the three-dimensional position and orientation of the underwater vehicle are estimated from the obtained two-dimensional image by a matrix operation, so that the vehicle can dock with the physical interface of the underwater station.
10. The matrix operation according to claim 9, characterized in that the two-dimensional image coordinates are expressed through the camera's three-dimensional coordinates, the transformation being

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=K\begin{bmatrix}X\\Y\\Z\end{bmatrix},\qquad K=\begin{bmatrix}\alpha&\gamma&u_0\\0&\beta&v_0\\0&0&1\end{bmatrix}$$

where $(u,v)$ is the point to be measured in the two-dimensional image, $\gamma$ is the skew (inclination) factor, $(X,Y,Z)$ is the three-dimensional position in the camera frame, $(u_0,v_0)$ is the two-dimensional image coordinate of the principal point, $\alpha$ and $\beta$ are the transformation coefficients that scale the transformed space to pixel units, and $K$ is the camera's intrinsic coefficient matrix.
CN201810077728.2A 2018-01-26 2018-01-26 A vision-based underwater docking system Withdrawn CN108257175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810077728.2A CN108257175A (en) 2018-01-26 2018-01-26 A vision-based underwater docking system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810077728.2A CN108257175A (en) 2018-01-26 2018-01-26 A vision-based underwater docking system

Publications (1)

Publication Number Publication Date
CN108257175A true CN108257175A (en) 2018-07-06

Family

ID=62742508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810077728.2A Withdrawn CN108257175A (en) A vision-based underwater docking system

Country Status (1)

Country Link
CN (1) CN108257175A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140224167A1 (en) * 2011-05-17 2014-08-14 Eni S.P.A. Autonomous underwater system for a 4d environmental monitoring
CN204775914U (en) * 2015-06-29 2015-11-18 青岛市光电工程技术研究院 Can be from underwater mating of master -control gesture platform
CN106314732A (en) * 2016-10-14 2017-01-11 中国船舶科学研究中心(中国船舶重工集团公司第七0二研究所) Underwater docking, recycling and laying device for AUV


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHUANG LIU ET AL.: "A vision based system for underwater docking", arXiv:1712.04138v1 [cs.CV] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117590867A (en) * 2024-01-18 2024-02-23 吉林大学 Underwater autonomous vehicle connection control method and system based on deep reinforcement learning
CN117590867B (en) * 2024-01-18 2024-03-26 吉林大学 Underwater autonomous vehicle connection control method and system based on deep reinforcement learning

Similar Documents

Publication Publication Date Title
Vedachalam et al. Autonomous underwater vehicles-challenging developments and technological maturity towards strategic swarm robotics systems
Ryuh et al. A school of robotic fish for mariculture monitoring in the sea coast
Drap et al. Photogrammetry for virtual exploration of underwater archeological sites
Gracias et al. Mapping the Moon: Using a lightweight AUV to survey the site of the 17th century ship ‘La Lune’
Bruno et al. Development and integration of digital technologies addressed to raise awareness and access to European underwater cultural heritage. An overview of the H2020 i-MARECULTURE project
Bruno et al. Virtual and augmented reality tools to improve the exploitation of underwater archaeological sites by diver and non-diver tourists
CN110053743A (en) A kind of remote-controlled robot for accurately measuring under water
Maki et al. Volumetric mapping of tubeworm colonies in Kagoshima Bay through autonomous robotic surveys
CN110095120A (en) Biology of the Autonomous Underwater aircraft under ocean circulation inspires Self-organizing Maps paths planning method
Roman et al. Lagrangian floats as sea floor imaging platforms
Nocerino et al. 3D virtualization of an underground semi-submerged cave system
CN108803659A (en) The heuristic three-dimensional path planing method of multiwindow based on magic square model
Bruno et al. Enhancing learning and access to Underwater Cultural Heritage through digital technologies: The case study of the “Cala Minnola” shipwreck site
Jin et al. Hovering control of UUV through underwater object detection based on deep learning
CN114692520B (en) Multi-scene-oriented unmanned ship virtual simulation test platform and test method
CN210235294U (en) Bionic flexible wire-driven manta ray based on marine ranch underwater environment detection
CN206400846U (en) A kind of navigation teaching analogue means
CN108257175A (en) A kind of underwater mating system of view-based access control model control
González et al. AUV based multi-vehicle collaboration: Salinity studies in Mar Menor Coastal lagoon
CN107703509B (en) System and method for selecting optimal fishing point by detecting fish shoal through sonar
Brown et al. An overview of autonomous underwater vehicle research and testbed at PeRL
Shah Design considerations for engineering autonomous underwater vehicles
Nedelcu et al. A survey of autonomous vehicles in scientific applications
CN105931268A (en) Mean shift tracking method based on scale adaption in UUV underwater recovery process
CN206579824U (en) The autonomous navigation unit by water of wind light mutual complementing supply-type

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180706