CN114549569B - House contour extraction method based on two-dimensional orthographic image - Google Patents

House contour extraction method based on two-dimensional orthographic image

Info

Publication number
CN114549569B
Authority
CN
China
Prior art keywords
house
image
contour
area
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210198442.6A
Other languages
Chinese (zh)
Other versions
CN114549569A (en)
Inventor
何维
刘昊
高毓欣
谭可成
刘承照
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PowerChina Zhongnan Engineering Corp Ltd
Original Assignee
PowerChina Zhongnan Engineering Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PowerChina Zhongnan Engineering Corp Ltd filed Critical PowerChina Zhongnan Engineering Corp Ltd
Priority to CN202210198442.6A priority Critical patent/CN114549569B/en
Publication of CN114549569A publication Critical patent/CN114549569A/en
Application granted granted Critical
Publication of CN114549569B publication Critical patent/CN114549569B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/043Architecture, e.g. interconnection topology based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30184Infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Analysis (AREA)
  • Fuzzy Systems (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Mathematical Optimization (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a house contour extraction method based on two-dimensional orthographic images. The method first obtains an original image of the area whose house contours are to be extracted and, by image traversal, obtains sub-images covering all areas of the original image. The sub-images are loaded one by one into a house detection model to obtain a rectangular image of each house in each sub-image, and each rectangular house image is input into a fuzzy contour extraction network model to obtain a fuzzy contour gray map of each image. The pixels in the central one-sixteenth rectangular area of each fuzzy contour gray map are set to zero, a binary image of the house area is obtained by the water injection filling method, the contour is enlarged by a certain number of pixels using the equidistant enlargement method, the final rectangular contour of the house is obtained by minimum rectangle fitting, and finally the pixel area of the contour obtained in step 8 is calculated. Accurate extraction of house contours is thereby realized and the accuracy of house area statistics is improved.

Description

House contour extraction method based on two-dimensional orthographic image
Technical Field
The invention relates to the technical field of house contour extraction and image processing, in particular to a house contour extraction method based on two-dimensional orthographic images.
Background
In the investigation stage of resettlement projects for non-urban land acquisition, which involve wide areas and many houses, decision makers find it difficult to make correct decisions, budgets and investments on land-acquisition compensation and related matters when the total number and total area of the houses in the affected area are unclear; wrong decisions are easily made, delaying the project or wasting large amounts of resources and funds. Technicians in the field therefore often extract house contours from two-dimensional orthographic images and calculate house areas from them.
The prior art mainly adopts traditional image edge detection algorithms, such as Canny edge detection and Sobel edge detection, in which the edge detector extracts all contours of the whole image, for example the remote sensing image roof contour extraction method based on building base vectors (Grant Publication No. CN107092877B).
Disclosure of Invention
The invention aims to provide a house contour extraction method based on a two-dimensional orthographic image, so as to realize accurate extraction of house contours.
Based on the above object, the present invention provides a method as follows:
a house contour extraction method based on two-dimensional orthographic images comprises the following specific steps:
step 1: acquiring an original image of the house contour area to be extracted, wherein the original image is a two-dimensional orthographic image of the area taken from a preset height;
step 2: performing image traversal on the original image of step 1 to obtain sub-images covering all areas of the original image;
step 3: loading the sub-images obtained in the step 2 into a house detection model one by one to obtain rectangular area images of each house in each sub-image;
step 4: inputting the rectangular area image of each house obtained in the step 3 into a fuzzy contour extraction network model to obtain a fuzzy contour gray level map of each image;
step 5: setting the pixels in the central one-sixteenth rectangular area of the fuzzy contour gray map obtained in step 4 to zero;
step 6: acquiring a binary image of the house area from the gray image obtained in the step 5 by adopting a water filling method;
step 7: extracting the contour of the binary image obtained in step 6, and enlarging the contour by a certain number of pixels using an equidistant enlargement method;
step 8: obtaining a final rectangular outline of the house by adopting a minimum rectangular fitting mode to the house outline obtained in the step 7;
step 9: calculating the pixel area S_pixel of the contour obtained in step 8, and calculating the real area S of the house by combining it with the actual distance d represented by one image pixel in step 1, using the formula:
S = S_pixel × d × d.
the method comprises the steps of firstly obtaining an original image of a house outline area to be extracted, obtaining sub-images covering all areas of the original image through image traversal according to the original image, loading the sub-images into a house detection model one by one to obtain rectangular images of each house in each sub-image, inputting the obtained rectangular area images of each house into a fuzzy outline extraction network model to obtain a fuzzy outline gray level image of each image, setting pixels in one sixteenth of rectangular areas in the middle of the obtained fuzzy outline gray level image to be zero, obtaining a binary image of the house area by adopting a water injection filling method, enlarging the outline by a certain pixel by adopting an equidistant amplifying method, obtaining a final rectangular outline of the house by adopting a minimum rectangular fitting mode, and finally calculating the pixel area of the outline obtained in the step 8, thereby realizing accurate extraction of the house outline and improving the statistical area of the house.
As a further mode, the image traversal in step 2 is implemented as follows: first determine the size (width, height) = (w, h) of the sub-images to be output, then obtain the size (width, height) = (W, H) of the input original image, and then calculate the sizes H_down and W_right by which the original image needs to be expanded according to the following formulas:
H_down = h − (H % h), W_right = w − (W % w),
where % is the remainder symbol. After the sizes to be expanded are obtained, H_down rows of pixels are filled at the bottom of the original image and W_right columns of pixels at its right side; then, starting from the upper-left corner of the original image, the expanded image is traversed and cut block by block from left to right and from top to bottom according to the set sub-image size, obtaining sub-images covering all areas of the original image.
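For illustration, the following Python sketch (using OpenCV, consistent with the environment described in the embodiment below) shows one way to implement this padding-and-tiling traversal; the function name, the default 2048x2048 tile size and the extra "% tile" safeguard for already-divisible sizes are assumptions, not part of the patent text.

```python
import cv2

def traverse_image(original, tile_w=2048, tile_h=2048):
    """Pad the original image at the bottom/right, then cut it into fixed-size tiles."""
    H, W = original.shape[:2]
    # Expansion sizes H_down = h - (H % h), W_right = w - (W % w); the outer
    # "% tile" only avoids adding a full tile when the size is already divisible.
    h_down = (tile_h - H % tile_h) % tile_h
    w_right = (tile_w - W % tile_w) % tile_w
    padded = cv2.copyMakeBorder(original, 0, h_down, 0, w_right,
                                cv2.BORDER_CONSTANT, value=0)
    tiles = []
    for y in range(0, padded.shape[0], tile_h):       # top to bottom
        for x in range(0, padded.shape[1], tile_w):   # left to right
            tiles.append(padded[y:y + tile_h, x:x + tile_w])
    return tiles

# Example from the embodiment: a 7431x7999 image is padded to 8192x8192,
# which yields 4 x 4 = 16 sub-images of 2048x2048.
```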
As a further mode, the house detection model in step 3 is the target detection network model Faster_RCNN from the field of deep learning. The target detection network model may be R-CNN, Fast R-CNN or Faster R-CNN, and Faster R-CNN is the preferred choice.
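For orientation only, here is a minimal sketch of this detection step, assuming torchvision's off-the-shelf Faster R-CNN as a stand-in for the trained house detection model; the score threshold, function name and the use of COCO-pretrained weights are illustrative assumptions, and in practice the model would be fine-tuned on house annotations.

```python
import torch
import torchvision
from torchvision.transforms import functional as F

# COCO-pretrained weights as a placeholder; the patent assumes a model trained on houses.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_houses(tile_bgr, score_thresh=0.7):
    """Return axis-aligned (non-rotated) rectangles [x1, y1, x2, y2] for detected houses."""
    img = F.to_tensor(tile_bgr[:, :, ::-1].copy())  # BGR (OpenCV) -> RGB -> float tensor
    with torch.no_grad():
        out = model([img])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep].int().tolist()
```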
As a further mode, the fuzzy contour extraction network model in step 4 is the HED general contour extraction network model from the field of deep learning. Holistically-Nested Edge Detection (HED) is a deep learning algorithm for edge extraction; it automatically learns rich hierarchical representations with a fully convolutional network and has two characteristics: (1) whole-image training and prediction, and (2) multi-scale, multi-level feature learning.
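One possible way to run HED is sketched below with OpenCV's DNN module and the publicly available HED Caffe weights; the file names, mean values and helper name are assumptions taken from the OpenCV edge-detection sample rather than from the patent (depending on the OpenCV version, HED's Crop layer may also need to be registered as a custom layer, as that sample does).

```python
import cv2
import numpy as np

# Publicly available HED Caffe model files (names are assumptions).
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "hed_pretrained_bsds.caffemodel")

def hed_contour_gray(house_bgr):
    """Return the fuzzy contour of a house crop as an 8-bit grayscale map."""
    h, w = house_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(house_bgr, scalefactor=1.0, size=(w, h),
                                 mean=(104.00699, 116.66877, 122.67891),
                                 swapRB=False, crop=False)
    net.setInput(blob)
    edge = net.forward()[0, 0]          # fused multi-scale edge response in [0, 1]
    return (edge * 255).astype(np.uint8)
```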
As a further mode, the water injection filling method in step 6 specifically comprises: taking the center point of the gray map obtained in step 5 as the initial anchor point and diffusing outward from the anchor point position according to the gray values of the surrounding pixels, so as to obtain a binary image of the house area. The fill spreads gradually outward from the center point in ever larger rings until it covers the house region.
As a further mode, the binary image of the house area is obtained by the following specific steps:
first, a diffusion threshold G_t is set, where G_t is smaller than the pixel value of the house contour points; then diffusion proceeds from the anchor point in eight directions: up, down, left, right, upper-left, lower-left, upper-right and lower-right. If the eight diffused pixels do not meet the contour, diffusion continues outward with these eight pixels as new anchor points; if a contour pixel is met in some diffusion direction, that is, the gray value P of that pixel is greater than G_t, diffusion in that direction stops; otherwise diffusion continues in the same way until all pixel points inside the house contour are covered, giving a binary image covering the house area.
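A minimal sketch of this diffusion, assuming an eight-connected breadth-first fill; the threshold value G_t = 128 and the function name are illustrative assumptions. OpenCV's cv2.floodFill with an upper gray-level difference would give a comparable result.

```python
from collections import deque
import numpy as np

def flood_fill_house(gray, g_t=128):
    """Grow the house region from the centre anchor until contour pixels (gray value > G_t) stop it."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    start = (h // 2, w // 2)            # centre of the gray map as initial anchor point
    mask[start] = 255
    queue = deque([start])
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1),      # up, down, left, right
             (-1, -1), (1, -1), (-1, 1), (1, 1)]    # the four diagonal directions
    while queue:
        y, x = queue.popleft()
        for dy, dx in steps:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] == 0:
                if gray[ny, nx] <= g_t:             # not a contour pixel: keep diffusing
                    mask[ny, nx] = 255
                    queue.append((ny, nx))
    return mask                                     # binary image of the house area
```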
The beneficial effects realized by the invention are as follows:
the invention realizes that pure house contours are extracted from the two-dimensional orthophoto in the ultra-large scale scene of the non-urban land-sign immigration region, the house contours are accurately extracted, and the statistical area of houses is improved.
Drawings
FIG. 1 is a house profile extraction process according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of water filling in accordance with an embodiment of the present invention;
FIG. 3 is a center pixel zeroing diagram of an embodiment of the present invention;
FIG. 4 is a schematic diagram of equidistant enlargement according to an embodiment of the invention;
FIG. 5 is a regional orthographic image of an embodiment of the present invention;
FIG. 6 is a partial sub-image of an embodiment of the present invention;
FIG. 7 is a minimum non-rotated rectangular view of houses according to an embodiment of the present invention;
FIG. 8 is a blurred profile gray scale map in accordance with an embodiment of the present invention;
FIG. 9 is a view of a gable-roof contour after breaking according to an embodiment of the present invention;
FIG. 10 is a diagram of a single building area according to an embodiment of the present invention;
FIG. 11 is an equidistantly enlarged view of a house area according to an embodiment of the present invention;
FIG. 12 is a comparison of the house profile of an embodiment of the present invention before and after enlargement;
FIG. 13 is a graph showing house contour fitting results according to an embodiment of the present invention;
fig. 14 is a diagram showing the overall effect of house extraction in the area according to the embodiment of the present invention.
Detailed Description
For a better understanding of the technical solution of the present invention, the following describes embodiments of the present invention with reference to the accompanying drawings:
the program running platform is a 64-bit ubuntu18.04 system, a processor is an Intel (R) Xeon (R) Gold 6133 CPU 20 core, a display card is NVIDIA TITAN RTX, a programming development environment is a PyCharm 2018 professional edition, an image processing library opencv4.5.1, and a deep learning framework is pytorch 1.4.0.
Step 1: an unmanned aerial vehicle aerial image was prepared, as shown in fig. 5, with image dimensions (width, height) = (7431, 7999); the actual distance represented by each pixel is d = 0.056 meters.
Step 2: the size of the sub-images output by the image traversal module is set to 2048x2048, and the unmanned aerial vehicle aerial image is input into the image traversal module to obtain sub-images covering the whole image, as shown in fig. 6.
After the image is input into the traversal module, the expansion sizes of the original image are calculated according to the formulas above:
H_down = 2048 − (7999 % 2048) = 193, W_right = 2048 − (7431 % 2048) = 761.
The expanded image size is 8192x8192, and 16 sub-images of uniform size are output by the traversal module, as shown in fig. 6.
Step 3: and (3) loading the 16 sub-images obtained in the step (2) into a trained house detection model Faster_Rcnn one by one to obtain a minimum non-rotation rectangular area image of each house in each sub-image. Fig. 7 shows rectangular images of 7 typical houses in the selected area of this embodiment. The process of extracting the outline of the house of this patent is next demonstrated with these 7 representative house images.
Step 4: and (3) loading the house rectangular images obtained in the step (3) into the HED universal fuzzy contour extraction network model one by one to obtain a fuzzy contour gray level map of each image, wherein the fuzzy contour gray level map of a typical house image is shown in fig. 8.
Step 5: and (3) setting the pixels in the middle sixteen rectangular area of the fuzzy contour gray map of the house obtained in the step (4) to be zero. The size of the zero-setting region of the image is one quarter of the width and height of the house image respectively, the center point of the zero-setting region coincides with the center point of the house image, as shown in fig. 3, the gray region represents a fuzzy contour gray level map, and the black region represents the zero-setting region.
The purpose of this step is to artificially break the ridge line of the gable (herringbone) roof in that area, so that the two halves of the roof merge into one region. The ridge line of a gable roof is very distinct, as shown in fig. 6, and is extracted as a contour in step 4, which divides one house into two regions; if the middle contour were not broken, it would seriously interfere with the subsequent contour extraction and area calculation of the house. Fig. 9 shows a house fuzzy contour gray map after breaking.
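A short sketch of this zeroing operation, assuming the gray map is a NumPy array; the function name is illustrative.

```python
import numpy as np

def zero_center_sixteenth(gray):
    """Zero the central rectangle whose width and height are a quarter of the image's (1/16 of the area)."""
    h, w = gray.shape
    zh, zw = h // 4, w // 4
    y0, x0 = (h - zh) // 2, (w - zw) // 2   # centre the zeroed region on the image centre
    out = gray.copy()
    out[y0:y0 + zh, x0:x0 + zw] = 0
    return out
```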
Step 6: setting the diffusion threshold to G t Taking the central point of the image obtained in the step 5 as an initial anchor point, diffusing the image in four weeks, and combining the gray value P of the diffused pixel point with a diffusion threshold G t Comparing, when P > G t When the anchor points meet the outline of the house in the direction, stopping diffusion; when P is less than or equal to G t When the surface anchor points do not meet the outline in the diffusion direction, continuing to diffuse according to the same principle until all pixel points in the outline of the house are covered, and finally obtaining a binary image covering the house area, wherein the binary image of the house is shown in fig. 10, white represents the house area, and black represents the non-house area.
Step 7: setting the equidistant expansion value as 4 pixel points, extracting the house outline of the binary image obtained in the step 6, expanding the house outline by a distance of 4 pixels by adopting an equidistant expansion method to obtain a more accurate house outline, wherein the house outline is an expanded house area binary image as shown in fig. 11, the house outline is a comparison image before and after expansion, the green is before expansion, and the red is after expansion.
Step 8: and (3) obtaining the final rectangular outline of the house by adopting a minimum rectangular fitting mode from the enlarged house outline obtained in the step (7), as shown in fig. 13. Fig. 14 is a diagram showing the overall effect of house extraction in the area according to the embodiment of the present invention.
Step 9: calculating the pixel area S of the contour obtained in the step 8 pixel And (2) calculating the real area of the house by combining the actual distance represented by the image unit pixels in the step (1), wherein the calculation formula is as follows:
S=S pixel *d*d=S pixel *0.056*0.056
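A short worked sketch of this area computation, assuming the pixel area is taken from the rectangle returned by cv2.minAreaRect in the previous step; the example figures at the end are illustrative only.

```python
def house_area_m2(rect, d=0.056):
    """Real house area in square metres from a fitted rectangle and the ground distance per pixel d."""
    (_, _), (w_px, h_px), _ = rect   # rect as returned by cv2.minAreaRect
    s_pixel = w_px * h_px            # pixel area of the rectangular contour
    return s_pixel * d * d

# Example: a fitted rectangle of 180 x 120 pixels gives
# 180 * 120 * 0.056 * 0.056 ≈ 67.7 square metres.
```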
finally, it should be noted that the above-mentioned embodiments illustrate rather than limit the scope of the invention, and that modifications of the invention, which are equivalent to those skilled in the art, will fall within the scope of the appended claims after reading the invention.

Claims (6)

1. The house contour extraction method based on the two-dimensional orthographic image is characterized by comprising the following specific steps of:
step 1: acquiring an original image of the house contour area to be extracted, wherein the original image is a two-dimensional orthographic image of the area taken from a preset height;
step 2: performing image traversal on the original image of step 1 to obtain sub-images covering all areas of the original image;
step 3: loading the sub-images obtained in the step 2 into a house detection model one by one to obtain rectangular area images of each house in each sub-image;
step 4: inputting the rectangular area image of each house obtained in the step 3 into a fuzzy contour extraction network model to obtain a fuzzy contour gray level map of each image;
step 5: setting the pixels in the central one-sixteenth rectangular area of the fuzzy contour gray map obtained in step 4 to zero;
step 6: acquiring a binary image of the house area from the gray image obtained in the step 5 by adopting a water filling method;
step 7: extracting the contour of the binary image obtained in step 6, and enlarging the contour by a certain number of pixels using an equidistant enlargement method;
step 8: obtaining a final rectangular outline of the house by adopting a minimum rectangular fitting mode to the house outline obtained in the step 7;
step 9: calculating the pixel area S_pixel of the contour obtained in step 8, and calculating the real area S of the house by combining it with the actual distance d represented by one image pixel in step 1, using the formula:
S = S_pixel × d × d.
2. the method for extracting a house contour based on two-dimensional orthographic images according to claim 1, wherein,
the image traversal in step 2 is implemented as follows: first determine the size (width, height) = (w, h) of the sub-images to be output, then obtain the size (width, height) = (W, H) of the input original image, and then calculate the sizes H_down and W_right by which the original image needs to be expanded according to the following formulas:
H_down = h − (H % h), W_right = w − (W % w),
where % is the remainder symbol; after the sizes to be expanded are obtained, H_down rows of pixels are filled at the bottom of the original image and W_right columns of pixels at its right side, and then, starting from the upper-left corner of the original image, the expanded image is traversed and cut block by block from left to right and from top to bottom according to the set sub-image size, obtaining sub-images covering all areas of the original image.
3. The method for extracting a house contour based on two-dimensional orthographic images according to claim 1, wherein,
the house detection model in step 3 is the target detection network model Faster_RCNN from the field of deep learning.
4. The method for extracting a house contour based on two-dimensional orthographic images according to claim 1, wherein,
the fuzzy contour extraction network model in the step 4 is an HED general contour extraction network model in the field of deep learning.
5. The method for extracting a house contour based on two-dimensional orthographic images according to claim 1, wherein,
the water injection filling method in step 6 comprises the following specific steps: taking the center point of the gray map obtained in step 5 as the initial anchor point, and diffusing outward from the anchor point position according to the gray values of the surrounding pixels, to obtain a binary image of the house area.
6. The house contour extraction method based on two-dimensional orthographic images according to claim 5, wherein the specific steps of the binary image of the house area are as follows:
first, a diffusion threshold G_t is set, where G_t is smaller than the pixel value of the house contour points; then diffusion proceeds from the anchor point in eight directions: up, down, left, right, upper-left, lower-left, upper-right and lower-right. If the eight diffused pixels do not meet the contour, diffusion continues outward with these eight pixels as new anchor points; if a contour pixel is met in some diffusion direction, that is, the gray value P of that pixel is greater than G_t, diffusion in that direction stops; otherwise diffusion continues in the same way until all pixel points inside the house contour are covered, giving a binary image covering the house area.
CN202210198442.6A 2022-03-01 2022-03-01 House contour extraction method based on two-dimensional orthographic image Active CN114549569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210198442.6A CN114549569B (en) 2022-03-01 2022-03-01 House contour extraction method based on two-dimensional orthographic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210198442.6A CN114549569B (en) 2022-03-01 2022-03-01 House contour extraction method based on two-dimensional orthographic image

Publications (2)

Publication Number Publication Date
CN114549569A CN114549569A (en) 2022-05-27
CN114549569B true CN114549569B (en) 2024-04-09

Family

ID=81661395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210198442.6A Active CN114549569B (en) 2022-03-01 2022-03-01 House contour extraction method based on two-dimensional orthographic image

Country Status (1)

Country Link
CN (1) CN114549569B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574521A (en) * 2016-02-25 2016-05-11 民政部国家减灾中心 House contour extraction method and apparatus thereof
CN106096497A (en) * 2016-05-28 2016-11-09 安徽省(水利部淮河水利委员会)水利科学研究院 A kind of house vectorization method for polynary remotely-sensed data
CN109934110A (en) * 2019-02-02 2019-06-25 广州中科云图智能科技有限公司 A kind of river squatter building house recognition methods nearby
CN113139453A (en) * 2021-04-19 2021-07-20 中国地质大学(武汉) Orthoimage high-rise building base vector extraction method based on deep learning
WO2022007431A1 (en) * 2020-07-07 2022-01-13 广东奥普特科技股份有限公司 Positioning method for micro qr two-dimensional code

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574521A (en) * 2016-02-25 2016-05-11 民政部国家减灾中心 House contour extraction method and apparatus thereof
CN106096497A (en) * 2016-05-28 2016-11-09 安徽省(水利部淮河水利委员会)水利科学研究院 A kind of house vectorization method for polynary remotely-sensed data
CN109934110A (en) * 2019-02-02 2019-06-25 广州中科云图智能科技有限公司 A kind of river squatter building house recognition methods nearby
WO2022007431A1 (en) * 2020-07-07 2022-01-13 广东奥普特科技股份有限公司 Positioning method for micro qr two-dimensional code
CN113139453A (en) * 2021-04-19 2021-07-20 中国地质大学(武汉) Orthoimage high-rise building base vector extraction method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Recognition of main house contours in high-resolution aerial grayscale images; 张栋 (Zhang Dong), 刘允才 (Liu Yuncai); Journal of Shanghai Jiao Tong University (上海交通大学学报); 2003-11-30 (No. 11); full text *

Also Published As

Publication number Publication date
CN114549569A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN109978839B (en) Method for detecting wafer low-texture defects
CN105279787B (en) The method that three-dimensional house type is generated based on the floor plan identification taken pictures
CN111415363B (en) Image edge identification method
CN110727747B (en) Paper map rapid vectorization method and system based on longitude and latitude recognition
CN110286126A (en) A kind of wafer surface defects subregion area detecting method of view-based access control model image
CN111611643A (en) Family type vectorization data obtaining method and device, electronic equipment and storage medium
CN107945111B (en) Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN113052170B (en) Small target license plate recognition method under unconstrained scene
CN113298809B (en) Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation
CN111027538A (en) Container detection method based on instance segmentation model
CN115641415B (en) Method, device, equipment and medium for generating three-dimensional scene based on satellite image
CN111695373A (en) Zebra crossing positioning method, system, medium and device
CN116740758A (en) Bird image recognition method and system for preventing misjudgment
CN114549569B (en) House contour extraction method based on two-dimensional orthographic image
CN113744142A (en) Image restoration method, electronic device and storage medium
JP2014106713A (en) Program, method, and information processor
CN111047614B (en) Feature extraction-based method for extracting target corner of complex scene image
CN114943770B (en) Visual positioning method and system based on artificial intelligence and building information
CN116309284A (en) Slope top/bottom line extraction system and method
CN111563883A (en) Screen visual positioning method, positioning device and storage medium
CN113450311B (en) Pin screw defect detection method and system based on semantic segmentation and spatial relationship
JP2004151815A (en) Method, device and program for extracting specific area, and recording medium recording this program
CN116777905B (en) Intelligent industrial rotation detection method and system based on long tail distribution data
CN117292159B (en) Automatic optimization method and system for textures of building model signboards

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant