CN114511674A - Method and device for generating house type graph - Google Patents

Method and device for generating house type graph

Info

Publication number
CN114511674A
CN114511674A (application number CN202210101097.XA)
Authority
CN
China
Prior art keywords
house
detection model
image
corner
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210101097.XA
Other languages
Chinese (zh)
Inventor
黄韬
宋瑾
马岳文
费义云
胡晓航
胡伟雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202210101097.XA
Publication of CN114511674A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

According to the method and device for generating a house floor plan provided by the embodiments of the present application, a server trains and compresses an original house corner detection model, and the compressed house corner detection model is deployed on a terminal. The terminal can then directly call the local house corner detection model to perform corner detection on a house image captured in real time, and create the house floor plan from the detection result. Compared with the prior art, the corner position detection and floor plan generation provided by the embodiments of the present application are performed locally on the terminal, so detection and generation are more efficient and the user experience is better.

Description

Method and device for generating house type graph
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to a method and device for generating a house floor plan.
Background
With the improvement of living standards, whether a house undergoes hard decoration (construction) or soft decoration (interior design), a true floor plan of the house needs to be obtained in advance. When obtaining a floor plan, determining the positions of corner points in the house, including wall corners and the like, is an indispensable step.
For floor plan acquisition and corner detection, the prior art generally uses a model hosted on a server: a user uploads the current image to a cloud server through a terminal, the cloud server processes the image and identifies the corner points in it, and the corresponding floor plan is obtained.
However, because the number of corner points to be identified in a real scene is large, the computational load and communication overhead of the cloud server are very high. As a result, the user has to wait a long time for the corner detection result and the corresponding house floor plan returned by the cloud server, and the user experience is poor.
Disclosure of Invention
The embodiments of the present application provide a method and device for generating a house floor plan, which solve the problems of high server computational load, poor real-time performance, and poor user experience in existing floor-plan generation methods.
In a first aspect, an embodiment of the present application provides a method for generating a house layout, where the method for generating a house layout is applied to a terminal;
the generation method comprises the following steps:
collecting a house image; performing house corner detection on the house image using a house corner detection model pre-deployed locally on the terminal to obtain house corner positions, where the house corner detection model is obtained by a server performing model compression, based on model parameters, on a trained original house corner detection model; determining the outer contour of the house according to the house corner positions; and constructing a house floor plan from the outer contour.
It can be seen that, because the server compresses the original house corner detection model, the compressed model can be deployed directly on the terminal. The terminal can therefore detect corner positions in an image using the locally deployed model and then create the corresponding house floor plan. Compared with the prior art, corner position detection is more efficient, floor-plan generation is faster, and the user experience is better.
Optionally, the house corner detection model includes a compressed encoding module, and the compressed encoding module includes at least one compressed mobile inverted bottleneck convolution (MBConv) sub-module. The compressed MBConv sub-module is obtained by the server compressing the MBConv sub-module in the trained original house corner detection model, where the compression of the MBConv sub-module includes: fusing the parameters of each paired convolution operation and batch normalization operation in the MBConv sub-module.
It can be seen that each paired convolution operation and batch normalization operation in the MBConv sub-modules of the original house corner detection model is fused into a single operation, yielding the compressed house corner detection model. In this way, the parameter count of the compressed model is greatly reduced, and the terminal's computation, parameter count, and memory consumption are significantly lowered, further improving the efficiency of corner position detection on the terminal.
Optionally, acquiring a house image includes: capturing a live-action image of the house; compressing the live-action image at the pixel level to obtain a pixel-compressed image of the live-action image; and normalizing the pixel values of the pixel-compressed image on the RGB channels to obtain the house image.
It can be seen that, before the terminal calls the model for corner detection, the originally captured live-action image is preprocessed (image compression, normalization, and so on) so that the house image meets the model's input requirements. The preprocessing also reduces, to some extent, the amount of computation the model must perform on the image, further improving the real-time performance of detection.
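The preprocessing described above can be sketched as follows. The stride-based downscale and the [0, 1] normalization are illustrative assumptions; the patent does not specify the exact resampling method or normalization constants.

```python
# Hedged sketch of terminal-side preprocessing: pixel compression
# followed by RGB normalization.

def compress_and_normalize(image, stride=2):
    """image: H x W list of (R, G, B) tuples with values in 0..255.

    Returns a downscaled image whose RGB values are normalized to [0, 1].
    """
    compressed = [row[::stride] for row in image[::stride]]  # naive pixel compression
    return [[tuple(c / 255.0 for c in px) for px in row] for px_row in [None] for row in compressed]

# Example: a 4x4 solid-red image becomes 2x2 with normalized channels.
img = [[(255, 0, 0)] * 4 for _ in range(4)]
out = compress_and_normalize(img)
```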
Optionally, the generation method further includes: horizontally calibrating the terminal, determining a reference horizontal plane of the house, and establishing a spatial coordinate system for the house. Determining the house outer contour according to the house corner positions includes: mapping the house corner positions into the spatial coordinate system of the house to obtain the spatial coordinates and dimensions of the outer contour. Constructing the house floor plan from the outer contour includes: constructing the floor plan according to those spatial coordinates and dimensions.
It can be seen that, by horizontally calibrating the terminal, the reference horizontal plane on which the terminal currently rests can be quickly determined and a house spatial coordinate system created. Corner positions can then be mapped into this coordinate system, yielding the spatial dimensions of the house outer contour and making it convenient to create the house floor plan.
Optionally, the generation method further includes: performing door and window detection on the collected house image to obtain door and window positions; mapping the door and window positions into the spatial coordinate system of the house to obtain the spatial coordinates and dimensions of the doors and windows; and marking the doors and windows on the house floor plan according to those spatial coordinates and dimensions.
It can be seen that, by further identifying the doors and windows of the house in the house image, door and window information is also marked on the created floor plan, which is convenient for the user.
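The marking step above might look like the following sketch. The dict-based floor plan representation and the field names are assumptions for illustration; the patent does not prescribe a data structure.

```python
# Illustrative sketch of annotating a floor plan with detected door and
# window openings expressed in the house spatial coordinate system.

def annotate_openings(floor_plan, openings):
    """floor_plan: {"outline": [...]};
    openings: list of {"kind": "door" | "window", "coords": (x0, z0, x1, z1)}.
    Returns a new floor plan dict with the openings attached."""
    annotated = dict(floor_plan)
    annotated["openings"] = list(openings)
    return annotated

plan = annotate_openings(
    {"outline": [(0, 0), (0, 4), (4, 4), (4, 0)]},
    [{"kind": "door", "coords": (1.0, 0.0, 2.0, 0.0)}],
)
```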
In a second aspect, an embodiment of the present application provides a generation method for a house corner detection model, where the generation method is applied to a server, and the generation method includes:
acquiring an original house corner detection model to be trained and a training image set; training the model with the training image set to obtain a trained original house corner detection model; and performing model compression, based on model parameters, on the trained model, then sending the compressed house corner detection model to a terminal for deployment, where it is used to perform house corner detection on house images collected by the terminal in order to construct a floor plan.
The terminal can directly call the local house corner detection model to perform corner detection on a house image captured in real time, and can conveniently create a floor plan from the corner positions. Because house corner detection is performed locally on the terminal, detection is more efficient, real-time performance is better, and the user experience is better than with prior-art corner detection.
Optionally, the original house corner detection model includes an encoding module for feature-encoding the house corner features in an image; the encoding module includes at least one mobile inverted bottleneck convolution (MBConv) sub-module.
It can be seen that, because the encoding module is implemented with MBConv sub-modules for feature-encoding the house corner features, its overall parameter count is effectively reduced, which facilitates lightweight deployment of the model.
Optionally, performing model compression on the trained original house corner detection model includes: identifying the paired convolution and batch normalization operations in the trained model; determining the processing parameter pair corresponding to each convolution/batch-normalization pair; fusing each parameter pair to obtain the corresponding fused parameters; and generating the compressed house corner detection model from the fused parameters.
It can be seen that each paired convolution and batch normalization operation in the original house corner detection model is fused into a single operation, yielding the compressed model. This greatly reduces the parameter count of the compressed house corner detection model and significantly lowers the terminal's computation, parameter count, and memory consumption.
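The parameter-fusion step above can be sketched with the standard Conv-BN folding algebra. For simplicity each "convolution" here is a per-channel scalar multiply-add; the same algebra applies channel-wise to a real convolution kernel's weights, and the epsilon value is a common assumption, not specified in the patent.

```python
import math

# Hedged sketch of folding a batch normalization (gamma, beta, mean, var)
# into the preceding convolution's weight w and bias b.

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Return fused (w', b') such that w'*x + b' equals
    BN(w*x + b) for every input x."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Check: convolution followed by BN equals the single fused operation.
w, b = 2.0, 0.5
gamma, beta, mean, var = 1.5, -0.25, 0.8, 4.0
x = 3.0

conv_then_bn = ((w * x + b) - mean) / math.sqrt(var + 1e-5) * gamma + beta
fw, fb = fuse_conv_bn(w, b, gamma, beta, mean, var)
fused = fw * x + fb
```

Because the fused form is a single affine operation, the batch normalization layer (and its parameters) disappears from the deployed model entirely, which is the source of the parameter and memory savings described above.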
Optionally, training the original house corner detection model to be trained with the training image set to obtain the trained original house corner detection model includes: determining the loss function used for training, where the loss function is a dice loss; and training the model with this loss function in a multi-output joint training mode to obtain the trained original house corner detection model.
It can be seen that training the original house corner detection model with a dice loss, combined with multi-output joint training, makes the trained model focus on the pixel region around each corner, so that the pixels around a corner in the output detection result are close to the ground truth, improving the confidence of the model's output.
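A dice loss on flattened prediction/ground-truth heatmaps can be sketched as follows; the smoothing term is a common convention and is not specified in the patent.

```python
# Minimal sketch of a dice loss for corner-heatmap training.

def dice_loss(pred, target, smooth=1.0):
    """pred, target: flat lists of probabilities / binary labels in [0, 1].
    Returns 1 - dice coefficient, so lower is better."""
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * intersection + smooth) / (denom + smooth)

# A perfect prediction gives loss 0; a disjoint one approaches 1.
perfect = dice_loss([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])
disjoint = dice_loss([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

Unlike a plain per-pixel loss, the dice loss is driven by the overlap between the predicted and true corner regions, which is why it concentrates learning on the sparse pixels around each corner.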
Optionally, acquiring a training image set includes:
acquiring training images of a house; determining the annotated positions obtained by labeling the house corners in each training image; determining the house corner positions in the training image from the annotated positions; and setting pixel values in the training image according to the corner positions to obtain the training ground-truth image corresponding to the training image.
Thus, ground-truth labeling of the training images can enlarge the usable training image set and improve the corner detection accuracy of the trained model.
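Building a training ground-truth image from labeled corner positions might look like the sketch below. Marking a small square neighborhood around each corner with value 1 is an assumption; the patent only states that pixel values are set according to the corner positions.

```python
# Illustrative sketch of generating a ground-truth image: pixels near
# each labeled corner are set to 1.0, everything else stays 0.0.

def make_truth_image(height, width, corners, radius=1):
    img = [[0.0] * width for _ in range(height)]
    for (cy, cx) in corners:
        for y in range(max(0, cy - radius), min(height, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(width, cx + radius + 1)):
                img[y][x] = 1.0
    return img

# One corner labeled at the center of a 5x5 image.
truth = make_truth_image(5, 5, [(2, 2)])
```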
In a third aspect, an embodiment of the present application provides a device for generating a house layout, where the device for generating a house layout is deployed in a terminal; the generation device of the house layout comprises:
the image acquisition module is used for acquiring a house image;
the image detection module is used for detecting house corner points of the house image by using a house corner point detection model which is deployed in the local terminal in advance to obtain the position of the house corner points; the house corner detection model is obtained by performing model compression based on model parameters on a trained original house corner detection model by a server;
and the house type generation module is used for determining the house outline according to the house corner point position and constructing a house type graph by using the house outline.
In a fourth aspect, an embodiment of the present application provides a device for generating a house corner detection model, where the device for generating a house corner detection model is deployed in a server; the generation device of the house corner detection model comprises:
the training module is used for acquiring an original house corner detection model to be trained and a training image set; training the original house corner point detection model to be trained by using the training image set to obtain a trained original house corner point detection model;
and the compression module is used for carrying out model compression based on model parameters on the trained original house corner detection model, sending and deploying the house corner detection model obtained by compression to the terminal, and the house corner detection model is used for carrying out house corner detection processing on the house image acquired by the terminal so as to construct a house type image.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and
a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method of the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, in which computer-executable instructions are stored, and when the processor executes the computer-executable instructions, the method according to the first aspect or the second aspect is implemented.
In a seventh aspect, the present application provides a computer program product, which includes a computer program, and when executed by a processor, the computer program implements the method according to the first aspect or the second aspect.
According to the method and device for generating a house floor plan provided by the embodiments of the present application, a server trains and compresses an original house corner detection model, and the compressed house corner detection model is deployed on a terminal. The terminal can then directly call the local house corner detection model to perform corner detection on a house image captured in real time, and create the house floor plan from the detection result. Compared with the prior art, the corner position detection and floor plan generation provided by the embodiments of the present application are performed locally on the terminal, so detection and generation are more efficient and the user experience is better.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of a method for generating a house layout provided in the present application;
FIG. 2 is a schematic diagram of a network architecture upon which the present application is based;
fig. 3 is a schematic flowchart of a method for generating a house layout according to an embodiment of the present application;
fig. 4 is an interface schematic diagram of a house corner point detection method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for generating a house corner point detection model according to an embodiment of the present application;
fig. 6 is a schematic network structure diagram of an original house corner point detection model according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a mobile inverted bottleneck convolution (MBConv) sub-module according to an embodiment of the present application;
FIG. 8 is a diagram illustrating operation compression in the MBConv sub-module provided in the present application;
fig. 9 is a block diagram of a device for generating a house layout according to an embodiment of the present application;
fig. 10 is a block diagram of a structure of a device for generating a house corner point detection model according to an embodiment of the present application;
fig. 11 is a schematic diagram of a hardware structure of an electronic device provided in the present application.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the technical solution of the present application, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the relevant user behavior information all comply with relevant laws and regulations and do not violate public order and good customs.
With the improvement of living standards, whether a house undergoes hard decoration (construction) or soft decoration (interior design), a true floor plan of the house needs to be obtained in advance. When obtaining a floor plan, determining the positions of corner points in the house, including wall corners and the like, is an indispensable step.
Detection of house corners and creation of the floor plan can be realized by corresponding model algorithms. In the prior art, the model is hosted in a cloud server: a user uploads the current image to the cloud server through a terminal, the cloud server first processes the image to generate a corner detection result, and then creates the floor plan from that result. The terminal obtains the created floor plan from the server over the communication network.
Fig. 1 is a schematic diagram of a floor-plan generation method provided by the present application. As shown in fig. 1, a server is preconfigured with a trained network and related algorithms. A terminal uploads a whole-house panorama or perspective view to the server, which processes it with the network and extracts the house edge features and corner features of the picture. Finally, the server generates the 3D layout parameters and 3D house structure corresponding to the panorama or perspective view from these features, creates the corresponding floor plan, and sends it to the terminal for display.
However, in this scheme, because the preconfigured trained network uses a relatively complex ResNet network structure that consumes a large amount of computing resources at runtime, the network can only be hosted on the server side and cannot be deployed locally on the terminal. The terminal therefore has to communicate with the server every time a house image is to be detected, which increases the server's computational load, prevents the terminal from obtaining detection results in real time, and degrades the user experience.
To address the poor real-time performance of existing house corner detection schemes, in the present application the server trains and compresses the original house corner detection model, the compressed model is deployed on the terminal, and the terminal directly calls the local house corner detection model to perform corner detection on a house image captured in real time and displays the processing result.
Compared with the scheme shown in fig. 1: on the one hand, after the server in the present application completes training of the original house corner detection model, the model itself is also compressed, so the compressed network is highly lightweight and its computation per image is small; the compressed model can therefore be deployed locally on the terminal, which can directly detect corner positions in the image using the locally deployed model and create the corresponding house floor plan. On the other hand, the terminal performs corner detection on the house image captured in real time by directly calling the local house corner detection model and displays the created floor plan, so real-time performance is high and the user experience is good.
Referring to fig. 2, fig. 2 is a schematic diagram of a network architecture based on the present application, and the network architecture shown in fig. 2 may specifically include a server 1 and a terminal 2.
The server 1 may specifically be a server cluster deployed in the cloud, in which training data for training the model may be stored. By presetting operation logic in the server 1, the server 1 can carry out various processes such as model training and model compression.
The terminal 2 may specifically be a hardware device having network communication, computing, image acquisition, and information display functions, including but not limited to a smartphone, tablet computer, desktop computer, internet-of-things device, and the like.
Through communication and interaction with the server 1, the terminal 2 deploys the house corner detection model trained and compressed by the server 1, and can call the model in real time to process an image to be detected, thereby obtaining the house corner detection result.
The solution provided by the present application is explained in detail below through specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
It should be noted that the method for generating a house floor plan provided in this embodiment is executed by the aforementioned terminal. Fig. 3 is a flowchart of the method for generating a house floor plan provided in this embodiment. As shown in fig. 3, the method may include the following steps:
and 301, acquiring a house image.
Step 302, utilizing a house corner detection model which is pre-deployed at the local terminal to perform house corner detection processing on a house image to obtain a house corner position; the house corner detection model is obtained by the server performing model compression based on model parameters on the trained original house corner detection model.
Step 303, determining the outer contour of the house according to the corner position of the house; and constructing a house floor plan by using the house outer contour.
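The three steps above can be sketched as a simple pipeline. The function names (`detect_corners`, `corners_to_outline`, `build_floor_plan`) are hypothetical stand-ins for the locally deployed model and the geometry routines; only the overall flow follows the method described here.

```python
# High-level sketch of steps 301-303 on the terminal.

def generate_floor_plan(image, detect_corners, corners_to_outline, build_floor_plan):
    corners = detect_corners(image)          # step 302: local model inference
    outline = corners_to_outline(corners)    # step 303: outer contour
    return build_floor_plan(outline)         # step 303: floor plan

# Toy example with stub implementations of the three stages.
plan = generate_floor_plan(
    image="house.jpg",
    detect_corners=lambda img: [(0, 0), (0, 4), (4, 4), (4, 0)],
    corners_to_outline=lambda cs: cs,        # already ordered in this toy case
    build_floor_plan=lambda outline: {"outline": outline},
)
```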
Specifically, the house corner detection model in this embodiment includes a compressed encoding module, and the compressed encoding module includes at least one compressed mobile inverted bottleneck convolution (MBConv) sub-module. The compressed MBConv sub-module is obtained by the server compressing the MBConv sub-module in the trained original house corner detection model, where the compression of the MBConv sub-module includes: fusing the parameters of each paired convolution operation and batch normalization operation in the MBConv sub-module.
It can be seen that each paired convolution operation and batch normalization operation in the MBConv sub-modules of the original model is fused into a single operation, yielding the compressed house corner detection model. This greatly reduces the parameter count of the compressed model and significantly lowers the terminal's computation, parameter count, and memory consumption.
The house corner detection model in this embodiment is obtained by the server compressing the original house corner detection model; for the specific compression method, refer to the subsequent server-side embodiments, which are not repeated here.
In other embodiments, the compressed house corner detection model of the present application may directly process an RGB house image captured by the terminal. Specifically, the terminal may first capture a live-action image of the house, then compress it at the pixel level to obtain a pixel-compressed image, and normalize the pixel values of that image on the RGB channels to obtain the house image. After obtaining the house image, the terminal calls the compressed house corner detection model to detect the image positions of the corners in the image so as to create the house floor plan.
In one optional implementation, in order to create the house layout, the terminal further performs horizontal calibration on the reference horizontal plane where it is located, and then creates the house layout according to the calibration result.
Fig. 4 is an interface schematic diagram of a method for detecting house corner points according to an embodiment of the present application. As shown in fig. 4, the terminal first performs horizontal calibration, determines a reference horizontal plane of the house, and establishes a spatial coordinate system of the house. The horizontal calibration may be implemented based on a SLAM (simultaneous localization and mapping) module: through localization and mapping, the terminal can quickly determine the position of a reference horizontal plane in the house and, based on that position, construct a spatial coordinate system for describing the spatial layout of the house. Of course, at this stage the spatial coordinate system contains only the coordinate axes; it does not yet contain any spatial coordinate points of the house layout.
Subsequently, the terminal acquires its real-time pose information in the spatial coordinate system, namely its spatial coordinates and orientation in that system.
Then, as described above, the terminal calls the compressed house corner detection model to perform corner identification on the currently acquired house image, and identifies the image position of each corner point, where the image position is a pixel coordinate in the pixel coordinate system of the image.
Next, the terminal maps the house corner positions into the spatial coordinate system of the house to obtain the spatial coordinate dimensions of the house's outer contour. That is, the terminal uses its pose information to determine the spatial pose of the house image, and performs coordinate mapping on the image coordinates of each corner point, mapping the corner position from the pixel coordinate system of the house image into the spatial coordinate system of the house to obtain the corner point's spatial coordinates. The spatial coordinates of the corner points are then used to draw the house's outer contour, including its wall lines, wall corners, ceiling lines, and ceiling corners. Finally, the terminal can construct the house layout directly from the spatial coordinate dimensions of the outer contour, obtaining the interface shown in fig. 4.
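The pixel-to-space mapping can be illustrated with a minimal ray-casting sketch. The pinhole intrinsics K, the camera pose (R, t) obtained from the SLAM module, and the intersection with a horizontal reference plane are all assumptions made for illustration; the patent only states that pose information is used for the coordinate mapping, not the exact geometry:

```python
import numpy as np

def corner_to_world(pixel_uv, K, R, t, plane_z=0.0):
    """Map a detected corner's pixel coordinates into the house's spatial
    coordinate system by casting a camera ray through the pixel and
    intersecting it with the horizontal plane z = plane_z (e.g. the floor)."""
    u, v = pixel_uv
    # Ray direction in camera coordinates from the pinhole model.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate into the house's spatial coordinate system; t is the camera position.
    ray_world = R @ ray_cam
    # Solve t + s * ray_world for the point whose z equals plane_z.
    s = (plane_z - t[2]) / ray_world[2]
    return t + s * ray_world
```

With the spatial coordinates of several corners, the outer contour (wall and ceiling lines) can then be drawn by connecting them.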
On this basis, the terminal may also perform door-and-window detection on the acquired house image to obtain door and window positions; map those positions into the spatial coordinate system of the house to obtain the spatial coordinate dimensions of the house's doors and windows; and mark the doors and windows on the house layout according to those dimensions. Identifying the doors and windows contained in the house image and marking the related information on the created house floor plan makes the plan more convenient for the user.
In this embodiment, the original house corner detection model is trained and compressed by the server, and the compressed house corner detection model is deployed on the terminal, which can directly call the local model to perform corner detection on house pictures captured in real time and generate the corresponding house floor plan. Since corner detection is performed locally on the terminal, compared with detection methods in the prior art, this method offers higher detection precision, better real-time performance, lower pixel error, and better user experience.
The execution subject of the method for generating the house corner detection model provided in this embodiment is the aforementioned server, and fig. 5 is a flowchart of the method. As shown in fig. 5, the method for generating the house corner detection model may include the following steps:
Step 501, acquiring an original house corner detection model to be trained and a training image set.
In particular, to ensure that the model can be compressed and deployed on the terminal, the server should keep the model as lightweight as possible when constructing the original house corner detection model to be trained. The design points of the model's network include a moderate network depth, reusing feature data within the network as much as possible, and keeping the number of feature layers as small as possible.
In order to realize the function of the model, the network of the original house corner detection model provided by the application comprises an encoding module and a decoding module.
The encoding module performs feature encoding, including feature extraction and feature compression, on the house corner features of a house image input to the network; the decoding module performs feature reuse and deconvolution-based decoding on the encoded features output by the encoding module.
On the premise of satisfying the model's function, and in order to obtain a lightweight model that is easy to compress and deploy on a terminal, the network is designed with the following factors in mind:
First, the network depth should be moderate: a network that is too shallow fits poorly, while one that is too deep has excessive parameters and computation. Setting an appropriate depth therefore lets the model retain sufficient fitting capability while keeping the parameter count and computation reasonable. Illustratively, the network depth of the encoding module is 4 layers, and that of the decoding module is also 4 layers.
Second, the number of feature layers should be as small as possible: the fewer the feature layers, the less memory the network consumes at the same spatial resolution, which facilitates integrated deployment on the terminal.
Third, feature data should be reused as much as possible: intermediate feature data produced by the encoding module should be reused by the decoding module wherever possible. This improves the flow of information between network features, alleviates vanishing gradients, and reduces the difficulty of training.
Based on the above design factors, fig. 6 is a schematic diagram of the network structure of an original house corner detection model provided in an embodiment of the present application. As shown in fig. 6, the network adopts a pyramid structure: in the encoding module, the number of feature layers gradually increases with depth; conversely, in the decoding module, the number gradually decreases as the feature layer size increases. In other words, the encoding and decoding modules form a U-shaped structure, and the network is trained in a multi-output joint training mode, so that the trained network balances resolution against depth, and real-time performance against precision.
Optionally, to further reduce the model's computation and parameters, a mobile inverted bottleneck convolution submodule (MBConv module for short) is used in the model's encoding module to implement the related functions. Implementing the feature encoding of house corner features with mobile inverted bottleneck convolution submodules effectively reduces the overall parameter count of the encoding module and facilitates lightweight deployment of the model.
Specifically, fig. 7 is a schematic structural diagram of a mobile inverted bottleneck convolution submodule according to an embodiment of the present application. As shown in fig. 7, MBConv(e, r) is a convolution structure based on the EfficientNet network, where e and r denote its channel expansion coefficient and reduction coefficient, respectively. Because MBConv(e, r) is built from a depthwise separable convolution (Depthwise Convolution) submodule and an SENet (Squeeze-and-Excitation Network) submodule, the MBConv module has fewer parameters and less computation than a conventional convolution structure, and correspondingly lower memory consumption when operating on data with a small number of feature layers (c).
Of course, it should be noted that, to implement the encoding functions of the module, the encoding module may include one or more mobile inverted bottleneck convolution submodules as shown in fig. 7. The specific values of the expansion coefficient e and reduction coefficient r of each submodule may be determined according to network requirements, and are not limited here.
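A rough NumPy sketch of one MBConv(e, r) forward pass is given below. The weights are random placeholders and batch normalization is omitted; it only illustrates the expand / depthwise / squeeze-and-excitation / project structure described above, not the patent's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution is per-pixel channel mixing: (H, W, Cin) @ (Cin, Cout).
    return x @ w

def depthwise3x3(x, k):
    # Per-channel 3x3 convolution ("Depthwise Convolution"), zero padding.
    h, w, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + h, j:j + w, :] * k[i, j, :]
    return out

def se(x, w1, w2):
    # Squeeze-and-Excitation: global pooling, bottleneck MLP, channel gating.
    s = x.mean(axis=(0, 1))
    g = 1.0 / (1.0 + np.exp(-(np.maximum(s @ w1, 0.0) @ w2)))  # sigmoid gate
    return x * g

def mbconv(x, e=4, r=4):
    """MBConv(e, r): expand channels by e, depthwise conv, SE with reduction r,
    project back to the input channel count, then add a residual connection."""
    c = x.shape[-1]
    ce = c * e
    x1 = np.maximum(conv1x1(x, rng.normal(0, 0.1, (c, ce))), 0.0)   # expansion
    x2 = np.maximum(depthwise3x3(x1, rng.normal(0, 0.1, (3, 3, ce))), 0.0)
    x3 = se(x2, rng.normal(0, 0.1, (ce, ce // r)), rng.normal(0, 0.1, (ce // r, ce)))
    x4 = conv1x1(x3, rng.normal(0, 0.1, (ce, c)))                   # projection
    return x + x4  # residual: input and output channel counts match
```

The depthwise and 1x1 stages are why the parameter count stays far below that of a dense 3x3 convolution over the expanded channels.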
Illustratively, as shown in fig. 6, the main operator in the decoding module is the deconvolution operation. While computing the output features of each layer, the decoding module reuses the intermediate features produced by the encoding module, and through its pyramid structure deconvolves low-resolution features up to high-resolution features. Likewise, to reduce the parameter count and computation, separable convolution is used within the deconvolution.
Step 502, training the original house corner detection model to be trained by using the training image set to obtain the trained original house corner detection model.
Specifically, the training image set includes a plurality of training images and, for each, a corresponding training true value map obtained by annotating its corner points. The image pairs of training images and training true value maps may be divided in a certain ratio into a training set and a validation set. The training images may specifically be images in RGB pixel format.
During training, the image pairs in the training set are used to train the structural parameters of the original house corner detection model, and the image pairs in the validation set are used to validate the trained parameters. Training and validation are repeated until the corner detection accuracy of the trained model exceeds a preset accuracy, or the prediction accuracy converges within a certain period. The model obtained at that point serves as the trained original house corner detection model.
In an optional embodiment, because the training images used in this application are RGB images of houses, it is difficult to use them to supervise information such as house edges and house corner points, which easily leads to low model accuracy. Therefore, to improve the model's detection precision and recall, the original house corner detection model to be trained is trained in a multi-output joint training mode based on the dice loss function.
Specifically, the server first determines the loss function used for training, which is a dice loss function; it then trains the original house corner detection model to be trained with this loss function in a multi-output joint training mode to obtain the trained original house corner detection model.
Illustratively, the dice loss function is expressed as

dice_loss(X, Y) = 1 - 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the predicted corner heatmap output by the current model for the training RGB image, Y represents the training true value map corresponding to the training image, |X ∩ Y| denotes the sum of the element-wise product of X and Y, and |X| and |Y| denote the sums of their respective elements.
By adopting the dice loss function above, the network focuses on the pixel region around each corner point, so that in the output predicted corner heatmap the pixels around a corner point approach 1 as closely as possible. Accordingly, when such a model is used to detect house corner points in an image, the network sets the pixel value at an image position it judges to correspond to a corner point to 1 or a value very close to 1, and sets the pixel values at non-corner positions to 0. In the model's output, the gap between corner and non-corner pixel values is large, so the user does not need to set a pixel-value threshold to distinguish corner points from non-corner points.
In other words, compared with the binary cross-entropy loss function commonly used in the prior art, training the model with the dice loss avoids problems such as low output precision from choosing too small a pixel-value threshold, or low recall from choosing too high a threshold.
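A minimal implementation of the dice loss over a predicted heatmap X and a truth map Y, matching the formula above; the smoothing term eps is a common numerical-stability addition not mentioned in the patent:

```python
import numpy as np

def dice_loss(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-6) -> float:
    """Dice loss between a predicted corner heatmap X and a truth map Y:
    L = 1 - 2 * sum(X * Y) / (sum(X) + sum(Y))."""
    inter = (pred * truth).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```

The loss is near 0 when the prediction matches the truth map and near 1 when they do not overlap at all, which is what drives corner pixels toward 1 and non-corner pixels toward 0.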
Meanwhile, this embodiment also adopts multi-output joint training: during training, the dice loss is computed between each of several output feature layers of the decoding module and the training true value map, and the loss results of all these output layers participate in updating the network parameters. This accelerates training convergence and improves training efficiency.
In an optional embodiment, to further improve the detection accuracy of the model, the server may also perform data enhancement on the training images in advance, including but not limited to normalization, random rotation, random stretching, sharpening, and saturation modification, so that the data distribution of the training images becomes more diverse. This effectively suppresses overfitting during training and further improves the accuracy and generalization of the network model.
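The enhancement step might be sketched as below. The concrete operations (right-angle rotation, horizontal flip, per-channel scaling as a stand-in for saturation modification) are illustrative choices, not the patent's exact transforms; note that the geometric transforms must be applied identically to the image and its truth map:

```python
import numpy as np

def augment(image: np.ndarray, truth: np.ndarray, rng: np.random.Generator):
    """Apply matching random transforms to an RGB training image (values in
    [0, 1]) and its corner truth map, keeping the two aligned."""
    k = int(rng.integers(0, 4))
    image, truth = np.rot90(image, k), np.rot90(truth, k)   # random rotation
    if rng.random() < 0.5:
        image, truth = image[:, ::-1], truth[:, ::-1]       # random horizontal flip
    scale = rng.uniform(0.8, 1.2, size=3)                   # per-channel jitter
    image = np.clip(image * scale, 0.0, 1.0)                # photometric only: truth untouched
    return image, truth
```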
Illustratively, obtaining the training image set may include the following steps: first, acquire a training image of a house; then determine the annotation positions obtained by annotating the house corner points in the training image; determine the house corner positions in the training image according to the annotation positions; and finally, perform pixel-value setting on the training image according to the house corner positions to obtain the training true value map corresponding to the training image.
It should be noted that, when determining the house corner positions from the annotation positions, a convolution based on a Gaussian kernel may be performed within the 3 × 3 pixel region around each annotated corner to pick out some random positions within that region. The annotation positions together with these random positions constitute the aforementioned house corner positions. When setting pixel values, a pixel of the training image corresponding to a house corner position is set to 1, and a pixel corresponding to a non-corner position is set to 0. In this way, the positive and negative sample data in the training image set can be effectively enriched and the training effect improved.
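The truth-map construction can be sketched as follows; uniform sampling inside the 3 × 3 neighbourhood is used here as a simplification of the Gaussian-kernel step mentioned above:

```python
import numpy as np

def make_truth_map(shape, labeled_corners, rng: np.random.Generator) -> np.ndarray:
    """Build a binary training truth map: each labeled corner pixel, plus a
    few random pixels in its 3x3 neighbourhood, is set to 1; all other
    pixels are 0."""
    truth = np.zeros(shape, dtype=np.float32)
    h, w = shape
    for (r, c) in labeled_corners:
        truth[r, c] = 1.0
        for _ in range(2):  # a couple of extra positions in the 3x3 region
            dr, dc = rng.integers(-1, 2, size=2)
            rr = min(max(r + dr, 0), h - 1)
            cc = min(max(c + dc, 0), w - 1)
            truth[rr, cc] = 1.0
    return truth
```

Expanding each annotation into a small neighbourhood is what enriches the positive samples relative to a single-pixel label.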
Step 503, performing model compression based on model parameters on the trained original house corner detection model, and sending and deploying the compressed house corner detection model to the terminal, where it is used to perform house corner detection on house images acquired by the terminal so as to construct a house layout.
As described above, for the model to be deployable on the terminal, the trained original house corner detection model needs to be compressed, so that when the terminal performs image processing with the compressed model, the required computation, parameter storage, and memory consumption are low.
Specifically, as above, the compression process may include: determining the convolution operations and batch normalization operations performed in pairs in the trained original house corner detection model; determining the processing parameter pair corresponding to each such pair of operations; performing parameter fusion on each processing parameter pair to obtain the corresponding fused parameters; and generating the compressed house corner detection model from the fused parameters.
The compression process in this embodiment is described below, taking the compression of a mobile inverted bottleneck convolution submodule as an example.
Fig. 8 is a schematic diagram of operation compression in the mobile inverted bottleneck convolution submodule provided in this application. As can be seen from figs. 7 and 8, the mobile inverted bottleneck convolution submodule shown in fig. 7 contains several convolution operations and batch normalization operations that occur in pairs (e.g., the "Conv 1X1 BN" structure shown in the dashed box of fig. 7). Since both convolution and batch normalization are linear, this embodiment performs parameter fusion on each such pair, fusing the parameters w, b, mean, var, beta, and gamma of the original house corner detection model into w' and b' of the compressed house corner detection model, thereby achieving the goal of compressing the model.
Referring to the processing procedure in the upper part of fig. 8, in the model training phase, for a "Conv 1X1 BN" structure appearing in any mobile inverted bottleneck convolution submodule, the feature data M × N × C1 is input into the structure and passes through the feature operations "f0: × w", "f1: + b", "f2: (f1 - mean)/sqrt(var) × beta", and "f3: f2 + gamma" to obtain the operation result feature data M × N × C2 corresponding to the feature data M × N × C1.
After the model training phase ends, all parameters in the trained original house corner detection model are fixed; that is, the parameters in every paired "Conv 1X1 BN" structure of every mobile inverted bottleneck convolution submodule in the model are constant values.
In this embodiment, the operations in the "Conv 1X1 BN" structure are compressed so that the four operations f0 to f3 are reduced to two, with the parameters changed accordingly. Referring to the processing procedure in the lower part of fig. 8, in the "Conv 1X1 BN" structure of the compressed mobile inverted bottleneck convolution submodule, the feature data M × N × C1 is input and passes through only the feature operations "× w'" and "+ b'" to obtain the operation result feature data M × N × C2, where w' = w/sqrt(var) × beta and b' = (b - mean)/sqrt(var) × beta + gamma. Here w, b, w', b', mean, var, beta, and gamma are the parameters used by the model; their standard meanings can be found in the prior art and are not detailed in this application.
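The fusion can be verified numerically with a small sketch. It follows the patent's notation, where beta is the BN scale and gamma the shift (the reverse of the usual convention), and treats the 1x1 convolution as a channel matrix multiply:

```python
import numpy as np

rng = np.random.default_rng(0)
C1, C2 = 8, 16
x = rng.normal(size=(4, 4, C1))  # feature data M x N x C1

# Trained, now-constant parameters of one "Conv 1X1 + BN" pair.
w = rng.normal(size=(C1, C2))                 # f0: x * w
b = rng.normal(size=C2)                       # f1: + b
mean = rng.normal(size=C2)                    # BN running mean
var = rng.uniform(0.5, 2.0, size=C2)          # BN running variance
beta = rng.normal(size=C2)                    # BN scale (patent notation)
gamma = rng.normal(size=C2)                   # BN shift (patent notation)

# Original four-step path: f0 conv, f1 bias, f2 normalize-and-scale, f3 shift.
f1 = x @ w + b
y_orig = (f1 - mean) / np.sqrt(var) * beta + gamma

# Fused two-step path: w' = w/sqrt(var)*beta, b' = (b - mean)/sqrt(var)*beta + gamma.
w_fused = w * (beta / np.sqrt(var))
b_fused = (b - mean) / np.sqrt(var) * beta + gamma
y_fused = x @ w_fused + b_fused

# Both paths are linear in x, so the outputs agree exactly (up to rounding).
assert np.allclose(y_orig, y_fused)
```

Because the two paths are equivalent, the fused model drops the BN parameters entirely at inference time, which is the compression the embodiment describes.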
The compression of the other modules in the original house corner detection model shown in fig. 6 is similar to that of the mobile inverted bottleneck convolution submodule above and is not repeated here.
Through the above method, the trained original house corner detection model can be compressed, and the compressed house corner detection model sent to and deployed on the terminal. When the terminal needs to generate a house layout at the user's request, it can call the compressed house corner detection model locally, and construct the house layout using the corner detection results, including the corner positions, output by the model.
According to this embodiment of the application, the original house corner detection model is trained and compressed by the server, and the compressed model is deployed on the terminal, which can directly call the local model to perform corner detection on house pictures captured in real time so as to construct a house layout. Since corner detection is performed locally on the terminal, compared with detection methods in the prior art, detection is more efficient, real-time performance is better, and user experience is improved.
Corresponding to the method for detecting house corner points provided in the foregoing embodiments, fig. 9 is a structural block diagram of an apparatus for generating a house layout according to an embodiment of the present application. For convenience of explanation, only the portions related to the embodiments of the present application are shown. Referring to fig. 9, the apparatus for generating the house layout is disposed at the terminal and includes:
an image acquisition module 910, configured to acquire a house image;
the image detection module 920 is configured to perform house corner detection processing on the house image by using a house corner detection model pre-deployed in the local terminal to obtain a house corner position; the house corner detection model is obtained by performing model compression based on model parameters on a trained original house corner detection model by a server;
and the house type generation module 930 is configured to determine a house outer contour according to the house corner point position, and construct a house type graph by using the house outer contour.
In an optional embodiment, the house corner detection model includes a compression coding module, and the compression coding module includes at least one compressed mobile inverted bottleneck convolution submodule; the compressed mobile inverted bottleneck convolution submodule is obtained by the server compressing the mobile inverted bottleneck convolution submodule in the trained original house corner detection model; the compression of the mobile inverted bottleneck convolution submodule includes: performing parameter fusion on the convolution operations and batch normalization operations performed in pairs in the mobile inverted bottleneck convolution submodule.
In an optional embodiment, the image acquisition module 910 is specifically configured to capture a live-action image of a house; perform pixel image compression on the live-action image to obtain a pixel-compressed image of the live-action image; and normalize the pixel values of the pixel-compressed image on the RGB channels to obtain the house image.
In an optional embodiment, the apparatus for generating a house layout further includes: a calibration module;
the calibration module is used for carrying out horizontal calibration on the terminal, determining a reference horizontal plane of a house and establishing a space coordinate system of the house;
the house type generating module 930 is specifically configured to map the house corner point position to a spatial coordinate system of the house, so as to obtain a spatial coordinate size of the house outer contour; and constructing the house floor plan according to the space coordinate size of the house outline.
In an optional embodiment, the image detection module 920 is further configured to perform door and window detection processing on the acquired house image to obtain a door and window position; mapping the door and window positions to a space coordinate system of the house to obtain the space coordinate size of the door and window of the house;
and the house type generation module is also used for marking the house doors and windows on the house type graph according to the space coordinate size of the house doors and windows.
Corresponding to the method for generating the house corner point detection model provided in the foregoing embodiment, fig. 10 is a block diagram of a device for generating the house corner point detection model provided in the embodiment of the present application. For convenience of explanation, only portions related to the embodiments of the present application are shown. Referring to fig. 10, the apparatus for generating a house corner point detection model is provided in a server, and includes:
the training module 1010 is used for acquiring an original house corner detection model to be trained and a training image set; training the original house corner point detection model to be trained by using the training image set to obtain a trained original house corner point detection model;
and the compression module 1020 is configured to perform model compression based on model parameters on the trained original house corner detection model, send and deploy the house corner detection model obtained through compression to a terminal, where the house corner detection model is used to perform house corner detection processing on a house image acquired by the terminal to construct a house layout.
In an optional embodiment, the original house corner detection model includes an encoding module for performing feature encoding on house corner features in an image; the encoding module includes at least one mobile inverted bottleneck convolution submodule.
In an optional embodiment, the compression module 1020 is specifically configured to determine convolution operations and batch normalization operations performed in pairs in the trained original house corner detection model, determine a processing parameter pair corresponding to each pair of convolution operation and batch normalization operation, and perform parameter fusion processing on the processing parameter pair to obtain a fusion parameter corresponding to the processing parameter pair; and generating a compressed house corner detection model according to the fusion parameters.
In an alternative embodiment, the training module 1010 is specifically configured to determine a loss function used for training, where the loss function is a dice loss function; and training the original house corner detection model to be trained by utilizing the loss function in a multi-output joint training mode to obtain the trained original house corner detection model.
In an optional embodiment, the training module 1010 is specifically configured to obtain a training image of a house; determining a marking position obtained by marking the house corner points in the training image; determining the position of the house corner point in the training image according to the marked position; and carrying out pixel value setting processing on the training image according to the house corner position to obtain a training true value image corresponding to the training image.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device provided in the present application, and as shown in fig. 11, an embodiment of the present application provides an electronic device, a memory of the electronic device may be configured to store at least one program instruction, and a processor is configured to execute the at least one program instruction, so as to implement the technical solution of the foregoing method embodiment. The implementation principle and technical effect are similar to those of the embodiments related to the method, and are not described herein again.
The embodiment of the application provides a chip. The chip comprises a processor for calling a computer program in a memory to execute the technical solution in the above embodiments. The principle and technical effects are similar to those of the related embodiments, and are not described herein again.
The embodiment of the present application provides a computer program product, which, when the computer program product runs on an electronic device, enables the electronic device to execute the technical solutions in the above embodiments. The principle and technical effects are similar to those of the related embodiments, and are not described herein again.
The embodiment of the present application provides a computer-readable storage medium, on which program instructions are stored, and when the program instructions are executed by an electronic device, the electronic device is enabled to execute the technical solutions of the above embodiments. The principle and technical effects are similar to those of the related embodiments, and are not described herein again.
The above embodiments are provided to explain the purpose, technical solutions and advantages of the present application in further detail, and it should be understood that the above embodiments are merely illustrative of the present application and are not intended to limit the scope of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present application should be included in the scope of the present application.

Claims (12)

1. A method for generating a floor plan, wherein the method is applied to a terminal;
the method comprises:
collecting a house image;
performing house corner detection on the house image using a house corner detection model deployed in advance on the terminal, to obtain house corner positions, wherein the house corner detection model is obtained by a server performing model compression, based on model parameters, on a trained original house corner detection model;
determining an outer contour of the house according to the house corner positions; and
constructing the floor plan using the house outer contour.
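The step of determining the outer contour from the detected corner positions can be sketched with a simple heuristic: order the corners counter-clockwise around their centroid to form a closed polygon. This assumes a roughly convex footprint and is an illustration, not the patent's prescribed method:

```python
import numpy as np

def corners_to_outline(corners):
    """Order detected (x, y) corner points counter-clockwise around their
    centroid to form a closed outer contour. Simple heuristic that assumes
    a roughly convex room footprint (an assumption, not the patent's method)."""
    pts = np.asarray(corners, dtype=float)
    center = pts.mean(axis=0)
    # angle of each corner relative to the centroid
    angles = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    return pts[np.argsort(angles)]
```

Once ordered, the polygon can be drawn directly as the floor plan's outer wall; for non-convex rooms a more elaborate contour-tracing step would be needed.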
2. The method according to claim 1, wherein the house corner detection model comprises a compressed encoding module, and the compressed encoding module comprises at least one compressed mobile inverted bottleneck convolution (MBConv) submodule;
the compressed MBConv submodule is obtained by the server compressing an MBConv submodule in the trained original house corner detection model;
wherein compressing the MBConv submodule comprises: performing parameter fusion on the convolution operations and batch normalization operations that occur in pairs in the MBConv submodule.
3. The method according to claim 1, wherein collecting the house image comprises:
capturing a live-action image of the house;
performing pixel compression on the live-action image to obtain a pixel-compressed image; and
normalizing the pixel values of the pixel-compressed image over the RGB channels to obtain the house image.
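A minimal sketch of the preprocessing in claim 3 — resizing (pixel compression) followed by per-channel RGB normalization. The target size and the ImageNet-style normalization statistics are assumptions; the patent does not specify them:

```python
import numpy as np

def preprocess_house_image(image, target_size=(256, 256),
                           mean=(0.485, 0.456, 0.406),
                           std=(0.229, 0.224, 0.225)):
    """Compress a captured (H, W, 3) uint8 photo to the model input size
    and normalize each RGB channel. Size and statistics are assumptions."""
    h, w, _ = image.shape
    # nearest-neighbour resize via index sampling (a stand-in for a
    # library call such as cv2.resize)
    rows = np.arange(target_size[0]) * h // target_size[0]
    cols = np.arange(target_size[1]) * w // target_size[1]
    small = image[rows][:, cols]
    # scale to [0, 1], then normalize per RGB channel
    x = small.astype(np.float32) / 255.0
    return (x - np.asarray(mean)) / np.asarray(std)
```

In practice a proper interpolating resize would be used; the index-sampling version keeps the sketch dependency-free.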
4. The method according to any one of claims 1 to 3, further comprising:
horizontally calibrating the terminal, determining a reference horizontal plane of the house, and establishing a spatial coordinate system of the house;
wherein determining the outer contour of the house according to the house corner positions comprises:
mapping the house corner positions into the spatial coordinate system of the house to obtain the spatial coordinates and dimensions of the house outer contour;
and constructing the floor plan using the house outer contour comprises:
constructing the floor plan according to the spatial coordinates and dimensions of the house outer contour.
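One way to realize the mapping in claim 4 is to intersect each corner's back-projected viewing ray with the reference horizontal plane. The pinhole model, the intrinsics `K`, and the camera height below are all illustrative assumptions, not details from the patent:

```python
import numpy as np

def corner_to_floor(pixel, K, camera_height):
    """Project an image corner (u, v) onto the reference horizontal plane.

    Assumes a pinhole camera with intrinsics K, a camera frame with x right,
    y down, z forward, and the floor plane at y = camera_height below the
    camera. The pixel must lie below the horizon (ray y-component > 0)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected viewing ray
    t = camera_height / ray[1]                       # scale factor to reach the floor
    return ray * t                                   # 3-D point on the floor plane
```

With all corners projected onto the plane, the spatial dimensions of the outer contour follow directly from distances between the projected points.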
5. A method for generating a house corner detection model, wherein the method is applied to a server;
the method comprises:
acquiring an original house corner detection model to be trained and a training image set;
training the original house corner detection model with the training image set to obtain a trained original house corner detection model; and
performing model compression, based on model parameters, on the trained original house corner detection model, and sending the compressed house corner detection model to a terminal for deployment, wherein the house corner detection model is used to perform house corner detection on a house image collected by the terminal so as to construct a floor plan.
6. The method according to claim 5, wherein the original house corner detection model comprises an encoding module for feature-encoding the house corner features in an image, and the encoding module comprises at least one mobile inverted bottleneck convolution (MBConv) submodule.
7. The method according to claim 5, wherein performing model compression based on model parameters on the trained original house corner detection model comprises:
determining the convolution operations and batch normalization operations that occur in pairs in the trained original house corner detection model, determining the processing-parameter pair corresponding to each pair of convolution and batch normalization operations, and performing parameter fusion on each processing-parameter pair to obtain its corresponding fused parameters; and
generating the compressed house corner detection model from the fused parameters.
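The parameter fusion of paired convolution and batch-normalization operations in claim 7 is commonly known as BN folding: since both operations are affine, the BN scale and shift can be absorbed into the convolution's weights and bias, removing the BN layer at inference time. A numpy sketch (the NCHW weight layout and epsilon value are assumptions):

```python
import numpy as np

def fuse_conv_bn(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold a batch-normalization layer into the preceding convolution.

    conv_w: (out_ch, in_ch, kh, kw) convolution weights
    conv_b: (out_ch,) convolution bias
    gamma, beta, mean, var: (out_ch,) BN scale, shift, running mean/variance
    Returns fused (weights, bias) so one convolution computes conv + BN."""
    scale = gamma / np.sqrt(var + eps)              # per-channel BN rescaling
    fused_w = conv_w * scale[:, None, None, None]   # scale each output filter
    fused_b = (conv_b - mean) * scale + beta        # fold mean/shift into bias
    return fused_w, fused_b
```

The fused model computes exactly the same function as the conv + BN pair but with fewer parameters and one fewer operation per block, which matches the claim's goal of compressing the model for on-terminal deployment.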
8. The method according to claim 5, wherein training the original house corner detection model to be trained with the training image set to obtain the trained original house corner detection model comprises:
determining a loss function used for training, wherein the loss function is a Dice loss function; and
training the original house corner detection model to be trained with the loss function in a multi-output joint training manner to obtain the trained original house corner detection model.
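The Dice loss named in claim 8 measures overlap between a predicted corner heatmap and the ground truth, which copes well with the extreme class imbalance of sparse corner pixels. A minimal sketch (the smoothing epsilon is an assumption):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss for corner-heatmap prediction.

    pred, target: arrays of identical shape with values in [0, 1].
    Returns 0 for a perfect overlap and approaches 1 for no overlap."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * inter + eps) / (union + eps)
```

In the multi-output joint training the claim describes, a loss of this form would typically be computed per output head and summed.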
9. The method according to any one of claims 5 to 8, wherein acquiring the training image set comprises:
acquiring a training image of a house;
determining the annotated positions obtained by annotating the house corners in the training image;
determining the house corner positions in the training image according to the annotated positions; and
setting pixel values in the training image according to the house corner positions to obtain a training ground-truth image corresponding to the training image.
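The pixel-value setting in claim 9 can be sketched as painting a small disc of ones around each annotated corner on an otherwise zero image. The disc radius is an assumption, and many implementations use a Gaussian bump instead of a hard disc:

```python
import numpy as np

def make_corner_truth(shape, corners, radius=2):
    """Build a training ground-truth image: pixels within `radius` of each
    annotated (row, col) corner are set to 1, all others to 0.
    The radius (and the hard 0/1 disc) are illustrative assumptions."""
    truth = np.zeros(shape, dtype=np.float32)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    for (cy, cx) in corners:
        truth[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = 1.0
    return truth
```

The resulting image is the target the Dice loss of claim 8 would be computed against.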
10. An electronic device, comprising:
at least one processor; and
a memory;
the memory stores computer-executable instructions;
wherein the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the method according to any one of claims 1 to 4 or claims 5 to 9.
11. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method according to any one of claims 1 to 4 or claims 5 to 9.
12. A computer program product comprising computer instructions, wherein the computer instructions, when executed by a processor, implement the method according to any one of claims 1 to 4 or claims 5 to 9.
CN202210101097.XA 2022-01-27 2022-01-27 Method and device for generating house type graph Pending CN114511674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210101097.XA CN114511674A (en) 2022-01-27 2022-01-27 Method and device for generating house type graph

Publications (1)

Publication Number Publication Date
CN114511674A true CN114511674A (en) 2022-05-17

Family

ID=81549939

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination