CN117437164A - Three-dimensional model texture enhancement method and device, electronic equipment and medium

Three-dimensional model texture enhancement method and device, electronic equipment and medium

Info

Publication number
CN117437164A
Authority
CN
China
Prior art keywords
vegetation coverage
coverage area
dimensional model
enhancement
vegetation
Prior art date
Legal status
Pending
Application number
CN202311491092.3A
Other languages
Chinese (zh)
Inventor
刘贝宁
刘文轩
韩祥磊
Current Assignee
Wuhan Dashi Intelligence Technology Co., Ltd.
Original Assignee
Wuhan Dashi Intelligence Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Wuhan Dashi Intelligence Technology Co., Ltd.
Priority to CN202311491092.3A
Publication of CN117437164A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional model texture enhancement method and device, electronic equipment and a medium. The three-dimensional model texture enhancement method comprises the following steps: performing ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area; and performing channel-specific color enhancement on the vegetation coverage area and color optimization on the non-vegetation coverage area to obtain a texture-enhanced three-dimensional model. The invention improves the accuracy and efficiency of improving the visual effect of the three-dimensional model, enhances the spatial perception of the three-dimensional model, and reduces the workload of manual color enhancement of the model.

Description

Three-dimensional model texture enhancement method and device, electronic equipment and medium
Technical Field
The invention relates to the technical field of live-action three-dimensional model processing and optimization, and in particular to a three-dimensional model texture enhancement method and device, electronic equipment and a medium.
Background
With the rapid development of computer graphics, computer vision, machine learning and related fields, the technology for generating live-action three-dimensional models has improved greatly. Large amounts of real-scene two-dimensional data can be acquired rapidly through modern data acquisition technologies such as laser scanning, photogrammetry and unmanned aerial vehicle aerial photography. With the advent of automated and semi-automated model generation tools, ground-object geometries and textures can be reconstructed from real-world data without excessive human intervention, generating high-quality realistic three-dimensional models. However, environmental influences such as illumination conditions and material properties, improper settings of the photographing device, inaccurate exposure, or the limited performance of the acquisition device may cause the texture colors to suffer from low contrast and a dark appearance.
Texture color enhancement of live-action three-dimensional models plays an important role in the field of live-action three-dimensional construction: it can improve the visual effect of the model, enhance spatial perception, and increase interactivity and immersion for the user. Image segmentation refers to the process of dividing an image into regions with unique attributes or features. Common image segmentation methods include threshold segmentation, edge detection, region growing and graph-based segmentation. Based on different principles and criteria, these methods divide an image into different regions by measuring and analyzing the similarity between pixels or regions. Image color enhancement methods can generally be classified into spatial domain methods, frequency domain methods and mixed domain methods. Spatial domain methods operate at the pixel level of the original image and include histogram equalization, linear filtering and the like; frequency domain methods generally process the image after a Fourier transform or another frequency-domain transform; mixed domain methods combine the advantages of the spatial and frequency domain methods to enhance the image, and include bilateral filtering, bidirectional random walk and the like. Both frequency domain and mixed domain methods typically require local or global statistics of the image for the enhancement processing. Because texture images are stitched from scattered, irregularly shaped texture maps so as to minimize storage space, they are usually unordered, i.e. they lack the continuity of ordinary image content and cannot provide context information, so processing texture images requires operating directly on pixel values. The colors and textures of a model usually need to be edited manually to achieve the desired effect, which is time-consuming and labor-intensive, and the result is difficult to guarantee.
Disclosure of Invention
The invention aims to overcome the above technical defects by providing a three-dimensional model texture enhancement method and device, electronic equipment and a medium, which solve the technical problems of low efficiency and poor results caused by manually adjusting the colors of a three-dimensional model in model editing software in the prior art.
In order to solve the technical problems, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a three-dimensional model texture enhancement method, including:
performing ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area;
and performing channel-specific color enhancement on the vegetation coverage area and performing color optimization on the non-vegetation coverage area to obtain a texture-enhanced three-dimensional model.
In some embodiments, performing ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area includes:
determining an over-green gray image of the three-dimensional model to be enhanced based on an over-green index;
and performing gray-level segmentation on the over-green gray image using a preset maximum inter-class variance method to obtain a vegetation coverage area and a non-vegetation coverage area.
In some embodiments, performing gray-level segmentation on the over-green gray image using a preset maximum inter-class variance method to obtain a vegetation coverage area and a non-vegetation coverage area includes:
traversing all preset thresholds and determining the preset threshold corresponding to the smallest weighted intra-class variance as the optimal threshold;
and performing binarization segmentation on the over-green gray image using the optimal threshold, the area corresponding to pixels whose gray value is greater than the optimal threshold in the over-green gray image being determined as the vegetation coverage area, and the area corresponding to pixels whose gray value is smaller than the optimal threshold in the over-green gray image being determined as the non-vegetation coverage area.
In some embodiments, performing channel-specific color enhancement on the vegetation coverage area comprises:
and performing color enhancement on the vegetation coverage area using a preset gamma transformation to obtain an enhanced vegetation area map.
In some embodiments, performing color enhancement on the vegetation coverage area using a preset gamma transformation to obtain an enhanced vegetation area map includes:
and transforming the gray image of the vegetation coverage area based on a gamma transformation function to change the brightness distribution of the green channel of the vegetation coverage area of the image, so as to obtain an enhanced vegetation area map.
In some embodiments, performing color optimization on the non-vegetation coverage area comprises:
and performing color optimization on the non-vegetation coverage area using a preset improved gamma transformation to obtain an optimized non-vegetation coverage area map.
In some embodiments, the improved gamma transformation may be expressed by the following formula:
wherein O is the output gray value, γ is the gamma parameter, and I is the input gray value.
In a second aspect, the present invention further provides a three-dimensional model texture enhancement device, including:
the segmentation module is used for carrying out ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area;
the texture enhancement module is used for performing channel-specific color enhancement on the vegetation coverage area and performing color optimization on the non-vegetation coverage area to obtain a texture-enhanced three-dimensional model.
In a third aspect, the present invention also provides an electronic device, including: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps in the three-dimensional model texture enhancement method as described above.
In a fourth aspect, the present invention also provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the three-dimensional model texture enhancement method as described above.
Compared with the prior art, the three-dimensional model texture enhancement method, device, electronic equipment and medium provided by the invention first perform ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area, fully taking into account the large color difference between the background and the real scene among ground objects. After the image is segmented, channel-specific color enhancement is applied to the vegetation coverage area and color optimization is applied to the non-vegetation coverage area to obtain a texture-enhanced three-dimensional model. This further improves the accuracy and efficiency of improving the visual effect of the three-dimensional model, enhances the spatial perception of the three-dimensional model, and reduces the workload of manual color enhancement of the model.
Drawings
FIG. 1 is a flow chart of an embodiment of a three-dimensional model texture enhancement method provided by the present invention;
FIG. 2 is a schematic diagram of an embodiment of step S101 in the three-dimensional model texture enhancement method according to the present invention;
FIG. 3 is a schematic diagram of an embodiment of a texture image vegetation information extraction result in the three-dimensional model texture enhancement method according to the present invention;
FIG. 4 is a schematic diagram of an embodiment of gamma conversion of gray values under different parameters in the three-dimensional model texture enhancement method according to the present invention;
FIG. 5 is a schematic diagram of a three-dimensional model texture enhancement apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an operating environment of an embodiment of an electronic device provided by the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Image segmentation refers to the process of dividing an image into regions with unique attributes or features. Common image segmentation methods include threshold segmentation, edge detection, region growing and graph-based segmentation; based on different principles and criteria, these methods divide an image into different regions by measuring and analyzing the similarity between pixels or regions. Image color enhancement methods can generally be classified into spatial domain methods, frequency domain methods and mixed domain methods. Spatial domain methods operate at the pixel level of the original image and include histogram equalization, linear filtering and the like; frequency domain methods generally process the image after a Fourier transform or another frequency-domain transform; mixed domain methods combine the advantages of the spatial and frequency domain methods to enhance the image, and include bilateral filtering, bidirectional random walk and the like. Both frequency domain and mixed domain methods typically require local or global statistics of the image for the enhancement processing. Because texture images are stitched from scattered, irregularly shaped texture maps so as to minimize storage space, they are usually unordered, i.e. they lack the continuity of ordinary image content and cannot provide context information, so processing texture images requires operating directly on pixel values. The present application focuses on the problems of low contrast and a dark appearance in the texture images of live-action three-dimensional models, and proposes a spatial-domain processing method that handles the different ground object categories of the texture atlas. There has been relatively little research on directly applying color enhancement to texture images, so this direction has broad research space and application value.
An embodiment of the present invention provides a three-dimensional model texture enhancement method, please refer to fig. 1, including:
s101, performing ground object category segmentation on a three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area;
s102, performing specific channel color enhancement on the vegetation coverage area, and performing color optimization on the non-vegetation coverage area to obtain a texture enhanced three-dimensional model.
In this embodiment, the three-dimensional model to be enhanced is first subjected to ground object category segmentation to obtain a vegetation coverage area and a non-vegetation coverage area, which fully takes into account the large color difference between the background and the real scene among ground objects. The three-dimensional model is then enhanced by segmenting the image, performing channel-specific color enhancement on the vegetation coverage area and performing color optimization on the non-vegetation coverage area. This further improves the accuracy and efficiency of improving the visual effect of the three-dimensional model, enhances the spatial perception of the three-dimensional model, and reduces the workload of relying on manual processing for model color enhancement.
It should be noted that the ground object category refers to different types of ground objects or ground features in geographic space and is used to describe different objects or scenes on the ground surface. Ground object categories are commonly used in fields such as geographic information systems (GIS), remote sensing image analysis and map making. A ground object category can be a natural geographic element such as a water body, forest, grassland, mountain or river, or an artificial construction or facility such as a building, road, bridge, farmland or city; different ground object categories have different characteristics and properties and can be classified by a specific classification system. In this embodiment, in order to better repair the low contrast and darkness of the image, the image is divided into a green-vegetation coverage area and a non-green-vegetation coverage area according to the characteristics of the ground object image, so that color enhancement can be better targeted and the efficiency and effect of image color enhancement are improved.
Further, image segmentation refers to the process of dividing an image into different areas with unique attributes. The embodiment of the invention performs threshold segmentation on the image based on the over-green index in texture space and separates the vegetation coverage area from the non-vegetation coverage area by binarization, so that color enhancement can be applied separately to different types of ground objects. In image color enhancement, gamma transformation, a pixel-level method, is a common color enhancement technique that improves the appearance and visual effect of an image by adjusting its brightness and contrast; specifically, the brightness distribution of the image is changed through a nonlinear gray-level transformation function so that texture details become more vivid. The embodiment of the invention improves the gamma transformation on the basis of image segmentation and realizes texture image color enhancement that takes the ground object category into account.
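For illustration, the following is a minimal end-to-end sketch of this idea on a single texture image, written in Python with NumPy and OpenCV. The gamma values, the min-max rescaling of the over-green index, and the simplified handling of the non-vegetation area (a plain gamma mapping on all channels instead of the saturation and brightness optimization described later) are illustrative assumptions, not values or steps fixed by the patent.

```python
import cv2
import numpy as np

def enhance_texture(texture_rgb: np.ndarray,
                    veg_gamma: float = 0.8,
                    non_veg_gamma: float = 0.9) -> np.ndarray:
    """Sketch: segment a texture image by the over-green index, then enhance each region."""
    rgb = texture_rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-6                       # R + G + B per pixel
    exg = (2.0 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]) / total
    gray = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu threshold splits vegetation (bright in ExG) from non-vegetation
    t, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    veg = gray > t
    out = rgb.copy()
    # vegetation: gamma < 1 on the green channel brightens and softens the greens
    out[..., 1] = np.where(veg, 255.0 * (out[..., 1] / 255.0) ** veg_gamma, out[..., 1])
    # non-vegetation: simplified here as a gamma mapping on all three channels
    for c in range(3):
        out[..., c] = np.where(~veg, 255.0 * (out[..., c] / 255.0) ** non_veg_gamma, out[..., c])
    return np.clip(out, 0, 255).astype(np.uint8)
```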
In a specific embodiment, performing ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area, referring to fig. 2, includes:
s201, determining an overgreen gray image of the three-dimensional model to be enhanced based on an overgreen index;
s202, carrying out gray segmentation on the over-green gray image by adopting a preset maximum inter-class method difference to obtain a vegetation coverage area and a non-vegetation coverage area.
In this embodiment, unlike ground object classification and segmentation of hyperspectral or panchromatic images, the visible-light camera used in oblique photogrammetry can only acquire images in the three channels of red, green and blue, so indexes commonly used for remote sensing image classification, such as the normalized difference vegetation index (NDVI), cannot be obtained. Therefore, an index is constructed from the three visible-light band channels and used to divide the model texture image into a vegetation coverage area and a non-vegetation coverage area. In the oblique photography model, the texture space of the image consists of color images in the three color channels of red, green and blue, and the vegetation coverage area and the non-vegetation coverage area have different characteristics in the values of the three color components. By combining the color components, each pixel in the image can be transformed to enhance the contrast between vegetation and non-vegetation coverage so that the two categories can be better distinguished. In this embodiment, a visible-light band index, the over-green index (excess green index, ExG), is used:
ExG = 2g' - r' - b'
where r', g' and b' are the normalized values of the red, green and blue channel gray values in the image RGB color space, i.e. r' = R/(R+G+B), g' = G/(R+G+B), b' = B/(R+G+B), and R, G and B are the pixel values of the red, green and blue bands, respectively.
After the gray image of the computed over-green index is obtained, gray-level segmentation is performed using the maximum inter-class variance method. The maximum inter-class variance algorithm (Otsu's method) is a global binarization algorithm that automatically determines a threshold by maximizing the inter-class variance between the target and the background; equivalently, the algorithm searches for the threshold t that minimizes the weighted intra-class variance given by equation (2).
σw²(t) = q1(t)·σ1²(t) + q2(t)·σ2²(t)    (2)
where σw²(t) is the weighted intra-class variance of the gray image, q1(t) and σ1²(t) are the proportion of foreground (target) pixels in the total number of pixels and their gray variance, and q2(t) and σ2²(t) are the proportion of background pixels in the total number of pixels and their gray variance, respectively. The algorithm traverses all possible thresholds (0 to 255), computes the corresponding intra-class variance, and takes the threshold t that minimizes σw²(t) as the optimal threshold. The image is then binarized with this threshold: pixels whose gray value is greater than the threshold are set as the foreground, and pixels whose gray value is less than or equal to the threshold are set as the background. The segmentation result is the binarized image shown in fig. 3.
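The threshold search described above can be sketched directly as a traversal of all candidate thresholds that keeps the one minimizing the weighted intra-class variance; the function and variable names below are illustrative, not taken from the patent.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the threshold t in 0..255 that minimizes the weighted intra-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, np.inf
    for t in range(256):
        q1, q2 = prob[:t + 1].sum(), prob[t + 1:].sum()    # class proportions
        if q1 == 0 or q2 == 0:
            continue
        mu1 = (levels[:t + 1] * prob[:t + 1]).sum() / q1
        mu2 = (levels[t + 1:] * prob[t + 1:]).sum() / q2
        var1 = ((levels[:t + 1] - mu1) ** 2 * prob[:t + 1]).sum() / q1
        var2 = ((levels[t + 1:] - mu2) ** 2 * prob[t + 1:]).sum() / q2
        sigma_w = q1 * var1 + q2 * var2                    # weighted intra-class variance
        if sigma_w < best_var:
            best_var, best_t = sigma_w, t
    return best_t

def vegetation_mask(exg_gray: np.ndarray) -> np.ndarray:
    """Binarize the ExG gray image: True = vegetation coverage, False = non-vegetation."""
    t = otsu_threshold(exg_gray)
    return exg_gray > t
```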
In some embodiments, performing channel-specific color enhancement on the vegetation coverage area comprises:
and performing color enhancement on the vegetation coverage area using a preset gamma transformation to obtain an enhanced vegetation area map.
The color of the vegetation is enhanced by adjusting the pixel gray values of the green channel within the vegetation coverage area, typically by increasing the gain of the green channel, for example by adjusting a color gain parameter or directly multiplying by a gain factor, so that the green of the vegetation coverage area is emphasized. To prevent gray-value overflow or image distortion during color enhancement, this embodiment performs the enhancement with a gamma transformation of the gray values; the gamma transformation function is shown in fig. 4. The brightness distribution of the green channel of the vegetation coverage area is changed through a nonlinear gray-level transformation function: a lower gamma value reduces the overall contrast and raises the green brightness, making the vegetation texture more vivid and soft.
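A minimal sketch of this green-channel gamma enhancement restricted to the vegetation mask follows; the gamma value 0.8 is only an illustrative choice of a value below 1 and is not specified in the text.

```python
import numpy as np

def enhance_vegetation_green(texture_rgb: np.ndarray,
                             veg_mask: np.ndarray,
                             gamma: float = 0.8) -> np.ndarray:
    """Gamma-adjust the green channel inside the vegetation mask (gamma < 1 brightens)."""
    out = texture_rgb.astype(np.float64)
    g = out[..., 1] / 255.0
    g_enhanced = np.power(g, gamma) * 255.0        # nonlinear gray-level mapping
    out[..., 1] = np.where(veg_mask, g_enhanced, out[..., 1])
    return np.clip(out, 0, 255).astype(np.uint8)
```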
In some embodiments, performing color optimization on the non-vegetation coverage area comprises:
and performing color optimization on the non-vegetation coverage area using a preset improved gamma transformation to obtain an optimized non-vegetation coverage area map.
It should be noted that the non-vegetation coverage area is color-enhanced by processing the pixels of this area in texture space, including enhancement of contrast, saturation and brightness.
Contrast stretching increases the differences between pixels through a linear or nonlinear transformation. Aiming at the low contrast and darkness of the original texture image, the gamma transformation is improved: for an input gray value I, the mapped output value O of the nonlinear transformation is calculated by formula (3), specifically as follows:
where γ is the gamma parameter and I is the input gray value.
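As the expression of formula (3) is not reproduced in the text above, the sketch below uses the standard gamma mapping O = 255·(I/255)^γ as an assumed stand-in to show how such a pixel-level adjustment can be applied with a lookup table; the patent's improved formula may differ from this.

```python
import numpy as np

def gamma_map(gray: np.ndarray, gamma: float) -> np.ndarray:
    """Apply O = 255 * (I / 255) ** gamma per pixel via a lookup table.

    Standard gamma mapping, used here only as a stand-in for the improved
    formula (3), whose exact expression is not given in this text.
    `gray` is expected to be a uint8 array (e.g. one channel of the texture image).
    """
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[gray]

# Example: stretch a dark, low-contrast non-vegetation channel with gamma < 1.
# channel_out = gamma_map(channel_in, gamma=0.7)
```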
Enhancing saturation and brightness makes building areas more vivid in color. Using color space conversion, the image is converted from the RGB color space to the HSV (hue, saturation, value) color space; the pixel values of the saturation channel are then increased by an interval function (formula (4)) to enhance saturation, and the pixel values of the value (brightness) channel are modified by a gamma transformation to enhance brightness. After the enhancement is completed, the image is converted back to the RGB color space.
The formula (4) is specifically as follows:
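As formula (4) itself is not reproduced in the text above, the sketch below substitutes a simple clipped linear gain on the saturation channel for the interval function and uses a gamma mapping on the value (brightness) channel; the gain and gamma values are illustrative assumptions, and OpenCV's HSV conversion is assumed for the color space round trip.

```python
import cv2
import numpy as np

def optimize_non_vegetation(texture_rgb: np.ndarray,
                            non_veg_mask: np.ndarray,
                            sat_gain: float = 1.2,
                            v_gamma: float = 0.9) -> np.ndarray:
    """Boost saturation and gamma-adjust brightness of non-vegetation pixels in HSV space.

    sat_gain and v_gamma are illustrative values; a clipped linear gain stands in
    for the interval function of formula (4), which is not given in this text.
    """
    hsv = cv2.cvtColor(texture_rgb, cv2.COLOR_RGB2HSV).astype(np.float64)
    s, v = hsv[..., 1], hsv[..., 2]
    s_new = np.clip(s * sat_gain, 0, 255)                  # saturation enhancement
    v_new = 255.0 * (v / 255.0) ** v_gamma                 # brightness via gamma mapping
    hsv[..., 1] = np.where(non_veg_mask, s_new, s)
    hsv[..., 2] = np.where(non_veg_mask, v_new, v)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```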
based on the above three-dimensional model texture enhancement method, the embodiment of the present invention further provides a three-dimensional model texture enhancement device 500, referring to fig. 5, where the three-dimensional model texture enhancement device 500 includes a segmentation module 510 and a texture enhancement module 520.
The segmentation module 510 is configured to perform ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area;
the texture enhancement module 520 is configured to perform color enhancement on the vegetation coverage in a specific channel, and perform color optimization on the non-vegetation coverage to obtain a texture enhanced three-dimensional model.
As shown in fig. 6, based on the three-dimensional model texture enhancement method, the invention further provides an electronic device, which can be a mobile terminal, a desktop computer, a notebook computer, a palm computer, a server and other computing devices. The electronic device includes a processor 610, a memory 620, and a display 630. Fig. 6 shows only some of the components of the electronic device, but it should be understood that not all of the illustrated components are required to be implemented and that more or fewer components may alternatively be implemented.
The memory 620 may, in some embodiments, be an internal storage unit of the electronic device, such as a hard disk or internal memory of the electronic device. In other embodiments, the memory 620 may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card. Further, the memory 620 may include both an internal storage unit and an external storage device. The memory 620 is used to store application software installed on the electronic device and various kinds of data, such as the program code of applications installed on the electronic device. The memory 620 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 620 stores a three-dimensional model texture enhancement program 640, and the three-dimensional model texture enhancement program 640 can be executed by the processor 610 to implement the three-dimensional model texture enhancement method of the embodiments of the present application.
The processor 610 may, in some embodiments, be a central processing unit (CPU), a microprocessor or another data processing chip, and is used to run the program code or process the data stored in the memory 620, for example to execute the three-dimensional model texture enhancement method.
The display 630 may, in some embodiments, be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display or the like. The display 630 is used to display information on the electronic device and to present a visual user interface. The components 610-630 of the electronic device communicate with each other over a system bus.
Of course, those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program instructing relevant hardware (e.g. a processor, a controller, etc.). The program may be stored in a computer-readable storage medium and, when executed, may include the steps of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk or the like.
The above-described embodiments of the present invention do not limit the scope of the present invention. Any other corresponding changes and modifications made in accordance with the technical idea of the present invention shall be included in the scope of the claims of the present invention.

Claims (10)

1. A method for texture enhancement of a three-dimensional model, comprising:
performing ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area;
and performing channel-specific color enhancement on the vegetation coverage area and performing color optimization on the non-vegetation coverage area to obtain a texture-enhanced three-dimensional model.
2. The method for enhancing texture of a three-dimensional model according to claim 1, wherein performing ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area comprises:
determining an over-green gray image of the three-dimensional model to be enhanced based on an over-green index;
and performing gray-level segmentation on the over-green gray image using a preset maximum inter-class variance method to obtain a vegetation coverage area and a non-vegetation coverage area.
3. The method of claim 2, wherein performing gray-level segmentation on the over-green gray image using a preset maximum inter-class variance method to obtain a vegetation coverage area and a non-vegetation coverage area comprises:
traversing all preset thresholds and determining the preset threshold corresponding to the smallest weighted intra-class variance as the optimal threshold;
and performing binarization segmentation on the over-green gray image using the optimal threshold, the area corresponding to pixels whose gray value is greater than the optimal threshold in the over-green gray image being determined as the vegetation coverage area, and the area corresponding to pixels whose gray value is smaller than the optimal threshold in the over-green gray image being determined as the non-vegetation coverage area.
4. The method of claim 3, wherein the channel-specific color enhancement of the vegetation coverage area comprises:
and performing color enhancement on the vegetation coverage area using a preset gamma transformation to obtain an enhanced vegetation area map.
5. The method of claim 4, wherein performing color enhancement on the vegetation coverage area using a preset gamma transformation to obtain an enhanced vegetation area map comprises:
and transforming the gray image of the vegetation coverage area based on a gamma transformation function to change the brightness distribution of the green channel of the vegetation coverage area of the image, so as to obtain an enhanced vegetation area map.
6. The method of claim 1, wherein performing color optimization on the non-vegetation coverage area comprises:
and performing color optimization on the non-vegetation coverage area using a preset improved gamma transformation to obtain an optimized non-vegetation coverage area map.
7. The method of claim 6, wherein the improved gamma transformation is expressed by the following formula:
wherein O is the output gray value, γ is the gamma parameter, and I is the input gray value.
8. A three-dimensional model texture enhancement device, comprising:
the segmentation module is used for carrying out ground object category segmentation on the three-dimensional model to be enhanced to obtain a vegetation coverage area and a non-vegetation coverage area;
the texture enhancement module is used for performing channel-specific color enhancement on the vegetation coverage area and performing color optimization on the non-vegetation coverage area to obtain a texture-enhanced three-dimensional model.
9. An electronic device, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps of the three-dimensional model texture enhancement method as claimed in any one of claims 1-7.
10. A computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the three-dimensional model texture enhancement method of any one of claims 1-7.
CN202311491092.3A 2023-11-08 2023-11-08 Three-dimensional model texture enhancement method and device, electronic equipment and medium Pending CN117437164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311491092.3A CN117437164A (en) 2023-11-08 2023-11-08 Three-dimensional model texture enhancement method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117437164A true CN117437164A (en) 2024-01-23

Family

ID=89553235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311491092.3A Pending CN117437164A (en) 2023-11-08 2023-11-08 Three-dimensional model texture enhancement method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117437164A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814715A (en) * 2020-07-16 2020-10-23 武汉大势智慧科技有限公司 Ground object classification method and device
CN114359305A (en) * 2021-12-31 2022-04-15 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114612387A (en) * 2022-02-16 2022-06-10 珠江水利委员会珠江水利科学研究院 Remote sensing image fusion method, system, equipment and medium based on characteristic threshold
CN114897706A (en) * 2021-09-23 2022-08-12 武汉九天高分遥感技术有限公司 Full-color multispectral image fusion green vegetation enhancement method
KR20230140832A (en) * 2022-03-30 2023-10-10 창원대학교 산학협력단 UAV and Artificial intelligence-based urban spatial information mapping Method and Apparatus

Similar Documents

Publication Publication Date Title
Huang et al. An efficient visibility enhancement algorithm for road scenes captured by intelligent transportation systems
He et al. Haze removal using the difference-structure-preservation prior
CN111798467B (en) Image segmentation method, device, equipment and storage medium
Jiang et al. Image dehazing using adaptive bi-channel priors on superpixels
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
US9633263B2 (en) Appearance modeling for object re-identification using weighted brightness transfer functions
CN109753878B (en) Imaging identification method and system under severe weather
CN108399424B (en) Point cloud classification method, intelligent terminal and storage medium
US20130342694A1 (en) Method and system for use of intrinsic images in an automotive driver-vehicle-assistance device
CN110675334A (en) Image enhancement method and device
Cui et al. Single image dehazing by latent region‐segmentation based transmission estimation and weighted L1‐norm regularisation
Srinivas et al. Remote sensing image segmentation using OTSU algorithm
Femiani et al. Shadow-based rooftop segmentation in visible band images
Fu et al. Multi-feature-based bilinear CNN for single image dehazing
Wang et al. Haze removal algorithm based on single-images with chromatic properties
CN112949617B (en) Rural road type identification method, system, terminal equipment and readable storage medium
CN114187515A (en) Image segmentation method and image segmentation device
He et al. Effective haze removal under mixed domain and retract neighborhood
CN110059704B (en) Intelligent extraction method of remote sensing information of rare earth mining area driven by visual attention model
CN116543325A (en) Unmanned aerial vehicle image-based crop artificial intelligent automatic identification method and system
CN111062341A (en) Video image area classification method, device, equipment and storage medium
CN117437164A (en) Three-dimensional model texture enhancement method and device, electronic equipment and medium
CN115631108A (en) RGBD-based image defogging method and related equipment
CN113034555B (en) Feature fine matching method based on minimum spanning tree and application
Guo et al. Single Image Dehazing Using Adaptive Sky Segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination