CN113570694A - Model point location rendering method and device, storage medium and electronic equipment

Model point location rendering method and device, storage medium and electronic equipment

Info

Publication number
CN113570694A
Authority
CN
China
Prior art keywords
point
determining
color
depth information
target point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110873568.4A
Other languages
Chinese (zh)
Inventor
朱毅
胡洋
谢独放
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beike Technology Co Ltd
Original Assignee
Beijing Fangjianghu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Fangjianghu Technology Co Ltd
Priority to CN202110873568.4A
Publication of CN113570694A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2012: Colour editing, changing, or manipulating; Use of colour codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of the disclosure disclose a rendering method and device for model point locations, a storage medium, and electronic equipment, wherein the method includes the following steps: acquiring at least two color panoramas corresponding to at least two known point locations in a three-dimensional model; determining first depth information of a first color panorama, corresponding to a first point location of the at least two known point locations, in the direction of a target point location, and determining second depth information of a second color panorama, corresponding to a second point location, in the direction of the target point location; determining third depth information of the target point location relative to the first point location in the three-dimensional model and fourth depth information of the target point location relative to the second point location in the three-dimensional model; and determining color information corresponding to the target point location according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information. This embodiment reduces the ghosting errors caused by occlusion.

Description

Model point location rendering method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to a rendering method and device of model point locations, a storage medium and an electronic device.
Background
When acquiring the colors of a three-dimensional space, a common method is to take panoramic pictures at a plurality of acquisition point locations in the three-dimensional scene. At an acquisition point location, the color information of the panorama is complete and the corresponding image information can be displayed correctly. However, since the acquisition point locations are isolated from one another, the color information of positions between acquisition point locations is never collected; therefore, when rendering the three-dimensional space, some computation is required to obtain the color information of the uncollected positions.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides a rendering method and device of model point positions, a storage medium and electronic equipment.
According to an aspect of the embodiments of the present disclosure, there is provided a rendering method of model point locations, including:
acquiring at least two color panoramas corresponding to at least two known point positions in the three-dimensional model;
determining first depth information of a first color panorama corresponding to a first point location in the at least two known point locations in a direction of a target point location, and determining second depth information of a second color panorama corresponding to a second point location in the at least two known point locations in the direction of the target point location; wherein the first point location and the second point location are adjacent points of the at least two known point locations;
determining third depth information of the target point position relative to the first point position in the three-dimensional model and fourth depth information of the target point position relative to the second point position in the three-dimensional model;
and determining color information corresponding to the target point according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information.
Optionally, the determining, according to a matching relationship between the first depth information and the third depth information, and/or a matching relationship between the second depth information and the fourth depth information, color information corresponding to the target point includes:
determining a first difference between the first depth information and the third depth information, and determining color information corresponding to the target point location in a first direction of the target point location relative to the first point location and a second direction of the target point location relative to the second point location, or in the second direction, based on a relationship between the first difference and a first set threshold; and/or
determining a second difference between the second depth information and the fourth depth information, and determining color information corresponding to the target point in the first direction and the second direction or in the first direction based on a relationship between the second difference and a second set threshold.
Optionally, the determining a first difference between the first depth information and the third depth information, and based on a relationship between the first difference and a first set threshold, determining color information corresponding to the target point in a first direction of the target point with respect to the first point and a second direction of the target point with respect to the second point, or in the second direction, includes:
determining the first difference;
in response to the first difference being greater than the first set threshold, determining color information corresponding to the target point in the second direction;
and in response to the first difference being smaller than or equal to the first set threshold, determining color information corresponding to the target point in the first direction and the second direction.
Optionally, the determining a second difference between the second depth information and the fourth depth information, and determining color information corresponding to the target point in the first direction and the second direction or in the first direction based on a relationship between the second difference and a second set threshold includes:
determining the second difference;
in response to the second difference being greater than the second set threshold, determining color information corresponding to the target point in the first direction;
and in response to the second difference being smaller than or equal to the second set threshold, determining color information corresponding to the target point in the first direction and the second direction.
Optionally, the determining the color information corresponding to the target point in the first direction and the second direction includes:
obtaining first color information from a first color panorama corresponding to the first point based on the first direction;
obtaining second color information from a second color panorama corresponding to the second point location based on the second direction;
based on the first color information and the second color information, obtaining color information corresponding to the target point by using a synthesis method;
the determining the color information corresponding to the target point in the first direction includes:
obtaining first color information from a first color panorama corresponding to the first point based on the first direction, wherein the first color information is used as color information corresponding to the target point;
the determining the color information corresponding to the target point in the second direction includes:
and obtaining second color information from a second color panorama corresponding to the second point location based on the second direction, wherein the second color information is used as color information corresponding to the target point location.
Optionally, the determining first depth information of the first color panorama corresponding to the first point of the at least two known point locations in the direction of the target point location includes:
in the three-dimensional model, a first connection line section from the first point position to the target point position is established by taking the first point position as a starting point and the target point position as an ending point;
determining a first direction of the target point location relative to the first point location based on the first point location and the first connection segment;
and determining a first pixel point from the first color panorama based on the first direction, and determining the first depth information according to the depth information of the first pixel point.
Optionally, the determining second depth information of a second color panorama corresponding to a second point location of the at least two known point locations in the target point location direction includes:
in the three-dimensional model, with the second point position as a starting point and the target point position as an ending point, establishing a second connecting line segment from the second point position to the target point position;
determining a second direction of the target point location relative to the second point location based on the second point location and the second connecting line segment;
and determining a second pixel point from the second color panorama based on the second direction, and determining the second depth information according to the depth information of the second pixel point.
Optionally, the determining third depth information of the target point location relative to the first point location in the three-dimensional model and fourth depth information of the target point location relative to the second point location in the three-dimensional model includes:
determining the third depth information based on the position relation of the target point location and the first point location in the three-dimensional model;
and determining the fourth depth information based on the position relation of the target point position and the second point position in the three-dimensional model.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for rendering model point locations, including:
the panorama acquisition module is used for acquiring at least two color panoramas corresponding to at least two known point positions in the three-dimensional model;
a map depth determining module, configured to determine first depth information of a first color panorama corresponding to a first point of the at least two known points in a target point direction, and determine second depth information of a second color panorama corresponding to a second point of the at least two known points in the target point direction; wherein the first point location and the second point location are adjacent points of the at least two known point locations;
the model depth determining module is used for determining third depth information of the target point position relative to the first point position in the three-dimensional model and fourth depth information of the target point position relative to the second point position in the three-dimensional model;
and the target color determining module is used for determining the color information corresponding to the target point according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information.
Optionally, the target color determination module includes:
a first color determining unit, configured to determine a first difference between the first depth information and the third depth information, and determine, based on a relationship between the first difference and a first set threshold, color information corresponding to the target point location in a first direction of the target point location relative to the first point location and a second direction of the target point location relative to the second point location, or in the second direction; and/or
a second color determining unit, configured to determine a second difference between the second depth information and the fourth depth information, and determine color information corresponding to the target point in the first direction and the second direction or in the first direction based on a relationship between the second difference and a second set threshold.
Optionally, the first color determining unit is specifically configured to determine the first difference value; in response to the first difference being greater than the first set threshold, determining color information corresponding to the target point in the second direction; and in response to the first difference being smaller than or equal to the first set threshold, determining color information corresponding to the target point in the first direction and the second direction.
Optionally, the second color determining unit is specifically configured to determine the second difference value; in response to the second difference being greater than the second set threshold, determining color information corresponding to the target point in the first direction; and in response to the second difference being smaller than or equal to the second set threshold, determining color information corresponding to the target point in the first direction and the second direction.
Optionally, the first color determining unit or the second color determining unit is configured to obtain first color information from a first color panorama corresponding to the first point based on the first direction when determining the color information corresponding to the target point in the first direction and the second direction; obtaining second color information from a second color panorama corresponding to the second point location based on the second direction; based on the first color information and the second color information, obtaining color information corresponding to the target point by using a synthesis method;
the second color determining unit is configured to, when determining the color information corresponding to the target point location in the first direction, obtain first color information from a first color panorama corresponding to the first point location based on the first direction, and use the first color information as the color information corresponding to the target point location;
the first color determining unit is configured to, when determining the color information corresponding to the target point in the second direction, obtain second color information from a second color panorama corresponding to the second point based on the second direction, and use the second color information as the color information corresponding to the target point.
Optionally, the map depth determining module is specifically configured to establish a first connection segment from the first point location to the target point location in the three-dimensional model by using the first point location as a starting point and the target point location as an ending point; determining a first direction of the target point location relative to the first point location based on the first point location and the first connection segment; and determining a first pixel point from the first color panorama based on the first direction, and determining the first depth information according to the depth information of the first pixel point.
Optionally, the map depth determining module is further configured to establish a second connecting line segment from the second point location to the target point location in the three-dimensional model by using the second point location as a starting point and the target point location as an ending point; determining a second direction of the target point location relative to the second point location based on the second point location and the second connecting line segment; and determining a second pixel point from the second color panorama based on the second direction, and determining the second depth information according to the depth information of the second pixel point.
Optionally, the model depth determining module is specifically configured to determine the third depth information based on a position relationship between the target point location and the first point location in the three-dimensional model; and determining the fourth depth information based on the position relation of the target point position and the second point position in the three-dimensional model.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program, where the computer program is configured to execute the rendering method of model point locations according to any of the above embodiments.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instruction from the memory, and execute the instruction to implement the rendering method of the model point location according to any of the embodiments.
Based on the rendering method and device for model point locations, the storage medium, and the electronic device provided by the embodiments of the disclosure, at least two color panoramas corresponding to at least two known point locations in a three-dimensional model are obtained; first depth information of a first color panorama corresponding to a first point location of the at least two known point locations is determined in the direction of a target point location, and second depth information of a second color panorama corresponding to a second point location of the at least two known point locations is determined in the direction of the target point location, where the first point location and the second point location are adjacent points of the at least two known point locations; third depth information of the target point location relative to the first point location in the three-dimensional model and fourth depth information of the target point location relative to the second point location in the three-dimensional model are determined; and color information corresponding to the target point location is determined according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information. In this embodiment, the matching relationship between the depth information of the target point location in the color panoramas and its depth information in the model decides how the color information of the target point location is determined, which avoids the errors that arise when the color information is synthesized from a color panorama in which the target point location is occluded.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic flowchart of a rendering method of model point locations according to an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic diagram of known point locations in an alternative example of a rendering method of model point locations according to an exemplary embodiment of the disclosure.
Fig. 3 is a schematic flow chart of step 104 in the embodiment shown in fig. 1 of the present disclosure.
Fig. 4 is another flow chart illustrating step 104 in the embodiment shown in fig. 1 of the present disclosure.
Fig. 5 is a schematic diagram of depth information determination in an alternative example of a rendering method of model point locations according to an exemplary embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of a rendering apparatus for model point locations according to an exemplary embodiment of the present disclosure.
Fig. 7 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those skilled in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and imply neither any particular technical meaning nor a necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship. The data referred to in this disclosure may include unstructured data, such as text, images, and video, as well as structured data.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, and servers, which are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In the process of implementing the present disclosure, the inventors found that in the prior art, when rendering a three-dimensional space, the color of a transition point between acquisition points is determined by synthesizing the colors from the two acquisition points corresponding to the transition point. The prior art has at least the following problem: when the transition point is occluded in the panorama corresponding to one of the acquisition points, the transition point cannot be rendered correctly.
Exemplary method
Fig. 1 is a schematic flowchart of a rendering method of model point locations according to an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device; as shown in fig. 1, the method includes the following steps:
and 102, acquiring at least two color panoramas corresponding to at least two known point positions in the three-dimensional model.
The three-dimensional model may be a model acquired of any three-dimensional space. Optionally, methods for obtaining the three-dimensional model may include, but are not limited to: capturing images of the three-dimensional space with a depth camera capable of acquiring depth information, and constructing the three-dimensional model from the captured images and depths; or collecting at least two color panoramas from at least two point locations with a panoramic camera, performing depth prediction on the color panoramas with a pre-trained neural network model for predicting image depth to obtain the depth information of each pixel point in the color panoramas, obtaining a point cloud of the three-dimensional space from the color panoramas with depth information, and converting the point cloud into a mesh model to obtain the three-dimensional model. Optionally, the known point locations in this embodiment may be the positions of the image pickup apparatus that collected the color panoramas.
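As an illustration of the second acquisition method above, the sketch below back-projects an equirectangular color panorama with per-pixel depth into a world-space point cloud. The equirectangular projection, the y-up axis convention, and the function name are assumptions of this sketch; the disclosure does not prescribe a projection.

    import numpy as np

    def panorama_to_point_cloud(depth, camera_pos):
        # Back-project an equirectangular depth panorama into world-space
        # points. depth: (H, W) per-pixel distance from the capture point;
        # camera_pos: (3,) world position of the known point location.
        h, w = depth.shape
        camera_pos = np.asarray(camera_pos, dtype=float)
        # Pixel centers -> spherical angles: theta sweeps the full horizontal
        # circle, phi sweeps from the zenith (0) to the nadir (pi).
        theta = (np.arange(w) + 0.5) / w * 2.0 * np.pi
        phi = (np.arange(h) + 0.5) / h * np.pi
        theta, phi = np.meshgrid(theta, phi)
        # Unit view direction of every pixel (y axis points up).
        dirs = np.stack([np.sin(phi) * np.cos(theta),
                         np.cos(phi),
                         np.sin(phi) * np.sin(theta)], axis=-1)
        # Scale each direction by its depth and shift to the capture position.
        return camera_pos + dirs * depth[..., None]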
Step 104: determining first depth information of a first color panorama, corresponding to a first point location of the at least two known point locations, in the direction of a target point location, and determining second depth information of a second color panorama, corresponding to a second point location of the at least two known point locations, in the direction of the target point location.
The first point location and the second point location are adjacent points of the at least two known point locations. The target point location may be a point on the three-dimensional model corresponding to any position between the first point location and the second point location. For example, when the three-dimensional model is the house layout shown in fig. 2 (positional relationships are shown in top view only), the first point location is point A in the figure, the second point location is point B, and point C is a transition point from point A to point B; the target point location may be the point on the model corresponding to observation point C. By rendering the model point locations corresponding to a series of transition points between point A and point B, the colors observed on the three-dimensional model transition smoothly as the viewpoint moves from point A to point B.
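As a small illustration of such transition points, the sketch below samples intermediate viewpoints on the segment from A to B. Even linear spacing is an assumption; the disclosure does not fix how the transition points are distributed.

    import numpy as np

    def transition_points(a, b, steps):
        # Sample `steps` viewpoints on the segment from known point A to
        # known point B; each returned position is treated as a target
        # point location to be rendered.
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        t = np.linspace(0.0, 1.0, steps)[:, None]
        return a + t * (b - a)

    # Example: nine transition viewpoints between two known point locations.
    waypoints = transition_points([0.0, 1.6, 0.0], [3.0, 1.6, 2.0], steps=9)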
Step 106: determining third depth information of the target point location relative to the first point location in the three-dimensional model, and fourth depth information of the target point location relative to the second point location in the three-dimensional model.
In this embodiment, since the target point location is a point on the three-dimensional model, the third depth information may be determined based on a position relationship between the target point location and the first point location in the three-dimensional model, and the fourth depth information may be determined based on a position relationship between the target point location and the second point location in the three-dimensional model.
Step 108: determining color information corresponding to the target point location according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information.
In this embodiment, a plurality of schemes for determining color information of the target point location are provided, and different schemes are selected according to different situations, so that more accurate color information is determined for the target point location.
According to the rendering method for model point locations provided by the embodiments of the present disclosure, at least two color panoramas corresponding to at least two known point locations in a three-dimensional model are obtained; first depth information of a first color panorama corresponding to a first point location of the at least two known point locations is determined in the direction of a target point location, and second depth information of a second color panorama corresponding to a second point location is determined in the direction of the target point location, where the first point location and the second point location are adjacent points of the at least two known point locations; third depth information of the target point location relative to the first point location in the three-dimensional model and fourth depth information of the target point location relative to the second point location in the three-dimensional model are determined; and color information corresponding to the target point location is determined according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information. In this embodiment, the matching relationship between the depth information of the target point location in the color panoramas and its depth information in the model decides how the color information of the target point location is determined, which avoids the errors that arise when color information is synthesized from a color panorama in which the target point location is occluded.
In some optional embodiments, step 108 in the above embodiments may include:
determining a first difference between the first depth information and the third depth information, and determining color information corresponding to the target point location in a first direction of the target point location relative to the first point location and a second direction of the target point location relative to the second point location, or in the second direction, based on a relationship between the first difference and a first set threshold; and/or
and determining a second difference value between the second depth information and the fourth depth information, and determining color information corresponding to the target point in the first direction and the second direction or in the first direction based on a relation between the second difference value and a second set threshold value.
In this embodiment, to ensure the accuracy of the comparisons between the differences and the corresponding set thresholds, the absolute value of the first difference may be compared with the first set threshold, and the absolute value of the second difference with the second set threshold. Optionally, the two comparisons may be performed independently, or one after the other. For example, based on the relationship between the first difference and the first set threshold, the color information corresponding to the target point location is determined in both the first direction and the second direction (for example, when the absolute value of the first difference is less than or equal to the first set threshold) or in the second direction only (for example, when the absolute value of the first difference is greater than the first set threshold). Likewise, based on the relationship between the second difference and the second set threshold, the color information is determined in both directions (for example, when the absolute value of the second difference is less than or equal to the second set threshold) or in the first direction only (for example, when the absolute value of the second difference is greater than the second set threshold). As another example, the two comparisons may be combined: the color information is determined in the second direction when the absolute value of the first difference is greater than the first set threshold, or in the first direction when the absolute value of the second difference is greater than the second set threshold.
Optionally, determining a first difference between the first depth information and the third depth information, and determining color information corresponding to the target point location in the first direction and the second direction, or in the second direction, based on a relationship between the first difference and a first set threshold, includes:
determining a first difference value;
determining color information corresponding to the target point in a second direction in response to the first difference being greater than a first set threshold;
and determining color information corresponding to the target point position in the first direction and the second direction in response to the first difference value being smaller than or equal to the first set threshold value.
In this embodiment, the first difference may be the absolute value of the difference between the first depth information and the third depth information, and this absolute value is compared with the first set threshold. When the first difference is greater than the first set threshold, the depth information of the target point location in the model differs greatly from its depth information in the first color panorama, which indicates that the target point location may be occluded in the first color panorama; if the color information corresponding to the target point location were determined in both the first direction and the second direction, a ghosting artifact that should not appear could enter the color information of the target point location, causing rendering errors in the three-dimensional model. Therefore, in this embodiment, when the first difference is greater than the first set threshold, the color information of the target point location is determined in the second direction corresponding to the second point location. When the first difference is less than or equal to the first set threshold, the target point location is not occluded in the first color panorama, and the color information determined in both the first direction and the second direction is more accurate.
Optionally, determining a second difference between the second depth information and the fourth depth information, and determining color information corresponding to the target point in the first direction and the second direction, or in the first direction based on a relationship between the second difference and a second set threshold, includes:
determining a second difference;
in response to the second difference being greater than the second set threshold, determining the color information corresponding to the target point location in the first direction;
and in response to the second difference being less than or equal to the second set threshold, determining the color information corresponding to the target point location in the first direction and the second direction.
In this embodiment, the second difference may be the absolute value of the difference between the second depth information and the fourth depth information, and this absolute value is compared with the second set threshold. When the second difference is greater than the second set threshold, the depth information of the target point location in the model differs greatly from its depth information in the second color panorama, which indicates that the target point location may be occluded in the second color panorama; if the color information corresponding to the target point location were determined in both the first direction and the second direction, a ghosting artifact that should not appear could enter the color information of the target point location, causing rendering errors in the three-dimensional model. Therefore, in this embodiment, when the second difference is greater than the second set threshold, the color information of the target point location is determined in the first direction corresponding to the first point location. When the second difference is less than or equal to the second set threshold, the target point location is not occluded in the second color panorama, and the color information determined in both the first direction and the second direction is more accurate.
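Putting the two threshold tests together, the sketch below shows one way the occlusion-aware selection described above could look in code. The function and parameter names, the threshold values, and the fallback when both panoramas disagree with the model are assumptions; the disclosure itself only specifies the three outcomes (second direction only, first direction only, or both directions synthesized).

    def choose_color(pano_depth_1, model_depth_1, pano_depth_2, model_depth_2,
                     color_first, color_second, blend, t1=0.05, t2=0.05):
        # pano_depth_1/2: first/second depth information read from the
        # panoramas along the target direction; model_depth_1/2: third/fourth
        # depth information measured in the 3D model; blend: a synthesis
        # function such as alpha blending; t1/t2: the first/second set
        # thresholds (illustrative values, in the same units as the depths).
        occluded_in_first = abs(pano_depth_1 - model_depth_1) > t1
        occluded_in_second = abs(pano_depth_2 - model_depth_2) > t2
        if occluded_in_first and not occluded_in_second:
            return color_second   # target occluded in the first panorama
        if occluded_in_second and not occluded_in_first:
            return color_first    # target occluded in the second panorama
        # Both panoramas consistent with the model (or, as an assumed
        # fallback, both inconsistent): synthesize the two colors.
        return blend(color_first, color_second)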
The above technical features solve the technical problem of determining the color information of the target point location more accurately when the target point location is occluded in the model, and avoid the ghosting that appears in prior-art rendering results when the color information is always synthesized from both sources without checking the consistency of the two pieces of depth information.
Optionally, determining color information corresponding to the target point location in the first direction and the second direction includes:
obtaining first color information from a first color panorama corresponding to the first point location based on the first direction, and obtaining second color information from a second color panorama corresponding to the second point location based on the second direction.
Alternatively, after determining the direction, a corresponding location point may be determined in the first color panorama based on the first direction, with color information of the location point as the first color information, and correspondingly, a corresponding location point may be determined in the second color panorama based on the second direction, with color information of the location point as the second color information.
Then, based on the first color information and the second color information, the color information corresponding to the target point location is obtained by using a synthesis method.
In this embodiment, obtaining the color information corresponding to the target point location based on the first color information and the second color information may be implemented with an existing color synthesis method. For example, transparent synthesis (also called alpha synthesis or alpha blending) combines a semi-transparent foreground color with a background color to obtain a new mixed color; the transparency of the foreground color is not limited and may range from completely transparent to completely opaque. In this embodiment, the first color information and the second color information are used as the foreground color and the background color, both opaque, and the color information of the target point location is obtained through transparent synthesis.
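A minimal sketch of the alpha blending mentioned above follows. The blend factor is not fixed by the embodiment, so deriving it from the target point location's relative distance to the two known point locations (as in the usage line) is an assumption.

    def alpha_blend(foreground, background, alpha):
        # Classic alpha compositing, per channel: out = a*fg + (1-a)*bg.
        # foreground/background: RGB triples in [0, 1]; alpha in [0, 1].
        return tuple(alpha * f + (1.0 - alpha) * b
                     for f, b in zip(foreground, background))

    # Example: target point two thirds of the way toward the second known
    # point location, so the first panorama's color gets weight 1/3.
    color = alpha_blend((0.8, 0.2, 0.1), (0.2, 0.6, 0.9), alpha=1/3)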
Optionally, determining color information corresponding to the target point in the first direction includes:
obtaining first color information from a first color panorama corresponding to a first point based on a first direction, wherein the first color information is used as color information corresponding to the target point;
determining color information corresponding to the target point in a second direction, including:
and obtaining second color information from a second color panorama corresponding to the second point based on the second direction, wherein the second color information is used as color information corresponding to the target point.
As shown in fig. 3, based on the embodiment shown in fig. 1, the process of determining the first depth information in step 104 may include the following steps:
step 1041, in the three-dimensional model, a first connection line segment from the first point location to the target point location is established with the first point location as a start point and the target point location as an end point.
Step 1042, determining a first direction of the target point location with respect to the first point location based on the first point location and the first connection segment.
Step 1043, determining a first pixel point from the first color panorama based on the first direction, and determining first depth information according to the depth information of the first pixel point.
In this embodiment, the first direction of the target point location relative to the first point location is determined according to the positional relationship between the first point location and the target point location in the three-dimensional model; the first pixel point corresponding to the first direction is then found by searching along the first direction in the first color panorama. Each pixel point in the color panorama has corresponding depth information (and color information), and the depth information of this pixel point is used as the first depth information.
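The sketch below illustrates steps 1041 to 1043 under the assumption of an equirectangular projection (the disclosure does not name a projection), using the same axis convention as the earlier back-projection sketch; the second point location and second color panorama (steps 1044 to 1046 below) are handled symmetrically.

    import numpy as np

    def sample_panorama(pano, depth, known_pos, target_pos):
        # Return (color, depth) of the panorama pixel that lies along the
        # direction from the known point location toward the target point
        # location. pano: (H, W, 3) colors; depth: (H, W) per-pixel depth.
        d = np.asarray(target_pos, float) - np.asarray(known_pos, float)
        d = d / np.linalg.norm(d)                       # connecting segment's direction
        theta = np.arctan2(d[2], d[0]) % (2.0 * np.pi)  # horizontal angle
        phi = np.arccos(np.clip(d[1], -1.0, 1.0))       # angle from zenith
        h, w = depth.shape
        col = min(int(theta / (2.0 * np.pi) * w), w - 1)
        row = min(int(phi / np.pi * h), h - 1)
        return pano[row, col], depth[row, col]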
As shown in fig. 4, on the basis of the embodiment shown in fig. 1, the process of determining the second depth information in step 104 may include the following steps:
step 1044, establishing a second connecting line segment from the second point location to the target point location in the three-dimensional model by taking the second point location as a starting point and the target point location as an ending point.
Step 1045, determining a second direction of the target point location with respect to the second point location based on the second point location and the second connecting line segment.
Step 1046, determining a second pixel point from the second color panorama based on the second direction, and determining second depth information according to the depth information of the second pixel point.
In this embodiment, the second direction of the target point location relative to the second point location is determined according to the positional relationship between the second point location and the target point location in the three-dimensional model; the second pixel point corresponding to the second direction is then found by searching along the second direction in the second color panorama. Each pixel point in the color panorama has corresponding color information, and performing depth prediction on the color panorama with a pre-trained neural network model for predicting image depth yields the depth information of each pixel point; the depth information of this pixel point is used as the second depth information.
In some alternative embodiments, step 106 may include:
determining third depth information based on the position relation between the target point position and the first point position in the three-dimensional model;
and determining fourth depth information based on the position relation of the target point position and the second point position in the three-dimensional model.
In an alternative example, as shown in fig. 5 (positional relationships are shown in top view only), the third depth information between the target point location K and the first point location A in the three-dimensional model may be determined by calculating the distance between K and A in the model; for example, the line segment AK in the figure represents the third depth information, while the line segment AM represents the corresponding first depth information, because the target point location K is occluded when viewed from point A. Likewise, the fourth depth information between the target point location K and the second point location B in the three-dimensional model may be determined by calculating the distance between K and B. Determining the third and fourth depth information in this way makes the comparison against the depth information in the color panoramas efficient, which improves the efficiency of determining the color information of the target point location.
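In code, the third and fourth depth information thus reduce to straight-line distances in model space; a minimal sketch, with illustrative names:

    import numpy as np

    def model_depth(target_pos, known_pos):
        # Depth of the target point location relative to a known point
        # location: the Euclidean distance between the two positions in
        # the three-dimensional model (the segment AK or BK in fig. 5).
        return float(np.linalg.norm(np.asarray(target_pos, dtype=float)
                                    - np.asarray(known_pos, dtype=float)))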
The rendering method of model point locations provided by the embodiments of the present disclosure is applied to the color rendering of a three-dimensional model and solves the problem of color-rendering transitions from one known point location to another. Specifically, on the premise that the color panoramas corresponding to two adjacent known point locations are known, the color information of the model point locations corresponding to the continuous transition points between the two known point locations is determined based on the rendering method provided by this embodiment, and the three-dimensional model is rendered based on that color information, achieving a smooth transition from one known point location to the other.
Any rendering method of model point locations provided by the embodiments of the present disclosure may be executed by any suitable device with data processing capability, including but not limited to terminal equipment and servers. Alternatively, any rendering method of model point locations provided by the embodiments of the present disclosure may be executed by a processor; for example, the processor may execute any rendering method of model point locations mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. Details are not repeated below.
Exemplary devices
Fig. 6 is a schematic structural diagram of a rendering apparatus for model point locations according to an exemplary embodiment of the present disclosure. As shown in fig. 6, the apparatus provided in this embodiment includes:
and the panorama acquisition module 61 is configured to acquire at least two color panoramas corresponding to at least two known point locations in the three-dimensional model.
The map depth determining module 62 is configured to determine first depth information of a first color panorama corresponding to a first point of the at least two known points in the direction of the target point, and determine second depth information of a second color panorama corresponding to a second point of the at least two known points in the direction of the target point.
Wherein the first point location and the second point location are adjacent points of at least two known point locations.
And the model depth determining module 63 is configured to determine third depth information of the target point location in the three-dimensional model relative to the first point location, and fourth depth information of the target point location in the three-dimensional model relative to the second point location.
And a target color determining module 64, configured to determine color information corresponding to the target point according to a matching relationship between the first depth information and the third depth information, and/or a matching relationship between the second depth information and the fourth depth information.
The rendering device for model point locations provided by the above embodiments of the present disclosure obtains at least two color panoramas corresponding to at least two known point locations in a three-dimensional model; determines first depth information of a first color panorama corresponding to a first point location of the at least two known point locations in the direction of a target point location, and second depth information of a second color panorama corresponding to a second point location in the direction of the target point location, where the first point location and the second point location are adjacent points of the at least two known point locations; determines third depth information of the target point location relative to the first point location in the three-dimensional model and fourth depth information of the target point location relative to the second point location in the three-dimensional model; and determines color information corresponding to the target point location according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information. In this embodiment, the matching relationship between the depth information of the target point location in the color panoramas and its depth information in the model decides how the color information of the target point location is determined, which avoids the errors that arise when color information is synthesized from a color panorama in which the target point location is occluded.
Optionally, the target color determination module 64 includes:
a first color determining unit, configured to determine a first difference between the first depth information and the third depth information, and determine, based on a relationship between the first difference and a first set threshold, color information corresponding to the target point location in a first direction of the target point location relative to the first point location and a second direction of the target point location relative to the second point location, or in the second direction; and/or
and a second color determination unit configured to determine a second difference between the second depth information and the fourth depth information, and determine color information corresponding to the target point in the first direction and the second direction, or in the first direction, based on a relationship between the second difference and a second set threshold.
Optionally, the first color determining unit is specifically configured to determine a first difference value; determining color information corresponding to the target point in a second direction in response to the first difference being greater than a first set threshold; and determining color information corresponding to the target point position in the first direction and the second direction in response to the first difference value being smaller than or equal to the first set threshold value.
Optionally, the second color determining unit is specifically configured to determine a second difference value; in response to the second difference being larger than a second set threshold, determining color information corresponding to the target point in the first direction; and determining color information corresponding to the target point position in the first direction and the second direction in response to the second difference value being smaller than or equal to a second set threshold value.
Optionally, the first color determining unit or the second color determining unit is configured to, when determining the color information corresponding to the target point location in the first direction and the second direction, obtain the first color information from the first color panorama corresponding to the first point location based on the first direction; obtaining second color information from a second color panorama corresponding to a second point location based on a second direction; based on the first color information and the second color information, obtaining color information corresponding to the target point by using a synthesis method;
the second color determining unit is used for obtaining first color information from a first color panorama corresponding to a first point based on the first direction when determining the color information corresponding to the target point in the first direction, and taking the first color information as the color information corresponding to the target point;
the first color determining unit is configured to obtain second color information from a second color panorama corresponding to a second point based on a second direction when determining color information corresponding to the target point in the second direction, and use the second color information as the color information corresponding to the target point.
Optionally, the map depth determining module 62 is specifically configured to establish, in the three-dimensional model, a first connecting line segment from the first point location to the target point location, with the first point location as the starting point and the target point location as the end point; determine a first direction of the target point location relative to the first point location based on the first point location and the first connecting line segment; and determine a first pixel point from the first color panorama based on the first direction, and determine the first depth information from the depth information of the first pixel point.
Optionally, the map depth determining module 62 is further configured to establish, in the three-dimensional model, a second connecting line segment from the second point location to the target point location, with the second point location as the starting point and the target point location as the end point; determine a second direction of the target point location relative to the second point location based on the second point location and the second connecting line segment; and determine a second pixel point from the second color panorama based on the second direction, and determine the second depth information from the depth information of the second pixel point.
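The connecting-segment and pixel-lookup steps performed by module 62 can be illustrated as below, under the assumption that each panorama is an equirectangular image with a depth map of the same layout; all helper names are hypothetical.

    import numpy as np

    def direction_to(point, target):
        """Unit vector of the connecting line segment from a known point to the target."""
        v = np.asarray(target, dtype=np.float64) - np.asarray(point, dtype=np.float64)
        return v / np.linalg.norm(v)

    def direction_to_pixel(d, width, height):
        """Map a unit direction to the (row, col) it hits in an equirectangular image."""
        lon = np.arctan2(d[0], d[2])               # azimuth in [-pi, pi]
        lat = np.arcsin(np.clip(d[1], -1.0, 1.0))  # elevation in [-pi/2, pi/2]
        col = int((lon / (2.0 * np.pi) + 0.5) * (width - 1))
        row = int((0.5 - lat / np.pi) * (height - 1))
        return row, col

    def sample_color(pano, d):
        """Color of the pixel that direction d hits in a (H, W, 3) panorama."""
        row, col = direction_to_pixel(d, pano.shape[1], pano.shape[0])
        return pano[row, col]

    def sample_depth(depth_map, d):
        """First/second depth information: depth stored at the pixel hit by d."""
        row, col = direction_to_pixel(d, depth_map.shape[1], depth_map.shape[0])
        return float(depth_map[row, col])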
Optionally, the model depth determining module 63 is specifically configured to determine the third depth information based on the positional relationship between the target point location and the first point location in the three-dimensional model, and to determine the fourth depth information based on the positional relationship between the target point location and the second point location in the three-dimensional model.
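Reading module 63's "positional relationship" as the straight-line distance between the two points, the third and fourth depth information reduce to a Euclidean norm, and the pieces sketched above can be tied together as follows; again, every name here is an assumption for illustration and reuses the hypothetical helpers from the preceding sketches.

    import numpy as np

    def model_depth(point, target):
        """Third/fourth depth information: straight-line distance in the model."""
        return float(np.linalg.norm(
            np.asarray(target, dtype=np.float64) - np.asarray(point, dtype=np.float64)))

    def render_target_color(p1, p2, target, pano1, pano2, depth1, depth2, t1, t2):
        """End-to-end sketch: directions, depths, occlusion test, then color."""
        dir1, dir2 = direction_to(p1, target), direction_to(p2, target)
        d1, d2 = sample_depth(depth1, dir1), sample_depth(depth2, dir2)
        d3, d4 = model_depth(p1, target), model_depth(p2, target)
        use = choose_directions(d1, d2, d3, d4, t1, t2)
        return color_for_target(pano1, pano2, dir1, dir2, use)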
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 7. The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device separate from them that may communicate with the first device and the second device to receive the collected input signals therefrom.
FIG. 7 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 7, the electronic device 70 includes one or more processors 71 and a memory 72.
The processor 71 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 70 to perform desired functions.
Memory 72 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 71 to implement the model point location rendering methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, and a noise component may also be stored in the computer-readable storage medium.
In one example, the electronic device 70 may further include: an input device 73 and an output device 74, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input device 73 may be a microphone or a microphone array, as described above, for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input device 73 may be a communication network connector for receiving the acquired input signals from the first device 100 and the second device 200.
The input device 73 may also include, for example, a keyboard, a mouse, and the like.
The output device 74 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 74 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 70 relevant to the present disclosure are shown in fig. 7, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 70 may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of rendering model points according to various embodiments of the present disclosure described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method of rendering model points according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments. However, it should be noted that the advantages, effects, and the like mentioned in the present disclosure are merely examples and not limitations, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the specific details disclosed above are provided for the purposes of illustration and description only and are not intended to be limiting, since the disclosure is not limited to those specific details.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another. Since the system embodiments substantially correspond to the method embodiments, their description is relatively brief; for relevant details, reference may be made to the corresponding parts of the method embodiment descriptions.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A rendering method of model point locations is characterized by comprising the following steps:
acquiring at least two color panoramas corresponding to at least two known point positions in the three-dimensional model;
determining first depth information of a first color panorama corresponding to a first point location in the at least two known point locations in a direction of a target point location, and determining second depth information of a second color panorama corresponding to a second point location in the at least two known point locations in the direction of the target point location; wherein the first point location and the second point location are adjacent points of the at least two known point locations;
determining third depth information of the target point position relative to the first point position in the three-dimensional model and fourth depth information of the target point position relative to the second point position in the three-dimensional model;
and determining color information corresponding to the target point according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information.
2. The method according to claim 1, wherein the determining the color information corresponding to the target point according to the matching relationship between the first depth information and the third depth information and/or the matching relationship between the second depth information and the fourth depth information comprises:
determining a first difference between the first depth information and the third depth information, and determining color information corresponding to the target point in a first direction of the target point relative to the first point and a second direction of the target point relative to the second point, or in the second direction, based on a relationship between the first difference and a first set threshold; and/or,
determining a second difference between the second depth information and the fourth depth information, and determining color information corresponding to the target point in the first direction and the second direction or in the first direction based on a relationship between the second difference and a second set threshold.
3. The method according to claim 2, wherein the determining a first difference between the first depth information and the third depth information, and based on a relationship between the first difference and a first set threshold, determining color information corresponding to the target point in a first direction of the target point with respect to the first point location and a second direction of the target point with respect to the second point location, or in the second direction, comprises:
determining the first difference;
in response to the first difference being greater than the first set threshold, determining color information corresponding to the target point in the second direction;
and in response to the first difference being less than or equal to the first set threshold, determining color information corresponding to the target point in the first direction and the second direction.
4. The method according to claim 2 or 3, wherein the determining a second difference between the second depth information and the fourth depth information, and the determining the color information corresponding to the target point in the first direction and the second direction or in the first direction based on a relationship between the second difference and a second set threshold comprises:
determining the second difference;
in response to the second difference being greater than the second set threshold, determining color information corresponding to the target point in the first direction;
and in response to the second difference being less than or equal to the second set threshold, determining color information corresponding to the target point in the first direction and the second direction.
5. The method according to claim 3 or 4, wherein the determining the color information corresponding to the target point in the first direction and the second direction comprises:
obtaining first color information from a first color panorama corresponding to the first point based on the first direction;
obtaining second color information from a second color panorama corresponding to the second point location based on the second direction;
based on the first color information and the second color information, obtaining color information corresponding to the target point by using a synthesis method;
the determining the color information corresponding to the target point in the first direction includes:
obtaining first color information from a first color panorama corresponding to the first point based on the first direction, wherein the first color information is used as color information corresponding to the target point;
the determining the color information corresponding to the target point in the second direction includes:
and obtaining second color information from a second color panorama corresponding to the second point location based on the second direction, wherein the second color information is used as color information corresponding to the target point location.
6. The method according to any one of claims 1 to 5, wherein the determining the first depth information of the first color panorama corresponding to the first point of the at least two known points in the direction of the target point comprises:
in the three-dimensional model, establishing a first connecting line segment from the first point location to the target point location by taking the first point location as a starting point and the target point location as an ending point;
determining a first direction of the target point location relative to the first point location based on the first point location and the first connecting line segment;
and determining a first pixel point from the first color panorama based on the first direction, and determining the first depth information according to the depth information of the first pixel point.
7. The method according to any one of claims 1 to 6, wherein the determining second depth information of the second color panorama corresponding to the second point location of the at least two known point locations in the direction of the target point location comprises:
in the three-dimensional model, with the second point position as a starting point and the target point position as an ending point, establishing a second connecting line segment from the second point position to the target point position;
determining a second direction of the target point location relative to the second point location based on the second point location and the second connecting line segment;
and determining a second pixel point from the second color panorama based on the second direction, and determining the second depth information according to the depth information of the second pixel point.
8. The method according to any one of claims 1 to 7, wherein the determining third depth information of the target point location relative to the first point location in the three-dimensional model and fourth depth information of the target point location relative to the second point location in the three-dimensional model comprises:
determining the third depth information based on the position relation of the target point location and the first point location in the three-dimensional model;
and determining the fourth depth information based on the position relation of the target point position and the second point position in the three-dimensional model.
9. A computer-readable storage medium, wherein the storage medium stores a computer program for executing the method for rendering model point locations according to any one of claims 1 to 8.
10. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method for rendering model point locations according to any one of claims 1 to 8.
CN202110873568.4A 2021-07-30 2021-07-30 Model point location rendering method and device, storage medium and electronic equipment Pending CN113570694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110873568.4A CN113570694A (en) 2021-07-30 2021-07-30 Model point location rendering method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113570694A true CN113570694A (en) 2021-10-29

Family

ID=78169513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110873568.4A Pending CN113570694A (en) 2021-07-30 2021-07-30 Model point location rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113570694A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050128196A1 (en) * 2003-10-08 2005-06-16 Popescu Voicu S. System and method for three dimensional modeling
CN112200902A (en) * 2020-09-30 2021-01-08 北京达佳互联信息技术有限公司 Image rendering method and device, electronic equipment and storage medium
CN113096254A (en) * 2021-04-23 2021-07-09 北京百度网讯科技有限公司 Object rendering method and device, computer equipment and medium
CN113112581A (en) * 2021-05-13 2021-07-13 广东三维家信息科技有限公司 Texture map generation method, device and equipment for three-dimensional model and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Baitao et al.: "Research and implementation of a panorama-based virtual reality campus exhibition ***", Software, vol. 38, no. 4, 31 December 2017 (2017-12-31) *

Similar Documents

Publication Publication Date Title
CN111429354B (en) Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment
CN112509047B (en) Pose determining method and device based on image, storage medium and electronic equipment
CN111008985B (en) Panorama picture seam detection method and device, readable storage medium and electronic equipment
CN112489114B (en) Image conversion method, image conversion device, computer readable storage medium and electronic equipment
CN111402404B (en) Panorama complementing method and device, computer readable storage medium and electronic equipment
CN112037279B (en) Article position identification method and device, storage medium and electronic equipment
CN114757301A (en) Vehicle-mounted visual perception method and device, readable storage medium and electronic equipment
CN111612842A (en) Method and device for generating pose estimation model
CN114047823A (en) Three-dimensional model display method, computer-readable storage medium and electronic device
CN113572978A (en) Panoramic video generation method and device
CN113129211B (en) Optical center alignment detection method and device, storage medium and electronic equipment
CN114882465A (en) Visual perception method and device, storage medium and electronic equipment
CN111369557A (en) Image processing method, image processing device, computing equipment and storage medium
CN113689508A (en) Point cloud marking method and device, storage medium and electronic equipment
CN114139630A (en) Gesture recognition method and device, storage medium and electronic equipment
CN111179328A (en) Data synchronization calibration method and device, readable storage medium and electronic equipment
CN113450258B (en) Visual angle conversion method and device, storage medium and electronic equipment
CN113438463B (en) Method and device for simulating orthogonal camera image, storage medium and electronic equipment
CN113570694A (en) Model point location rendering method and device, storage medium and electronic equipment
CN117237532A (en) Panorama display method and device for points outside model, equipment and medium
CN113111692B (en) Target detection method, target detection device, computer readable storage medium and electronic equipment
CN113379895B (en) Three-dimensional house model generation method and device and computer readable storage medium
CN113194279B (en) Recording method of network conference, computer readable storage medium and electronic device
CN112465716A (en) Image conversion method and device, computer readable storage medium and electronic equipment
CN113762173A (en) Training method and device for human face light stream estimation and light stream value prediction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211223

Address after: Unit 05, room 112, 1st floor, office building, Nangang Industrial Zone, economic and Technological Development Zone, Binhai New Area, Tianjin 300457

Applicant after: BEIKE TECHNOLOGY Co.,Ltd.

Address before: 101300 room 24, 62 Farm Road, Erjie village, Yangzhen Town, Shunyi District, Beijing

Applicant before: Beijing fangjianghu Technology Co.,Ltd.