CN111832255A - Label processing method, electronic equipment and related product


Info

Publication number
CN111832255A
CN111832255A (application number CN202010603778.7A)
Authority
CN
China
Prior art keywords
target
information
viewpoint
interface
iris image
Prior art date
Legal status
Granted
Application number
CN202010603778.7A
Other languages
Chinese (zh)
Other versions
CN111832255B (en)
Inventor
李晨楠
Current Assignee
Shenzhen Wanyi Digital Technology Co ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd
Priority to CN202010603778.7A
Publication of CN111832255A
Application granted
Publication of CN111832255B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/103 Formatting, i.e. changing of presentation of documents
    • G06F40/117 Tagging; Marking up; Designating a block; Setting of attributes
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/12 Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Architecture (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a label processing method, an electronic device, and related products, applied to the electronic device. The method includes the following steps: starting a target interface, where the target interface is a display interface corresponding to a target BIM model; acquiring newly created target problem information, where the target problem information includes a target viewpoint and problem description information; and generating a target viewpoint map on the target interface based on the target problem information. With the method and the device, the display interface can be started quickly and the corresponding viewpoint map can be generated from the information of the new problem, improving the annotation efficiency and operational friendliness of creating new problems.

Description

Label processing method, electronic equipment and related product
Technical Field
The application relates to the technical field of BIM, in particular to a label processing method, electronic equipment and related products.
Background
Building Information Modeling (BIM) is a relatively new tool in architecture, engineering, and civil engineering. The term was coined by Autodesk to describe computer-aided design centered on three-dimensional graphics, object orientation, and building engineering, and the concept was brought to the wider public by Jerry Laiserin based on technology provided by Autodesk, Bentley Systems, and Graphisoft. BIM is widely applied in the field of building engineering, but creating new problems (issues) on a drawing is not efficient enough, and the operation is not user-friendly.
Disclosure of Invention
The embodiments of the present application provide a label processing method, an electronic device, and related products, which enable new problems to be created on a drawing efficiently and improve operational friendliness.
In a first aspect, an embodiment of the present application provides an annotation processing method, which is applied to an electronic device, and the method includes:
starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
acquiring newly created target problem information, wherein the target problem information comprises a target viewpoint and problem description information;
and generating a target viewpoint diagram on the target interface based on the target problem information.
In a second aspect, an embodiment of the present application provides an annotation processing apparatus, which is applied to an electronic device, and the apparatus includes: a starting unit, an obtaining unit and a generating unit, wherein,
the starting unit is used for starting a target interface, and the target interface is a display interface corresponding to the target BIM model;
the acquisition unit is used for acquiring newly created target problem information, and the target problem information comprises a target viewpoint and problem description information;
and the generating unit is used for generating a target viewpoint diagram on the target interface based on the target problem information.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that, with the annotation processing method, electronic device, and related products described in the embodiments of the present application, applied to an electronic device, a target interface is started, where the target interface is a display interface corresponding to a target BIM model; newly created target problem information is obtained, where the target problem information includes a target viewpoint and problem description information; and a target viewpoint map is generated on the target interface based on the target problem information. In this way, the display interface can be started quickly and the corresponding viewpoint map generated from the information of the new problem, improving the annotation efficiency and operational friendliness of new problems.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of an annotation processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another annotation processing method provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4A is a block diagram illustrating functional units of an annotation processing apparatus according to an embodiment of the present application;
fig. 4B is a block diagram of functional units of an annotation processing apparatus according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The electronic device described in the embodiments of the present application may include a smart phone (e.g., an Android phone, an iOS phone, a Windows phone, etc.), a tablet computer, a palm computer, a notebook computer, a video matrix, a monitoring platform, a mobile Internet device (MID), or a wearable device. These are merely examples rather than an exhaustive list; the electronic device includes, but is not limited to, the foregoing devices.
The following describes embodiments of the present application in detail.
Fig. 1 is a schematic flow chart of a label processing method provided in an embodiment of the present application, and as shown in the figure, the label processing method includes:
101. Starting a target interface, where the target interface is a display interface corresponding to the target BIM model.
In this embodiment of the application, the target BIM model may be a BIM model of a building project, and may specifically be applied to a CAD scene or other drawing-tool scenes, which is not limited herein. The BIM model may be created manually with CAD software, or generated from a scanned building drawing; the electronic device may import the CAD drawing into Building Information Modeling (BIM) software. In an embodiment of the present application, the building project may be at least one of the following: an airport, a train station, a bus station, an office building, a residential building, a hospital, a museum, a tourist attraction, a church, a school, a park, and the like, which is not limited herein. The target interface may be any interface of the target BIM model.
In a specific implementation, the electronic device may start a target interface, where the target interface is a display interface corresponding to the target BIM model, and may be any display interface of the BIM model.
In one possible example, the step 101 of starting the target interface may include the following steps:
11. acquiring a target iris image of a target user;
12. matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
13. and when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
The electronic device may pre-store a preset iris image, where the preset iris image is an iris image of a registered user, and the login information may be at least one of the following: a user name, a password, a login interface, and the like, which is not limited herein. In a specific implementation, the electronic device can acquire a target iris image of a target user and match the target iris image against the preset iris image. When the match succeeds, the login information of the registered user is acquired and the target interface is started according to the login information; otherwise, the user is required to perform iris verification again. On the one hand, this starts the target interface only after identity verification, ensuring system security; on the other hand, the login information can be acquired quickly, so the target interface can be started quickly.
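For illustration, the flow of steps 11-13 can be sketched as follows in Python; the similarity function and the shape of the login information are assumptions for the sketch, not details taken from this application:

import numpy as np

def match_iris(captured: np.ndarray, preset: np.ndarray) -> float:
    # Placeholder similarity score: normalized correlation of the two images
    # (assumes both share the same size). The application's actual matcher
    # (steps 121-135 below) fuses contour and feature-point matching; this
    # simple stand-in keeps the sketch runnable.
    a = captured.astype(float).ravel()
    b = preset.astype(float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def login_with_iris(target_iris, preset_iris, login_info, threshold=0.8):
    # Steps 11-13: match the captured iris against the registered user's preset
    # iris; on success return the stored login information (user name, password,
    # login interface, ...) used to start the target interface.
    if match_iris(target_iris, preset_iris) > threshold:
        return login_info
    return None  # match failed: the user must perform iris verification again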
In a possible example, before the step 11, the following steps may be further included:
a1, acquiring an input target character string;
a2, comparing the target character string with a preset character string;
a3, when the comparison between the target character string and the preset character string fails, executing the step of obtaining the target iris image of the target user.
The preset character string can be stored in the electronic device in advance. In a specific implementation, the electronic device can acquire an input target character string and compare it with the preset character string; when the comparison fails, the step of acquiring the target iris image of the target user is executed, and otherwise, when the comparison succeeds, the target interface is started directly.
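A short sketch of the fallback of steps A1-A3, reusing login_with_iris from the previous sketch; capture_iris stands in for whatever camera call the device actually uses:

def authenticate(input_string, preset_string, capture_iris, preset_iris, login_info):
    # Steps A1-A3: compare the entered character string with the preset one;
    # only when that comparison fails fall back to iris verification (step 11).
    if input_string == preset_string:
        return login_info  # comparison succeeded: start the target interface
    return login_with_iris(capture_iris(), preset_iris, login_info)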
Further, in a possible example, the step 12 of matching the target iris image with a preset iris image may include the following steps:
121. extracting the outline of the target iris image to obtain an outline image;
122. extracting characteristic points of the contour image to obtain a characteristic point distribution map;
123. dividing the characteristic point distribution map into a plurality of regions, wherein the area of each region is larger than a preset threshold value;
124. determining the distribution density of the characteristic points of each area in the plurality of areas to obtain a plurality of distribution densities of the characteristic points;
125. selecting a maximum value from the distribution densities of the plurality of characteristic points, and acquiring an iris image of a target area corresponding to the maximum value from the target iris image;
126. acquiring a target image quality evaluation value corresponding to the iris image of the target area, and acquiring the target area of the iris image of the target area;
127. determining a threshold adjustment parameter and a target weight value pair corresponding to the target image quality evaluation value, wherein the target weight value pair comprises a first weight and a second weight, the first weight is a weight corresponding to contour matching, and the second weight is a weight corresponding to feature point matching;
128. acquiring the target iris area of the target iris image;
129. adjusting a preset iris recognition threshold according to the threshold adjusting parameter, the target area and the target iris area to obtain a target iris recognition threshold;
130. acquiring a first contour set and a first characteristic point set of the iris image of the target area;
131. acquiring a second contour set and a second feature point set corresponding to the preset iris template;
132. matching the first contour set with the second contour set to obtain a first matching value;
133. matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
134. performing weighting operation according to the first matching value, the second matching value, the first weight and the second weight to obtain a target matching value;
135. and when the target matching value is larger than the target iris recognition threshold value, determining that the target iris image is successfully matched with the preset iris image.
In a specific implementation, the electronic device may perform contour extraction on the target iris image to obtain a contour image, where the contour extraction algorithm may be at least one of the following: the Hough transform, the Canny operator, the Sobel operator, the Prewitt operator, and the like, which is not limited herein. Feature-point extraction can then be performed on the contour image to obtain a feature-point distribution map, where the feature-point extraction algorithm may be at least one of the following: Harris corner detection, the Scale-Invariant Feature Transform (SIFT), the Laplace transform, the wavelet transform, the contourlet transform, the shearlet transform, and the like, which is not limited herein. The electronic device may divide the feature-point distribution map into a plurality of regions, the area of each region being larger than a preset threshold, where the preset threshold may be set by the user or defaulted by the system.
Furthermore, the electronic device may determine the feature-point distribution density of each of the plurality of regions to obtain a plurality of feature-point distribution densities, where the feature-point distribution density of a region is the total number of feature points in that region divided by its area. The electronic device may select the maximum value among these densities and crop the corresponding target-region iris image out of the target iris image. The selected region therefore has many feature points and a clear contour, which helps improve the accuracy of subsequent iris recognition.
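A hedged sketch of steps 121-125, choosing the Canny operator and Harris corner detection from the algorithms listed above (a single-channel grayscale image is assumed):

import cv2
import numpy as np

def target_region(iris_img: np.ndarray, grid=(4, 4)) -> np.ndarray:
    # Step 121: contour extraction (Canny, one of the listed operators).
    edges = cv2.Canny(iris_img, 100, 200)
    # Step 122: feature-point extraction (Harris corners, one of the listed algorithms).
    response = cv2.cornerHarris(np.float32(edges), 2, 3, 0.04)
    points = response > 0.01 * response.max()
    # Steps 123-125: split the feature-point map into regions, compute the
    # feature-point density of each, and crop the densest region from the image.
    h, w = points.shape
    gh, gw = h // grid[0], w // grid[1]
    best_ij, best_density = (0, 0), -1.0
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = points[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            density = cell.sum() / cell.size  # feature points per unit area
            if density > best_density:
                best_density, best_ij = density, (i, j)
    i, j = best_ij
    return iris_img[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]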
Further, the electronic device may obtain the target image quality evaluation value corresponding to the target-region iris image and the target region area of that image. In a specific implementation, the electronic device may evaluate the image quality of the target-region iris image with at least one image quality evaluation index to obtain at least one image quality evaluation value, and perform a weighting operation on these values to obtain the target image quality evaluation value, where the image quality evaluation index may be at least one of the following: image quality evaluation value, edge preservation, contrast, information entropy, average gradient, and the like, which is not limited herein.
The electronic device may pre-store a mapping relationship between image quality evaluation values and adjustment parameters, where the value of the adjustment parameter lies between -1 and 1 (for example, -0.15 to 0.15); the threshold adjustment parameter corresponding to the target image quality evaluation value can then be determined from this mapping relationship. The electronic device may also pre-store a mapping relationship between image quality evaluation values and weight pairs, from which the target weight pair is determined; the target weight pair includes a first weight and a second weight, where the first weight corresponds to contour matching, the second weight corresponds to feature point matching, and the sum of the first weight and the second weight is less than or equal to 1. In this way, dynamic adjustment of the recognition threshold can be realized, improving subsequent iris recognition efficiency.
Further, the electronic device may obtain the target iris area of the target iris image, and adjust the preset iris recognition threshold according to the threshold adjustment parameter, the target region area, and the target iris area to obtain the target iris recognition threshold, where the preset iris recognition threshold may be set by the user or defaulted by the system. The target iris recognition threshold is calculated as follows:
target iris recognition threshold = (1 + threshold adjustment parameter) × preset iris recognition threshold × (target region area / target iris area)
Furthermore, the electronic device may obtain a first contour set and a first feature point set of the iris image of the target area, obtain a second contour set and a second feature point set corresponding to the preset iris template, match the first contour set with the second contour set to obtain a first matching value, match the first feature point set with the second feature point set to obtain a second matching value, and perform weighting operation according to the first matching value, the second matching value, the first weight, and the second weight to obtain a target matching value, which is specifically as follows:
the target matching value is the first matching value + the first weight value + the second matching value + the second weight value
And finally, when the target matching value is larger than the target iris recognition threshold value, determining that the target iris image is successfully matched with the preset iris image, otherwise, determining that the target iris image is unsuccessfully matched with the preset iris image.
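The two reconstructed formulas reduce to a few lines; the following sketch assumes the contour and feature-point matchers return normalized scores:

def adjusted_threshold(preset_threshold, adjust_param, region_area, iris_area):
    # Step 129: target threshold = (1 + adjustment parameter) x preset threshold
    #           x (target region area / target iris area)
    return (1 + adjust_param) * preset_threshold * (region_area / iris_area)

def iris_matched(contour_match, feature_match, w_contour, w_feature,
                 preset_threshold, adjust_param, region_area, iris_area):
    # Steps 132-135: fuse the contour and feature-point match values with the
    # weight pair, then compare against the dynamically adjusted threshold.
    target_match = contour_match * w_contour + feature_match * w_feature
    return target_match > adjusted_threshold(preset_threshold, adjust_param,
                                             region_area, iris_area)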
Further, in step 126, acquiring the target image quality evaluation value corresponding to the iris image of the target region may include the following steps:
1261. carrying out multi-scale feature decomposition on the iris image of the target area to obtain low-frequency feature components and high-frequency feature components;
1262. dividing the low-frequency feature components into a plurality of regions;
1263. determining an information entropy corresponding to each of the plurality of regions to obtain a plurality of information entropies;
1264. determining an average information entropy and a target mean square error according to the plurality of information entropies;
1265. determining a target adjusting coefficient corresponding to the target mean square error;
1266. adjusting the average information entropy according to the target adjustment coefficient to obtain a target information entropy;
1267. determining a first evaluation value corresponding to the target information entropy according to a mapping relation between a preset information entropy and the evaluation value;
1268. acquiring target shooting parameters corresponding to the target iris;
1269. determining a target low-frequency weight corresponding to the target information entropy according to a mapping relation between preset shooting parameters and the low-frequency weight, and determining a target high-frequency weight according to the target low-frequency weight;
1270. determining the distribution density of the target characteristic points according to the high-frequency characteristic components;
1271. determining a second evaluation value corresponding to the target feature point distribution density according to a preset mapping relation between the feature point distribution density and the evaluation value;
1272. and performing weighting operation according to the first evaluation value, the second evaluation value, the target low-frequency weight and the target high-frequency weight to obtain a target image quality evaluation value of the target iris.
In a specific implementation, the electronic device may perform multi-scale feature decomposition on the target iris using a multi-scale decomposition algorithm to obtain a low-frequency feature component and a high-frequency feature component, where the multi-scale decomposition algorithm may be at least one of the following: pyramid transform algorithms, the wavelet transform, the contourlet transform, the shearlet transform, and the like, which is not limited herein. Further, the low-frequency feature component may be divided into a plurality of regions, and the areas of the regions may be the same or different. The low-frequency feature component reflects the main features of the image, while the high-frequency feature component reflects its detail information.
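For illustration, step 1261 can be sketched under the assumption that the wavelet transform is the chosen decomposition, using the PyWavelets library:

import numpy as np
import pywt

def multiscale_decompose(region_img: np.ndarray):
    # One-level 2-D wavelet transform. The approximation coefficients act as
    # the low-frequency component (main image features); the combined detail
    # coefficients act as the high-frequency component (image details).
    low, (lh, hl, hh) = pywt.dwt2(region_img.astype(float), "haar")
    high = np.abs(lh) + np.abs(hl) + np.abs(hh)
    return low, high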
Furthermore, the electronic device can determine the information entropy corresponding to each of the plurality of regions to obtain a plurality of information entropies, and determine the average information entropy and the target mean square error from them; the information entropy reflects, to an extent, the amount of image information, while the mean square error reflects its stability. The electronic device may pre-store a mapping relationship between mean square errors and adjustment coefficients, and determine from it the target adjustment coefficient corresponding to the target mean square error; in this embodiment, the value of the adjustment coefficient may range from -0.15 to 0.15.
Further, the electronic device may adjust the average information entropy according to the target adjustment coefficient to obtain the target information entropy, where target information entropy = (1 + target adjustment coefficient) × average information entropy. The electronic device may pre-store a mapping relationship between preset information entropies and evaluation values, from which the first evaluation value corresponding to the target information entropy can be determined.
In addition, the electronic device may acquire the target shooting parameters corresponding to the target iris, where the target shooting parameters may be at least one of the following: ISO, exposure duration, white balance parameters, focus parameters, and the like, which is not limited herein. The electronic device may further pre-store a mapping relationship between preset shooting parameters and low-frequency weights, determine from it the target low-frequency weight corresponding to the target shooting parameters, and determine the target high-frequency weight from the target low-frequency weight, where target low-frequency weight + target high-frequency weight = 1.
Further, the electronic device may determine the target feature point distribution density from the high-frequency feature component, where the target feature point distribution density is the total number of feature points divided by the area of the high-frequency feature component. The electronic device may further pre-store a mapping relationship between feature point distribution densities and evaluation values, determine from it the second evaluation value corresponding to the target feature point distribution density, and finally perform a weighting operation on the first evaluation value, the second evaluation value, the target low-frequency weight, and the target high-frequency weight to obtain the target image quality evaluation value of the target iris, specifically as follows:
target image quality evaluation value = first evaluation value × target low-frequency weight + second evaluation value × target high-frequency weight
therefore, image quality evaluation can be performed based on two dimensions of the low-frequency component and the high-frequency component of the iris, and evaluation parameters suitable for a shooting environment, namely a target image quality evaluation value, can be accurately obtained.
102. Acquiring newly created target problem information, where the target problem information includes a target viewpoint and problem description information.
The target problem information may be a problem that the user needs to annotate in the BIM model; the problem description information can be understood as the problem content; and the target viewpoint may include at least one of the following parameters: coordinate location, presentation form, display-frame shape, and the like, which is not limited herein.
103. Generating a target viewpoint map on the target interface based on the target problem information.
The target viewpoint map can be a two-dimensional or three-dimensional image, and it is used to label the target problem information, so that the problem is annotated on the interface.
For example, to address the inefficiency and unfriendly operability of creating new problems on a drawing, the embodiments of the present application allow a user to create a problem directly on the interface, where the newly created problem includes the viewpoint corresponding to the problem and the corresponding text description. In addition, the problem and viewpoint can be generated within a range that the user selects directly on the drawing, or the viewpoint can be generated automatically from the coordinates or grid-axis number that the user specifies for the problem. This helps the user create problems on the drawing flexibly and quickly according to his or her own needs, improving the efficiency of creating problem annotations and the user experience.
In one possible example, the step 103 of generating a target viewpoint map on the target interface based on the target problem information may include the following steps:
31. determining a target display area corresponding to the problem description information, wherein the target display area is a partial display area of the target interface;
32. and generating the target viewpoint image corresponding to the problem description information in the target display area based on the target viewpoint.
In a specific implementation, the problem description information may determine the scale of the display frame. The electronic device may determine a target display area corresponding to the problem description information, where the target display area is a partial display area of the target interface; it may be specified by the user, or determined by the position of the component to which the problem description information refers. The electronic device may then generate, in the target display area, the target viewpoint map corresponding to the problem description information based on the target viewpoint.
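One way to picture the data involved is the following sketch; the Viewpoint and ProblemInfo structures and the viewer methods are illustrative assumptions, not the application's actual interfaces:

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Viewpoint:
    position: Tuple[float, float]  # coordinate location on the drawing
    form: str                      # presentation form, e.g. "2d" or "3d"
    frame: str                     # display-frame shape

@dataclass
class ProblemInfo:
    viewpoint: Viewpoint
    description: str               # problem description information

def generate_viewpoint_map(interface, problem: ProblemInfo):
    # Steps 31-32: derive the display area from the description (its scale
    # fixes the display frame), then render the viewpoint map in that area.
    # interface.area_for and interface.draw are hypothetical methods standing
    # in for the BIM viewer's actual API.
    area = interface.area_for(problem.description)
    return interface.draw(problem.viewpoint, problem.description, area)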
Further optionally, when the newly created target problem information is problem information for the first component, after the step 103, the method may further include the following steps:
b1, acquiring first attribute information of the first component;
b2, determining the component attribute information in the BIM, which is the same as the first attribute information, and determining a target component corresponding to the component attribute information;
b3, generating a reference viewpoint map corresponding to the target component according to the component attribute information and the target viewpoint map.
In this embodiment of the application, the first attribute information may be at least one of the following: type, location, dimension, duration, budget, material, function, purpose, and the like, which is not limited herein. In a specific implementation, when the newly created target problem information is problem information for a first component, the electronic device may obtain the first attribute information of the first component, determine the component attribute information in the BIM model that is the same as the first attribute information, and determine the target component corresponding to that attribute information, which amounts to searching for components similar to the first component; a reference viewpoint map corresponding to the target component is then generated according to the component attribute information and the target viewpoint map.
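A sketch of steps B1-B3 with components modeled as attribute dictionaries; the relocation method on the viewpoint map is an assumption:

def find_similar_components(model_components, first_attrs):
    # Steps B1-B2: target components are those whose attribute information
    # (type, location, dimension, material, ...) equals the first component's.
    return [c for c in model_components
            if all(c.get(k) == v for k, v in first_attrs.items())]

def reference_viewpoint_maps(model_components, first_attrs, target_viewpoint_map):
    # Step B3: reuse the target viewpoint map for every similar component,
    # relocating it to each component's own position (relocated_to is a
    # hypothetical method on the viewpoint-map object).
    return [target_viewpoint_map.relocated_to(c["position"])
            for c in find_similar_components(model_components, first_attrs)]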
It can be seen that, with the annotation processing method described in this embodiment of the present application, applied to an electronic device, a target interface is started, where the target interface is a display interface corresponding to a target BIM model; newly created target problem information is obtained, where the target problem information includes a target viewpoint and problem description information; and a target viewpoint map is generated on the target interface based on the target problem information.
Referring to Fig. 2, Fig. 2 is a schematic flow chart of another label processing method provided in an embodiment of the present application. As shown in the figure, the label processing method is applied to an electronic device and includes:
201. and starting a target interface, wherein the target interface is a display interface corresponding to the target BIM model.
202. And acquiring newly-built target problem information, wherein the target problem information comprises a target viewpoint and problem description information, and the newly-built target problem information is problem information aiming at the first component.
203. And generating a target viewpoint diagram on the target interface based on the target problem information.
204. First attribute information of the first member is acquired.
205. And determining component attribute information which is the same as the first attribute information in the BIM, and determining a target component corresponding to the component attribute information.
206. And generating a reference viewpoint diagram corresponding to the target member according to the member attribute information and the target viewpoint diagram.
The detailed description of the steps 201 to 206 may refer to the corresponding steps of the label processing method described in the foregoing fig. 1, and will not be described herein again.
It can be seen that the annotation processing method described in this embodiment of the present application, applied to an electronic device, starts a target interface, where the target interface is a display interface corresponding to a target BIM model; obtains newly created target problem information, where the target problem information includes a target viewpoint and problem description information, and generates a target viewpoint map on the target interface based on the target problem information; obtains first attribute information of a first component, determines the component attribute information in the BIM model that is the same as the first attribute information, determines the target component corresponding to that attribute information, and generates a reference viewpoint map corresponding to the target component according to the component attribute information and the target viewpoint map. In this way, on the one hand, the display interface can be started quickly and a corresponding viewpoint map generated from the information of the new problem, improving the annotation efficiency and operational friendliness of new problems; on the other hand, after one component is annotated, components similar to it can be found and annotated correspondingly by analogy, improving annotation efficiency.
In accordance with the foregoing embodiments, please refer to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in the drawing, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and in an embodiment of the present application, the programs include instructions for performing the following steps:
starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
acquiring newly created target problem information, wherein the target problem information comprises a target viewpoint and problem description information;
and generating a target viewpoint diagram on the target interface based on the target problem information.
It can be seen that, with the electronic device described in this embodiment of the present application, a target interface is started, where the target interface is a display interface corresponding to a target BIM model; newly created target problem information is obtained, where the target problem information includes a target viewpoint and problem description information; and a target viewpoint map is generated on the target interface based on the target problem information.
In one possible example, in the generating a target viewpoint map at the target interface based on the target issue information, the program includes instructions for:
determining a target display area corresponding to the problem description information, wherein the target display area is a partial display area of the target interface;
and generating the target viewpoint image corresponding to the problem description information in the target display area based on the target viewpoint.
In one possible example, when the newly created target problem information is problem information for the first component, the program further includes instructions for performing the steps of:
acquiring first attribute information of the first component;
determining component attribute information which is the same as the first attribute information in the BIM model, and determining a target component corresponding to the component attribute information;
and generating a reference viewpoint diagram corresponding to the target member according to the member attribute information and the target viewpoint diagram.
In one possible example, in connection with the launch target interface, the program includes instructions for performing the steps of:
acquiring a target iris image of a target user;
matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
and when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
In one possible example, the program further includes instructions for performing the steps of:
acquiring an input target character string;
comparing the target character string with a preset character string;
and when the comparison between the target character string and the preset character string fails, executing the step of acquiring the target iris image of the target user.
In one possible example, in said matching said target iris image with a preset iris image, the above program comprises instructions for performing the following steps:
extracting the outline of the target iris image to obtain an outline image;
extracting characteristic points of the contour image to obtain a characteristic point distribution map;
dividing the characteristic point distribution map into a plurality of regions, wherein the area of each region is larger than a preset threshold value;
determining the distribution density of the characteristic points of each area in the plurality of areas to obtain a plurality of distribution densities of the characteristic points;
selecting a maximum value from the distribution densities of the plurality of characteristic points, and acquiring an iris image of a target area corresponding to the maximum value from the target iris image;
acquiring a target image quality evaluation value corresponding to the target area iris image and a target area of the target area iris image;
determining a threshold adjustment parameter and a target weight value pair corresponding to the target image quality evaluation value, wherein the target weight value pair comprises a first weight and a second weight, the first weight is a weight corresponding to contour matching, and the second weight is a weight corresponding to feature point matching;
acquiring the target iris area of the target iris image;
adjusting a preset iris recognition threshold according to the threshold adjusting parameter, the target area and the target iris area to obtain a target iris recognition threshold;
acquiring a first contour set and a first characteristic point set of the iris image of the target area;
acquiring a second contour set and a second feature point set corresponding to the preset iris template;
matching the first contour set with the second contour set to obtain a first matching value;
matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
performing weighting operation according to the first matching value, the second matching value, the first weight and the second weight to obtain a target matching value;
and when the target matching value is larger than the target iris recognition threshold value, determining that the target iris image is successfully matched with the preset iris image.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments provided herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 4A is a block diagram of functional units of the annotation processing apparatus 400 according to the embodiment of the present application. The label processing apparatus 400 is applied to an electronic device, and the apparatus 400 includes: an activation unit 401, an acquisition unit 402, and a generation unit 403, wherein,
the starting unit 401 is configured to start a target interface, where the target interface is a display interface corresponding to the target BIM model;
the obtaining unit 402 is configured to obtain newly-created target question information, where the target question information includes a target viewpoint and question description information;
the generating unit 403 is configured to generate a target viewpoint map in the target interface based on the target problem information.
It can be seen that, with the annotation processing apparatus described in this embodiment of the present application, applied to an electronic device, a target interface is started, where the target interface is a display interface corresponding to a target BIM model; newly created target problem information is obtained, where the target problem information includes a target viewpoint and problem description information; and a target viewpoint map is generated on the target interface based on the target problem information. In this way, the display interface can be started quickly and the corresponding viewpoint map generated from the information of the new problem, improving annotation efficiency and operational friendliness.
In one possible example, in the aspect of generating the target viewpoint map on the target interface based on the target problem information, the generating unit 403 is specifically configured to:
determining a target display area corresponding to the problem description information, wherein the target display area is a partial display area of the target interface;
and generating the target viewpoint image corresponding to the problem description information in the target display area based on the target viewpoint.
In one possible example, when the newly created target problem information is problem information for the first component, as shown in Fig. 4B, Fig. 4B is a modified structure of the annotation processing apparatus 400 shown in Fig. 4A; compared with Fig. 4A, the apparatus may further include: a determination unit 404, wherein,
the obtaining unit 402 is further configured to obtain first attribute information of the first component;
the determining unit 404 is configured to determine component attribute information in the BIM model, which is the same as the first attribute information, and determine a target component corresponding to the component attribute information;
the generating unit 403 is further configured to generate a reference viewpoint map corresponding to the target component according to the component attribute information and the target viewpoint map.
In a possible example, in terms of the launch target interface, the launch unit 401 is specifically configured to:
acquiring a target iris image of a target user;
matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
and when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
In one possible example, the apparatus is further specifically configured to:
acquiring an input target character string;
comparing the target character string with a preset character string;
and when the comparison between the target character string and the preset character string fails, executing the step of acquiring the target iris image of the target user.
In one possible example, in the aspect of matching the target iris image with a preset iris image, the starting unit 401 is specifically configured to:
extracting the outline of the target iris image to obtain an outline image;
extracting characteristic points of the contour image to obtain a characteristic point distribution map;
dividing the characteristic point distribution map into a plurality of regions, wherein the area of each region is larger than a preset threshold value;
determining the distribution density of the characteristic points of each area in the plurality of areas to obtain a plurality of distribution densities of the characteristic points;
selecting a maximum value from the distribution densities of the plurality of characteristic points, and acquiring an iris image of a target area corresponding to the maximum value from the target iris image;
acquiring a target image quality evaluation value corresponding to the target area iris image and a target area of the target area iris image;
determining a threshold adjustment parameter and a target weight value pair corresponding to the target image quality evaluation value, wherein the target weight value pair comprises a first weight and a second weight, the first weight is a weight corresponding to contour matching, and the second weight is a weight corresponding to feature point matching;
acquiring the target iris area of the target iris image;
adjusting a preset iris recognition threshold according to the threshold adjusting parameter, the target area and the target iris area to obtain a target iris recognition threshold;
acquiring a first contour set and a first characteristic point set of the iris image of the target area;
acquiring a second contour set and a second feature point set corresponding to the preset iris template;
matching the first contour set with the second contour set to obtain a first matching value;
matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
performing weighting operation according to the first matching value, the second matching value, the first weight and the second weight to obtain a target matching value;
and when the target matching value is larger than the target iris recognition threshold value, determining that the target iris image is successfully matched with the preset iris image.
It can be understood that the functions of each program module of the annotation processing apparatus in this embodiment can be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process thereof can refer to the related description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a memory, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned method of the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An annotation processing method applied to an electronic device, the method comprising:
starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
acquiring newly created target problem information, wherein the target problem information comprises a target viewpoint and problem description information;
and generating a target viewpoint diagram on the target interface based on the target problem information.
2. The method of claim 1, wherein the generating a target viewpoint diagram on the target interface based on the target problem information comprises:
determining a target display area corresponding to the problem description information, wherein the target display area is a partial display area of the target interface;
and generating, in the target display area, the target viewpoint diagram corresponding to the problem description information based on the target viewpoint.
3. The method according to claim 1 or 2, wherein when the newly created target problem information is problem information for a first component, the method further comprises:
acquiring first attribute information of the first component;
determining, in the target BIM model, component attribute information that is the same as the first attribute information, and determining a target component corresponding to the component attribute information;
and generating a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram.
4. The method of any one of claims 1-3, wherein the starting a target interface comprises:
acquiring a target iris image of a target user;
matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
and when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
5. The method of claim 4, further comprising:
acquiring an input target character string;
comparing the target character string with a preset character string;
and when the comparison between the target character string and the preset character string fails, executing the step of acquiring the target iris image of the target user.
6. An annotation processing apparatus, applied to an electronic device, the apparatus comprising: a starting unit, an acquiring unit, and a generating unit, wherein,
the starting unit is used for starting a target interface, and the target interface is a display interface corresponding to the target BIM model;
the acquiring unit is used for acquiring newly created target problem information, and the target problem information comprises a target viewpoint and problem description information;
and the generating unit is used for generating a target viewpoint diagram on the target interface based on the target problem information.
7. The apparatus according to claim 6, wherein, in generating a target viewpoint diagram on the target interface based on the target problem information, the generating unit is specifically configured to:
determining a target display area corresponding to the problem description information, wherein the target display area is a partial display area of the target interface;
and generating, in the target display area, the target viewpoint diagram corresponding to the problem description information based on the target viewpoint.
8. The apparatus according to claim 6 or 7, wherein when the newly created target problem information is problem information for a first component, the apparatus further comprises a determining unit, wherein,
the acquiring unit is further used for acquiring first attribute information of the first component;
the determining unit is used for determining, in the target BIM model, component attribute information that is the same as the first attribute information, and for determining a target component corresponding to the component attribute information;
the generating unit is further used for generating a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram.
9. An electronic device, comprising a processor and a memory, wherein the memory is configured to store one or more programs to be executed by the processor, and the one or more programs comprise instructions for performing the steps of the method of any one of claims 1-5.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
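As a concrete illustration of claims 1 and 2, the following minimal Python sketch walks through the claimed flow: a target interface for a BIM model is started, newly created problem information (a target viewpoint plus problem description information) is received, and a viewpoint diagram is generated in a partial display area of the interface. All class and method names here (TargetInterface, ProblemInfo, display_area_for, and so on) are hypothetical stand-ins; the claims do not prescribe any concrete API.

from dataclasses import dataclass

@dataclass
class Viewpoint:
    # Camera pose in model coordinates; the claims leave the exact encoding open.
    position: tuple
    target: tuple

@dataclass
class ProblemInfo:
    viewpoint: Viewpoint   # the "target viewpoint" of claim 1
    description: str       # the "problem description information" of claim 1

@dataclass
class ViewpointDiagram:
    region: tuple          # (x, y, width, height) inside the interface
    caption: str

class TargetInterface:
    """Hypothetical stand-in for the display interface of the target BIM model."""

    def __init__(self, width, height):
        self.width, self.height = width, height

    def display_area_for(self, description):
        # Claim 2, step 1: determine a *partial* display area of the interface
        # from the problem description; a fixed quarter panel is used here as a
        # placeholder, since the claims do not specify the mapping.
        return (0, 0, self.width // 2, self.height // 2)

    def generate_viewpoint_diagram(self, info):
        # Claim 2, step 2: generate, in that area, the viewpoint diagram
        # corresponding to the description, based on the target viewpoint.
        area = self.display_area_for(info.description)
        return ViewpointDiagram(region=area, caption=info.description)

if __name__ == "__main__":
    ui = TargetInterface(1920, 1080)   # claim 1: start the target interface
    info = ProblemInfo(Viewpoint((0.0, 0.0, 10.0), (0.0, 0.0, 0.0)),
                       "Beam B-12 clashes with a duct")
    print(ui.generate_viewpoint_diagram(info))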
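Claim 3 extends the method so that a problem raised on one component is propagated to every component with identical attribute information, for which a reference viewpoint diagram is then generated. Below is a sketch of that matching step, again with hypothetical names (Component, find_matching_components) and under the simplifying assumption that "the same component attribute information" means dictionary equality:

from dataclasses import dataclass

@dataclass
class Component:
    component_id: str
    attributes: dict   # e.g. {"family": "wall", "material": "C30"}

def find_matching_components(model_components, first_attributes):
    # Claim 3: find, in the target BIM model, every component whose attribute
    # information is the same as that of the first component; each match is a
    # target component for which a reference viewpoint diagram is generated.
    return [c for c in model_components if c.attributes == first_attributes]

if __name__ == "__main__":
    model = [
        Component("wall-01", {"family": "wall", "material": "C30"}),
        Component("wall-02", {"family": "wall", "material": "C30"}),
        Component("door-01", {"family": "door", "material": "steel"}),
    ]
    # The problem was raised against wall-01; wall-02 shares its attributes.
    targets = find_matching_components(model, {"family": "wall", "material": "C30"})
    print([c.component_id for c in targets])   # ['wall-01', 'wall-02']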
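Claims 4 and 5 together describe a two-stage login: the input character string (for example, a password) is compared with a preset string first, and only when that comparison fails is the target iris image acquired and matched against the registered user's preset iris image. The following is a minimal sketch of that control flow; the capture and matching callbacks stand in for device-specific functionality the claims do not specify.

import hmac

def login(entered_string, preset_string, capture_iris, match_iris, registered_user):
    # Claim 5: compare the input target string with the preset string
    # (hmac.compare_digest performs a timing-safe comparison).
    if hmac.compare_digest(entered_string, preset_string):
        return registered_user["login_info"]
    # The comparison failed, so claim 5 falls through to claim 4:
    # acquire the target iris image of the target user ...
    target_iris = capture_iris()
    # ... and match it against the registered user's preset iris image.
    if match_iris(target_iris, registered_user["preset_iris"]):
        # On a successful match, obtain the login information; the caller
        # then starts the target interface according to it.
        return registered_user["login_info"]
    return None

if __name__ == "__main__":
    user = {"login_info": {"user": "alice"}, "preset_iris": b"\x01\x02\x03"}
    login_info = login(
        entered_string="wrong-pass",
        preset_string="secret",
        capture_iris=lambda: b"\x01\x02\x03",   # stub for the iris camera
        match_iris=lambda a, b: a == b,         # stub for the iris matcher
        registered_user=user,
    )
    print("start target interface for:", login_info)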
CN202010603778.7A 2020-06-29 2020-06-29 Labeling processing method, electronic equipment and related products Active CN111832255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010603778.7A CN111832255B (en) 2020-06-29 2020-06-29 Labeling processing method, electronic equipment and related products

Publications (2)

Publication Number Publication Date
CN111832255A 2020-10-27
CN111832255B 2024-05-14

Family

ID=72898273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010603778.7A Active CN111832255B (en) 2020-06-29 2020-06-29 Labeling processing method, electronic equipment and related products

Country Status (1)

Country Link
CN (1) CN111832255B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104065817A (en) * 2014-06-16 2014-09-24 惠州Tcl移动通信有限公司 Mobile terminal identity authentication processing method and system based on iris identification
CN104243500A (en) * 2014-10-13 2014-12-24 步步高教育电子有限公司 Intelligent login method and system for users
CN104679846A (en) * 2015-02-11 2015-06-03 广州拓欧信息技术有限公司 Method and system for describing building information modeling by utilizing XML (eXtensible Markup Language) formatted data
CN107368996A (en) * 2017-06-09 2017-11-21 上海嘉实(集团)有限公司 Method/system for processing/supervising problems of a live project, storage medium, and terminal
CN109558047A (en) * 2018-09-20 2019-04-02 中建科技有限公司深圳分公司 Property repair reporting method, apparatus and terminal device based on a lightweight BIM model
CN109726647A (en) * 2018-12-14 2019-05-07 广州文远知行科技有限公司 Point cloud annotation method, device, computer equipment and storage medium
CN110704904A (en) * 2019-09-12 2020-01-17 国网上海市电力公司 Multi-software collaborative transformer substation three-dimensional planning method
CN110807216A (en) * 2019-09-26 2020-02-18 杭州鲁尔物联科技有限公司 Image-based bridge BIM model crack visualization creation method
CN111026644A (en) * 2019-11-20 2020-04-17 东软集团股份有限公司 Operation result labeling method and device, storage medium and electronic equipment
CN111090903A (en) * 2019-12-16 2020-05-01 万翼科技有限公司 BIM-based component statistical method and related device

Also Published As

Publication number Publication date
CN111832255B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
US12014471B2 (en) Generation of synthetic 3-dimensional object images for recognition systems
CN108898186B (en) Method and device for extracting image
US11238272B2 (en) Method and apparatus for detecting face image
US8059917B2 (en) 3-D modeling
US20170278308A1 (en) Image modification and enhancement using 3-dimensional object model based recognition
CN113420719B (en) Method and device for generating motion capture data, electronic equipment and storage medium
KR102476016B1 (en) Apparatus and method for determining position of eyes
CN105912912A (en) Method and system for user to log in terminal by virtue of identity information
CN105096353A (en) Image processing method and device
CN108597034B (en) Method and apparatus for generating information
CN110619334A (en) Portrait segmentation method based on deep learning, architecture and related device
CN111783561A (en) Picture examination result correction method, electronic equipment and related products
CN111783910A (en) Building project management method, electronic equipment and related products
CN110765893B (en) Drawing file identification method, electronic equipment and related product
US20160110909A1 (en) Method and apparatus for creating texture map and method of creating database
CN109711287B (en) Face acquisition method and related product
CN112102145B (en) Image processing method and device
CN111832255B (en) Labeling processing method, electronic equipment and related products
KR102221152B1 (en) Apparatus for providing a display effect based on posture of object, method thereof and computer readable medium having computer program recorded therefor
CN109598201B (en) Action detection method and device, electronic equipment and readable storage medium
CN105631938B (en) Image processing method and electronic equipment
CN113781491A (en) Training of image segmentation model, image segmentation method and device
CN112015319A (en) Screenshot processing method and device and storage medium
CN111222448A (en) Image conversion method and related product
CN109816746B (en) Sketch image generation method and related products

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20230703
Address after: A601, Zhongke Naneng Building, No. 06 Yuexing 6th Road, Gaoxin District Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518051
Applicant after: Shenzhen Wanyi Digital Technology Co.,Ltd.
Address before: 519000 room 105-24914, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province (centralized office area)
Applicant before: WANYI TECHNOLOGY Co.,Ltd.
GR01 Patent grant