CN111832255B - Labeling processing method, electronic equipment and related products - Google Patents

Info

Publication number
CN111832255B
CN111832255B
Authority
CN
China
Prior art keywords
target
information
component
interface
viewpoint
Prior art date
Legal status
Active
Application number
CN202010603778.7A
Other languages
Chinese (zh)
Other versions
CN111832255A
Inventor
李晨楠
Current Assignee
Shenzhen Wanyi Digital Technology Co., Ltd.
Original Assignee
Shenzhen Wanyi Digital Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Wanyi Digital Technology Co., Ltd.
Priority to CN202010603778.7A
Publication of CN111832255A
Application granted
Publication of CN111832255B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/103 Formatting, i.e. changing of presentation of documents
    • G06F 40/117 Tagging; Marking up; Designating a block; Setting of attributes
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/12 Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Architecture (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a labeling processing method, an electronic device, and related products, applied to the electronic device. The method comprises the following steps: starting a target interface, where the target interface is the display interface corresponding to a target BIM model; acquiring newly created target problem information, where the target problem information comprises a target viewpoint and problem description information; and generating a target viewpoint diagram on the target interface based on the target problem information. With the embodiments of the present application, the display interface can be started quickly and the corresponding viewpoint diagram generated from the information of the new problem, which improves the labeling efficiency for new problems and makes the operation friendlier.

Description

Labeling processing method, electronic equipment and related products
Technical Field
The application relates to the technical field of BIM, in particular to a labeling processing method, electronic equipment and related products.
Background
Building Information Modeling (BIM) is a relatively new tool in architecture, engineering, and construction. The term "building information model" was popularized by Autodesk. BIM is a form of computer-aided design that is primarily three-dimensional, object-oriented, and architecture-specific. Jerry Laiserin originally helped bring the technology offered by Autodesk, Bentley Systems, and Graphisoft to the broader public. BIM is now widely used in the field of building engineering, but creating new problem annotations on drawings remains inefficient and unfriendly to operate.
Disclosure of Invention
The embodiments of the present application provide a labeling processing method, an electronic device, and related products, which make it possible to create new problems on drawings efficiently and improve operational friendliness.
In a first aspect, an embodiment of the present application provides a labeling processing method, applied to an electronic device, where the method includes:
Starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
acquiring newly created target problem information, wherein the target problem information comprises a target viewpoint and problem description information;
and generating a target viewpoint diagram on the target interface based on the target problem information.
In a second aspect, an embodiment of the present application provides an annotation processing apparatus, applied to an electronic device, where the apparatus includes: a starting unit, an obtaining unit and a generating unit, wherein,
The starting unit is used for starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
The acquisition unit is used for acquiring newly created target problem information, wherein the target problem information comprises a target viewpoint and problem description information;
and the generating unit is used for generating a target viewpoint diagram on the target interface based on the target problem information.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that the labeling processing method, electronic device, and related products described in the embodiments of the present application, applied to an electronic device, start a target interface that is the display interface corresponding to a target BIM model, acquire newly created target problem information comprising a target viewpoint and problem description information, and generate a target viewpoint diagram on the target interface based on that information. The display interface can thus be started quickly and the corresponding viewpoint diagram generated from the information of the new problem, improving the labeling efficiency for new problems and the friendliness of the operation.
Drawings
To illustrate the embodiments of the application or the technical solutions in the prior art more clearly, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a labeling processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of another labeling method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4A is a functional block diagram of a labeling processing device according to an embodiment of the present application;
Fig. 4B is a functional unit composition block diagram of a labeling processing apparatus according to an embodiment of the present application.
Detailed Description
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The electronic device described in the embodiments of the present application may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone device), a tablet computer, a palmtop computer, a notebook computer, a video matrix, a monitoring platform, a mobile internet device (MID), a wearable device, and so on. These are merely examples rather than an exhaustive list. The electronic device may also be a server, for example a cloud server.
Embodiments of the present application are described in detail below.
FIG. 1 is a schematic flow chart of a labeling processing method according to an embodiment of the present application; as shown in the figure, the labeling processing method includes:
101. and starting a target interface, wherein the target interface is a display interface corresponding to the target BIM model.
In the embodiments of the present application, the target BIM model may be a BIM model of a building project. It may be applied to a CAD scene or to other drawing-tool scenes, which is not limited here. The BIM model may be built manually in CAD software, or generated from scanned building drawings; the electronic device may import a CAD building drawing into Building Information Modeling (BIM) software. The building project may be at least one of the following: an airport, train station, bus stop, office building, residential building, hospital, museum, tourist attraction, church, school, park, and so on, without limitation. The target interface may be any interface of the target BIM model.
In a specific implementation, the electronic device may start a target interface, where the target interface is a display interface corresponding to the target BIM model, and may be any display interface of the BIM model.
In one possible example, the step 101, starting the target interface may include the following steps:
11. acquiring a target iris image of a target user;
12. Matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
13. And when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
The electronic device may store a preset iris image in advance, where the preset iris image is the iris image of a registered user. The login information may be at least one of the following: user name, password, login interface, and so on, without limitation. In a specific implementation, the electronic device acquires the target iris image of the target user and matches it against the preset iris image. When the match succeeds, the login information of the registered user is obtained and the target interface is started with it; otherwise, the user is asked to repeat the iris verification. On the one hand, the target interface is only started after identity verification, which keeps the system secure; on the other hand, the login information can be obtained quickly, so the target interface starts quickly.
In a possible example, before the step 11, the method may further include the following steps:
a1, acquiring an input target character string;
A2, comparing the target character string with a preset character string;
A3, executing the step of acquiring the target iris image of the target user when the comparison of the target character string and the preset character string fails.
The preset character string may be stored in the electronic device in advance. In a specific implementation, the electronic device acquires the input target character string and compares it with the preset character string. When the comparison fails, the step of acquiring the target iris image of the target user is executed; otherwise, when the comparison succeeds, the target interface is started directly.
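The fallback logic of steps A1 to A3 can be sketched as follows. This is a minimal, hypothetical illustration: the function names, the dictionary-based login information, and the callback parameters are all assumptions, not part of the patented method.

```python
# Hypothetical sketch of the two-stage login flow (steps A1-A3): a character
# string (e.g. a password) is checked first, and iris verification (steps
# 11-13) is only invoked as a fallback when the string comparison fails.

def start_target_interface(login_info):
    """Placeholder: open the BIM model's display interface for a verified user."""
    return f"interface started for {login_info['user']}"

def login(input_string, preset_string, capture_iris, match_iris, login_info):
    # A2: compare the entered string with the stored preset string
    if input_string == preset_string:
        return start_target_interface(login_info)
    # A3: on failure, fall back to iris verification
    iris_image = capture_iris()
    if match_iris(iris_image):
        return start_target_interface(login_info)
    return None  # verification failed; the user is asked to retry

# Example: wrong password, but the iris matches, so the interface still starts.
result = login("wrong", "secret", lambda: "iris-img", lambda img: True,
               {"user": "alice"})
```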
Further, in one possible example, the step 12 of matching the target iris image with a preset iris image may include the following steps:
121. extracting the outline of the target iris image to obtain an outline image;
122. extracting characteristic points of the contour image to obtain a characteristic point distribution map;
123. dividing the characteristic point distribution map into a plurality of areas, wherein the area of each area is larger than a preset threshold value;
124. Determining the distribution density of the characteristic points of each of the plurality of areas to obtain the distribution density of the plurality of characteristic points;
125. selecting a maximum value from the distribution densities of the plurality of characteristic points, and acquiring a target area iris image corresponding to the maximum value from the target iris image;
126. Acquiring a target image quality evaluation value corresponding to the target area iris image, and acquiring a target area of the target area iris image;
127. determining a threshold adjustment parameter corresponding to the target image quality evaluation value and a target weight pair, wherein the target weight pair comprises a first weight and a second weight, the first weight is a weight corresponding to contour matching, and the second weight is a weight corresponding to feature point matching;
128. Acquiring a target iris area of the target iris image;
129. Adjusting a preset iris recognition threshold according to the threshold adjustment parameter, the target area and the target iris area to obtain a target iris recognition threshold;
130. acquiring a first contour set and a first feature point set of the iris image of the target area;
131. acquiring a second contour set and a second feature point set corresponding to the preset iris template;
132. Matching the first profile set with the second profile set to obtain a first matching value;
133. Matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
134. Performing weighting operation according to the first matching value, the second matching value, the first weight and the second weight to obtain a target matching value;
135. And when the target matching value is larger than the target iris recognition threshold value, determining that the target iris image is successfully matched with the preset iris image.
In a specific implementation, the electronic device may perform contour extraction on the target iris image to obtain a contour image. The contour extraction algorithm may be at least one of the following: Hough transform, Canny operator, Sobel operator, Prewitt operator, and so on, without limitation. Feature points may then be extracted from the contour image to obtain a feature point distribution map. The feature point extraction algorithm may be at least one of the following: Harris corner detection, scale-invariant feature transform (SIFT), Laplace transform, wavelet transform, contourlet transform, shearlet transform, and so on, without limitation. The electronic device may divide the feature point distribution map into a plurality of regions, where the area of each region is greater than a preset threshold, which may be set by the user or defaulted by the system.
Further, the electronic device may determine the feature point distribution density of each of the plurality of regions, obtaining a plurality of distribution densities, where feature point distribution density = total number of feature points in a region / area of that region. The electronic device may then select the maximum among these densities and obtain, from the target iris image, the target-area iris image corresponding to that maximum. The selected region thus has more feature points and a clearer outline, which helps improve the subsequent iris recognition accuracy.
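The region-selection rule of steps 123 to 125 reduces to a single maximization. The sketch below assumes a simplified region representation (a count of feature points and an area per region); the actual data structures are not specified in the description.

```python
# Illustrative sketch of steps 123-125: divide the feature point distribution
# map into regions and keep the region with the highest feature point density,
# where density = number of feature points in the region / area of the region.

def densest_region(regions):
    """regions: list of dicts with 'points' (feature point count) and 'area'."""
    return max(regions, key=lambda r: r["points"] / r["area"])

regions = [
    {"name": "A", "points": 40, "area": 100.0},  # density 0.40
    {"name": "B", "points": 90, "area": 150.0},  # density 0.60, the maximum
    {"name": "C", "points": 20, "area": 80.0},   # density 0.25
]
best = densest_region(regions)  # the iris sub-image cropped here feeds matching
```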
Further, the electronic device may obtain a target image quality evaluation value corresponding to the iris image of the target area and a target area of the iris image of the target area, and in a specific implementation, the electronic device may use at least one image quality evaluation index to perform image quality evaluation on the iris image of the target area to obtain at least one image quality evaluation value, and perform a weighting operation according to the at least one image quality evaluation value to obtain the target image quality evaluation value, where the image quality evaluation index may be at least one of the following: image quality evaluation value, edge retention, contrast, information entropy, average gradient, and the like, are not limited herein.
A mapping relation between image quality evaluation values and adjustment parameters may be stored in the electronic device in advance; the adjustment parameter ranges between -1 and 1, for example between -0.15 and 0.15. The threshold adjustment parameter corresponding to the target image quality evaluation value can then be determined from this mapping. A mapping relation between image quality evaluation values and weight pairs may likewise be stored in advance, and the target weight pair determined from it, where the target weight pair comprises a first weight corresponding to contour matching and a second weight corresponding to feature point matching, and the sum of the two weights is less than or equal to 1. In this way the recognition threshold can be adjusted dynamically, which further helps improve the subsequent iris recognition efficiency.
Further, the electronic device may acquire a target iris area of the target iris image, and adjust a preset iris recognition threshold according to the threshold adjustment parameter, the target area and the target iris area to obtain a target iris recognition threshold, where the preset iris recognition threshold may be set by a user or default by a system, and a specific calculation formula of the target iris recognition threshold is as follows:
target iris recognition threshold = (1 + threshold adjustment parameter) × preset iris recognition threshold × (target area / target iris area)
Furthermore, the electronic device may acquire a first contour set and a first feature point set of the iris image of the target area, acquire a second contour set and a second feature point set corresponding to the preset iris template, match the first contour set with the second contour set to obtain a first matching value, match the first feature point set with the second feature point set to obtain a second matching value, and perform a weighting operation according to the first matching value, the second matching value, the first weight and the second weight to obtain a target matching value, which is specifically as follows:
target matching value = first matching value × first weight + second matching value × second weight
And finally, when the target matching value is larger than the target iris recognition threshold, determining that the target iris image is successfully matched with the preset iris image, otherwise, determining that the target iris image is failed to be matched with the preset iris image.
Further, the step 126 of obtaining the target image quality evaluation value corresponding to the iris image of the target area may include the following steps:
1261. Performing multi-scale feature decomposition on the iris image of the target area to obtain a low-frequency feature component and a high-frequency feature component;
1262. dividing the low-frequency characteristic component into a plurality of regions;
1263. determining the information entropy corresponding to each of the plurality of areas to obtain a plurality of information entropy;
1264. Determining average information entropy and target mean square error according to the plurality of information entropy;
1265. Determining a target adjustment coefficient corresponding to the target mean square error;
1266. adjusting the average information entropy according to the target adjustment coefficient to obtain a target information entropy;
1267. determining a first evaluation value corresponding to the target information entropy according to a mapping relation between a preset information entropy and the evaluation value;
1268. acquiring a target shooting parameter corresponding to the target iris;
1269. determining a target low-frequency weight corresponding to the target shooting parameter according to a mapping relation between preset shooting parameters and low-frequency weights, and determining a target high-frequency weight according to the target low-frequency weight;
1270. determining the distribution density of the target feature points according to the high-frequency feature components;
1271. determining a second evaluation value corresponding to the target feature point distribution density according to a mapping relation between the preset feature point distribution density and the evaluation value;
1272. And carrying out weighting operation according to the first evaluation value, the second evaluation value, the target low-frequency weight and the target high-frequency weight to obtain a target image quality evaluation value of the target iris.
In a specific implementation, the electronic device may perform multi-scale feature decomposition on the target-area iris image using a multi-scale decomposition algorithm to obtain a low-frequency feature component and a high-frequency feature component. The multi-scale decomposition algorithm may be at least one of the following: pyramid transform, wavelet transform, contourlet transform, shearlet transform, and so on, without limitation. Further, the low-frequency feature component may be divided into a plurality of regions, whose areas may be equal or different. The low-frequency feature component reflects the subject features of the image, and the high-frequency feature component reflects its detail information.
Further, the electronic device may determine an information entropy corresponding to each of the plurality of regions, obtain a plurality of information entropies, determine an average information entropy and a target mean square error according to the plurality of information entropies, where the information entropy reflects the image information to a certain extent, and the mean square error may reflect the stability of the image information. The mapping relation between the preset mean square error and the adjustment coefficient can be stored in the electronic equipment in advance, and then the target adjustment coefficient corresponding to the target mean square error can be determined according to the mapping relation.
Further, the electronic device may adjust the average information entropy according to the target adjustment coefficient to obtain the target information entropy, where target information entropy = (1 + target adjustment coefficient) × average information entropy. A mapping relation between preset information entropies and evaluation values may be stored in the electronic device in advance; the first evaluation value corresponding to the target information entropy can then be determined from this mapping.
In addition, the electronic device may acquire a target shooting parameter corresponding to the target iris, where the target shooting parameter may be at least one of the following: ISO, exposure time, white balance parameters, focus parameters, and so on, without limitation. A mapping relation between preset shooting parameters and low-frequency weights may be stored in the electronic device in advance; the target low-frequency weight corresponding to the target shooting parameter can then be determined from this mapping, and the target high-frequency weight determined from the target low-frequency weight, with target low-frequency weight + target high-frequency weight = 1.
Further, the electronic device may determine the target feature point distribution density from the high-frequency feature component, where target feature point distribution density = total number of feature points of the high-frequency feature component / area of the region. A mapping relation between preset feature point distribution densities and evaluation values may also be stored in advance; the second evaluation value corresponding to the target feature point distribution density is determined from this mapping, and finally a weighted operation is performed on the first evaluation value, the second evaluation value, the target low-frequency weight, and the target high-frequency weight to obtain the target image quality evaluation value of the target iris, as follows:
target image quality evaluation value = first evaluation value × target low-frequency weight + second evaluation value × target high-frequency weight
Therefore, the image quality evaluation can be performed based on the low-frequency component and the high-frequency component of the iris, and the evaluation parameter which is suitable for the shooting environment, namely the target image quality evaluation value, can be accurately obtained.
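The arithmetic of steps 1264 to 1272 can be sketched end to end. The two mapping relations (entropy to evaluation value, density to evaluation value) are modeled here as illustrative callback functions; the real method would look them up in preset tables.

```python
# Sketch of the quality-evaluation arithmetic (steps 1264-1272). The mapping
# functions and all numeric values below are illustrative assumptions.

import statistics

def quality_score(entropies, adjust_coef, eval_of_entropy, eval_of_density,
                  density, low_weight):
    avg_entropy = statistics.mean(entropies)          # step 1264
    # step 1266: target entropy = (1 + target adjustment coefficient) x average
    target_entropy = (1 + adjust_coef) * avg_entropy
    first_eval = eval_of_entropy(target_entropy)      # step 1267, preset mapping
    second_eval = eval_of_density(density)            # step 1271, preset mapping
    high_weight = 1 - low_weight                      # the two weights sum to 1
    # step 1272: first eval x low-frequency weight + second eval x high-frequency weight
    return first_eval * low_weight + second_eval * high_weight

score = quality_score(
    entropies=[4.0, 5.0, 6.0], adjust_coef=0.1,
    eval_of_entropy=lambda e: min(e / 10, 1.0),  # stand-in mapping
    eval_of_density=lambda d: min(d, 1.0),       # stand-in mapping
    density=0.6, low_weight=0.7,
)  # 0.55 * 0.7 + 0.6 * 0.3 = 0.565
```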
102. And acquiring newly created target problem information, wherein the target problem information comprises a target viewpoint and problem description information.
The target problem information may describe a problem that a user needs to annotate in the BIM model; the problem description information can be understood as the problem content, and the target viewpoint may include at least one of the following parameters: coordinate position, presentation form, display frame shape, and so on, without limitation.
103. And generating a target viewpoint diagram on the target interface based on the target problem information.
The target viewpoint diagram can be a two-dimensional diagram or a three-dimensional diagram, and can be used for marking target problem information, so that the problem can be marked.
By way of illustration, to address the inefficiency and unfriendly operability of creating new problems on drawings, the embodiments of the present application let a user create a problem directly on the interface, where the newly created problem includes the viewpoint corresponding to the problem and the corresponding textual description. In addition, the problem and its viewpoint can be generated from a range the user selects directly on the drawing, or the viewpoint diagram can be generated automatically from the coordinates or grid-line number the user specifies for the problem. This helps the user create problems on drawings flexibly and quickly according to their own needs, improves the efficiency of creating problem annotations, and improves the user experience.
In one possible example, the generating, on the target interface, the target viewpoint diagram based on the target problem information in step 103 may include the following steps:
31. Determining a target display area corresponding to the problem description information, wherein the target display area is a part of the display area of the target interface;
32. and generating the target viewpoint diagram corresponding to the problem description information based on the target viewpoint in the target display area.
In a specific implementation, the problem description information may determine the scale of the display frame. The electronic device may determine a target display area corresponding to the problem description information, where the target display area is a part of the display area of the target interface; the target display area may be specified by the user, or may be determined by the position of the component corresponding to the problem description information. Further, the electronic device may generate a target viewpoint diagram corresponding to the problem description information based on the target viewpoint in the target display area.
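A minimal sketch of steps 31-32 follows. The heuristic of scaling the display area with the length of the description text, and all names, are assumptions made for illustration:

```python
def determine_display_area(interface_area, description, component_pos=None):
    """Choose a sub-area of the target interface for the viewpoint diagram.

    interface_area: (x, y, width, height) of the interface's display area.
    The display-frame scale grows with the length of the description text
    (an assumed heuristic) and is capped at half the interface size.
    """
    x, y, w, h = interface_area
    scale = min(0.2 + 0.01 * len(description), 0.5)
    area_w, area_h = w * scale, h * scale
    if component_pos is not None:
        # Centre the area on the component tied to the problem description.
        cx, cy = component_pos
        return (max(x, cx - area_w / 2), max(y, cy - area_h / 2), area_w, area_h)
    # Otherwise fall back to the interface origin (a user could override this).
    return (x, y, area_w, area_h)
```

The viewpoint diagram would then be drawn inside the returned sub-area.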
Further optionally, when the newly created target problem information is problem information for the first component, the step 103 may further include the following steps:
B1, acquiring first attribute information of the first component;
B2, determining component attribute information which is the same as the first attribute information in the BIM model, and determining a target component corresponding to the component attribute information;
B3, generating a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram.
In this embodiment of the present application, the first attribute information may be at least one of the following: type, location, dimensions, construction period, budget, material, function, use, and the like, which are not limited herein. In a specific implementation, when the newly-built target problem information is problem information for the first component, the electronic device may acquire the first attribute information of the first component, determine component attribute information in the BIM model that is the same as the first attribute information, and determine the target component corresponding to that component attribute information; this is equivalent to searching for a component similar to the first component. The electronic device may then generate a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram.
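The attribute matching described above amounts to an equality search over component attributes. A minimal sketch, with the model represented as a list of attribute dictionaries (an assumption about the data layout):

```python
def find_similar_components(model_components, first_attrs, keys=("type", "material")):
    """Return components whose attribute values equal those of the first
    component on the given keys, i.e. candidates for the target component."""
    return [component for component in model_components
            if all(component.get(k) == first_attrs.get(k) for k in keys)]
```

A reference viewpoint diagram could then be generated for each returned component by reusing the parameters of the target viewpoint diagram.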
It can be seen that the labeling processing method described in the embodiment of the application, applied to an electronic device, starts a target interface, where the target interface is a display interface corresponding to a target BIM model, acquires newly-built target problem information, where the target problem information includes a target viewpoint and problem description information, and generates a target viewpoint diagram on the target interface based on the target problem information. In this way, the display interface can be started quickly and a corresponding viewpoint diagram can be generated based on the relevant information of the newly-built problem, thereby improving the labeling efficiency and operational friendliness of newly-built problems.
In accordance with the embodiment shown in fig. 1, please refer to fig. 2. Fig. 2 is a flow chart of a labeling processing method according to an embodiment of the present application, applied to an electronic device; as shown in the figure, the labeling processing method includes:
201. and starting a target interface, wherein the target interface is a display interface corresponding to the target BIM model.
202. And acquiring newly-built target problem information, wherein the target problem information comprises a target viewpoint and problem description information, and the newly-built target problem information is problem information for a first component.
203. And generating a target viewpoint diagram on the target interface based on the target problem information.
204. First attribute information of the first member is acquired.
205. And determining component attribute information which is the same as the first attribute information in the BIM model, and determining a target component corresponding to the component attribute information.
206. And generating a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram.
The specific descriptions of the steps 201 to 206 may refer to the corresponding steps of the labeling processing method described in fig. 1, and are not repeated herein.
It can be seen that the labeling processing method described in the embodiment of the present application, applied to an electronic device, starts a target interface, where the target interface is a display interface corresponding to a target BIM model, acquires newly-built target problem information, where the target problem information includes a target viewpoint and problem description information, generates a target viewpoint diagram on the target interface based on the target problem information, acquires first attribute information of the first component, determines component attribute information in the BIM model that is the same as the first attribute information, determines a target component corresponding to the component attribute information, and generates a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram. In this way, on the one hand, the display interface can be started quickly and a corresponding viewpoint diagram can be generated based on the relevant information of the newly-built problem, improving the labeling efficiency and operational friendliness of newly-built problems; on the other hand, reference viewpoint diagrams can be generated for components similar to the first component, facilitating subsequent labeling.
In accordance with the above embodiment, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in the figure, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor. In the embodiment of the present application, the programs include instructions for executing the following steps:
Starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
acquiring newly-built target problem information, wherein the target problem information comprises a target viewpoint and problem description information;
and generating a target viewpoint diagram on the target interface based on the target problem information.
It can be seen that the electronic device described in the embodiment of the present application starts a target interface, where the target interface is a display interface corresponding to a target BIM model, acquires newly-built target problem information, where the target problem information includes a target viewpoint and problem description information, and generates a target viewpoint diagram on the target interface based on the target problem information. In this way, the display interface can be started quickly and a corresponding viewpoint diagram can be generated based on the relevant information of the newly-built problem, thereby improving the labeling efficiency and operational friendliness of newly-built problems.
In one possible example, in terms of the generating a target viewpoint diagram at the target interface based on the target issue information, the program includes instructions for:
Determining a target display area corresponding to the problem description information, wherein the target display area is a part of the display area of the target interface;
And generating the target viewpoint diagram corresponding to the problem description information based on the target viewpoint in the target display area.
In one possible example, when the newly created target problem information is problem information for the first component, the above program further includes instructions for performing the steps of:
Acquiring first attribute information of the first component;
determining component attribute information which is the same as the first attribute information in the BIM model, and determining a target component corresponding to the component attribute information;
and generating a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram.
In one possible example, in the aspect of the start target interface, the program includes instructions for:
acquiring a target iris image of a target user;
Matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
and when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
In one possible example, the above-described program further includes instructions for performing the steps of:
Acquiring an input target character string;
comparing the target character string with a preset character string;
and executing the step of acquiring the target iris image of the target user when the comparison of the target character string and the preset character string fails.
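The fallback flow above (character-string comparison first, iris recognition only on failure) can be sketched as follows; every callable is an illustrative stand-in for functionality the source leaves unspecified:

```python
def start_target_interface(entered_string, preset_string,
                           capture_iris, match_iris, load_login_info):
    """Try the character-string comparison first; fall back to iris
    recognition only when the comparison fails."""
    if entered_string == preset_string:
        return load_login_info()
    target_iris = capture_iris()      # acquire the target iris image
    if match_iris(target_iris):       # match against the preset iris image
        return load_login_info()      # registered user: start the interface
    return None                       # authentication failed
```

Passing the matching and capture steps in as callables keeps the sketch independent of any particular biometric library.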
In one possible example, in said matching the target iris image with a preset iris image, the program comprises instructions for:
extracting the outline of the target iris image to obtain an outline image;
Extracting characteristic points of the contour image to obtain a characteristic point distribution map;
Dividing the characteristic point distribution map into a plurality of areas, wherein the area of each area is larger than a preset threshold value;
Determining the distribution density of the characteristic points of each of the plurality of areas to obtain the distribution density of the plurality of characteristic points;
Selecting a maximum value from the distribution densities of the plurality of characteristic points, and acquiring a target area iris image corresponding to the maximum value from the target iris image;
acquiring a target image quality evaluation value corresponding to the target area iris image and a target area of the target area iris image;
Determining a threshold adjustment parameter corresponding to the target image quality evaluation value and a target weight pair, wherein the target weight pair comprises a first weight and a second weight, the first weight is a weight corresponding to contour matching, and the second weight is a weight corresponding to feature point matching;
acquiring a target iris area of the target iris image;
Adjusting a preset iris recognition threshold according to the threshold adjustment parameter, the target area and the target iris area to obtain a target iris recognition threshold;
acquiring a first contour set and a first feature point set of the iris image of the target area;
acquiring a second contour set and a second feature point set corresponding to the preset iris template;
matching the first profile set with the second profile set to obtain a first matching value;
Matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
Performing weighting operation according to the first matching value, the second matching value, the first weight and the second weight to obtain a target matching value;
And when the target matching value is larger than the target iris recognition threshold value, determining that the target iris image is successfully matched with the preset iris image.
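Two pieces of the pipeline above lend themselves to a short sketch: selecting the region with the highest feature-point density, and the final weighted match against the adjusted threshold. The grid size, image size, and all names are illustrative assumptions:

```python
def densest_region(points, grid=(4, 4), size=(64, 64)):
    """Divide the feature-point map into grid cells and return the (row, col)
    of the cell with the highest feature-point density (equal-area cells, so
    the densest cell is the one with the most points)."""
    rows, cols = grid
    counts = [[0] * cols for _ in range(rows)]
    for x, y in points:
        r = min(int(y / size[1] * rows), rows - 1)
        c = min(int(x / size[0] * cols), cols - 1)
        counts[r][c] += 1
    return max(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: counts[rc[0]][rc[1]])

def target_match(contour_score, feature_score, w1, w2, threshold):
    """Weighted combination of the contour and feature-point matching values,
    compared against the (adjusted) target iris recognition threshold."""
    return contour_score * w1 + feature_score * w2 > threshold
```

The iris image cropped to the densest cell would then supply the contour and feature-point sets used in the weighted comparison.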
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional units of the electronic device according to the method example, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
Fig. 4A is a block diagram showing functional units of the labeling processing apparatus 400 according to the embodiment of the present application. The labeling processing apparatus 400 is applied to an electronic device, and the apparatus 400 includes: a starting unit 401, an obtaining unit 402, and a generating unit 403, wherein,
The starting unit 401 is configured to start a target interface, where the target interface is a display interface corresponding to a target BIM model;
The acquiring unit 402 is configured to acquire newly-built target problem information, where the target problem information includes a target viewpoint and problem description information;
the generating unit 403 is configured to generate a target viewpoint diagram on the target interface based on the target problem information.
It can be seen that the labeling processing apparatus described in the embodiment of the application, applied to an electronic device, starts a target interface, where the target interface is a display interface corresponding to a target BIM model, acquires newly-built target problem information, where the target problem information includes a target viewpoint and problem description information, and generates a target viewpoint diagram on the target interface based on the target problem information. In this way, the display interface can be started quickly and a corresponding viewpoint diagram can be generated based on the relevant information of the newly-built problem, thereby improving the labeling efficiency and operational friendliness of newly-built problems.
In one possible example, in the aspect of generating a target viewpoint diagram on the target interface based on the target problem information, the generating unit 403 is specifically configured to:
Determining a target display area corresponding to the problem description information, wherein the target display area is a part of the display area of the target interface;
And generating the target viewpoint diagram corresponding to the problem description information based on the target viewpoint in the target display area.
In one possible example, when the newly-built target problem information is problem information for the first component, as shown in fig. 4B, fig. 4B is a modified structure of the labeling processing apparatus 400 shown in fig. 4A; compared with fig. 4A, the apparatus 400 may further include: a determination unit 404, wherein,
The acquiring unit 402 is further configured to acquire first attribute information of the first member;
The determining unit 404 is configured to determine component attribute information that is the same as the first attribute information in the BIM model, and determine a target component corresponding to the component attribute information;
The generating unit 403 is further configured to generate a reference viewpoint diagram corresponding to the target member according to the member attribute information and the target viewpoint diagram.
In one possible example, in terms of the start target interface, the start unit 401 is specifically configured to:
acquiring a target iris image of a target user;
Matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
and when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
In one possible example, the apparatus is further specifically configured to:
Acquiring an input target character string;
comparing the target character string with a preset character string;
and executing the step of acquiring the target iris image of the target user when the comparison of the target character string and the preset character string fails.
In one possible example, in the matching the target iris image with a preset iris image, the starting unit 401 is specifically configured to:
extracting the outline of the target iris image to obtain an outline image;
Extracting characteristic points of the contour image to obtain a characteristic point distribution map;
Dividing the characteristic point distribution map into a plurality of areas, wherein the area of each area is larger than a preset threshold value;
Determining the distribution density of the characteristic points of each of the plurality of areas to obtain the distribution density of the plurality of characteristic points;
Selecting a maximum value from the distribution densities of the plurality of characteristic points, and acquiring a target area iris image corresponding to the maximum value from the target iris image;
acquiring a target image quality evaluation value corresponding to the target area iris image and a target area of the target area iris image;
Determining a threshold adjustment parameter corresponding to the target image quality evaluation value and a target weight pair, wherein the target weight pair comprises a first weight and a second weight, the first weight is a weight corresponding to contour matching, and the second weight is a weight corresponding to feature point matching;
acquiring a target iris area of the target iris image;
Adjusting a preset iris recognition threshold according to the threshold adjustment parameter, the target area and the target iris area to obtain a target iris recognition threshold;
acquiring a first contour set and a first feature point set of the iris image of the target area;
acquiring a second contour set and a second feature point set corresponding to the preset iris template;
matching the first profile set with the second profile set to obtain a first matching value;
Matching the first characteristic point set with the second characteristic point set to obtain a second matching value;
Performing weighting operation according to the first matching value, the second matching value, the first weight and the second weight to obtain a target matching value;
And when the target matching value is larger than the target iris recognition threshold value, determining that the target iris image is successfully matched with the preset iris image.
It may be understood that the functions of each program module of the labeling processing apparatus of the present embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program makes a computer execute part or all of the steps of any one of the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part of it contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned memory includes: a USB disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the above description of the embodiments is provided only to help understand the method and core ideas of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the present application; in summary, the contents of this specification should not be construed as limiting the present application.

Claims (6)

1. A method for labeling, applied to an electronic device, the method comprising:
Starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
acquiring newly-built target problem information, wherein the target problem information comprises a target viewpoint and problem description information; the target viewpoint includes the following parameters: coordinate position, display form, display frame shape; the target problem information comprises problems to be marked in a BIM model by a user, and the problem description information comprises problem contents;
Generating a target viewpoint diagram on the target interface based on the target problem information, wherein the target viewpoint diagram is used for labeling the target problem information; the target viewpoint diagram comprises a three-dimensional diagram;
The generating a target viewpoint diagram on the target interface based on the target problem information comprises the following steps:
determining a target display area corresponding to the problem description information, wherein the target display area is a part of the display area of the target interface; the target display area is determined by the position of the component corresponding to the problem description information;
Generating the target viewpoint diagram corresponding to the problem description information based on the target viewpoint in the target display area;
Wherein when the newly created target problem information is problem information for the first component, the method further includes:
Acquiring first attribute information of the first component; the first attribute information includes at least one of: type, location, scale, construction period, budget, material, function, use;
determining component attribute information which is the same as the first attribute information in the BIM model, and determining a target component corresponding to the component attribute information;
And generating a reference viewpoint diagram corresponding to the target component according to the component attribute information and the target viewpoint diagram, wherein the reference viewpoint diagram is used for finding a component similar to the component after marking the component.
2. The method of claim 1, wherein the initiating the target interface comprises:
acquiring a target iris image of a target user;
Matching the target iris image with a preset iris image, wherein the preset iris image is an iris image of a registered user;
and when the target iris image is successfully matched with the preset iris image, acquiring login information of the registered user, and starting the target interface according to the login information.
3. The method according to claim 2, wherein the method further comprises:
Acquiring an input target character string;
comparing the target character string with a preset character string;
and executing the step of acquiring the target iris image of the target user when the comparison of the target character string and the preset character string fails.
4. An annotation processing apparatus for application to an electronic device, the apparatus comprising: a starting unit, an obtaining unit and a generating unit, wherein,
The starting unit is used for starting a target interface, wherein the target interface is a display interface corresponding to a target BIM model;
The acquisition unit is used for acquiring newly-built target problem information, wherein the target problem information comprises a target viewpoint and problem description information; the target viewpoint includes the following parameters: coordinate position, display form, display frame shape; the target problem information comprises problems to be marked in a BIM model by a user, and the problem description information comprises problem contents;
The generating unit is used for generating a target viewpoint diagram on the target interface based on the target problem information, and the target viewpoint diagram is used for labeling the target problem information; the target viewpoint diagram comprises a three-dimensional diagram;
Wherein, in the aspect of generating a target viewpoint diagram on the target interface based on the target problem information, the generating unit is specifically configured to:
determining a target display area corresponding to the problem description information, wherein the target display area is a part of the display area of the target interface; the target display area is determined by the position of the component corresponding to the problem description information;
Generating the target viewpoint diagram corresponding to the problem description information based on the target viewpoint in the target display area;
Wherein when the newly created target problem information is problem information for the first component, the apparatus further includes: a determination unit, wherein,
The acquisition unit is further used for acquiring first attribute information of the first component; the first attribute information includes at least one of: type, location, scale, construction period, budget, material, function, use;
The determining unit is used for determining component attribute information which is the same as the first attribute information in the BIM model and determining a target component corresponding to the component attribute information;
The generating unit is further configured to generate a reference viewpoint diagram corresponding to the target member according to the member attribute information and the target viewpoint diagram, where the reference viewpoint diagram is used to find a member similar to the member after the member is marked.
5. An electronic device, comprising a processor and a memory, the memory being configured to store one or more programs, wherein the one or more programs are configured to be executed by the processor and comprise instructions for performing the steps in the method of any one of claims 1-3.
6. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform the method according to any one of claims 1-3.
CN202010603778.7A 2020-06-29 2020-06-29 Labeling processing method, electronic equipment and related products Active CN111832255B (en)

Publications (2)

Publication Number Publication Date
CN111832255A CN111832255A (en) 2020-10-27
CN111832255B true CN111832255B (en) 2024-05-14

Family

ID=72898273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010603778.7A Active CN111832255B (en) 2020-06-29 2020-06-29 Labeling processing method, electronic equipment and related products

Country Status (1)

Country Link
CN (1) CN111832255B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104065817A (en) * 2014-06-16 2014-09-24 惠州Tcl移动通信有限公司 Mobile terminal identity authentication processing method and system based on iris identification
CN104243500A (en) * 2014-10-13 2014-12-24 步步高教育电子有限公司 Intelligent login method and system for users
CN104679846A (en) * 2015-02-11 2015-06-03 广州拓欧信息技术有限公司 Method and system for describing building information modeling by utilizing XML (eXtensible Markup Language) formatted data
CN107368996A (en) * 2017-06-09 2017-11-21 上海嘉实(集团)有限公司 The problem of live project processing/monitoring and managing method/system, storage medium, terminal
CN109558047A (en) * 2018-09-20 2019-04-02 中建科技有限公司深圳分公司 Property based on BIM light weighed model reports method, apparatus and terminal device for repairment
CN109726647A (en) * 2018-12-14 2019-05-07 广州文远知行科技有限公司 Mask method, device, computer equipment and the storage medium of point cloud
CN110704904A (en) * 2019-09-12 2020-01-17 国网上海市电力公司 Multi-software collaborative transformer substation three-dimensional planning method
CN110807216A (en) * 2019-09-26 2020-02-18 杭州鲁尔物联科技有限公司 Image-based bridge BIM model crack visualization creation method
CN111026644A (en) * 2019-11-20 2020-04-17 东软集团股份有限公司 Operation result labeling method and device, storage medium and electronic equipment
CN111090903A (en) * 2019-12-16 2020-05-01 万翼科技有限公司 BIM-based component statistical method and related device

Similar Documents

Publication Publication Date Title
US12014471B2 (en) Generation of synthetic 3-dimensional object images for recognition systems
CN110310175B (en) System and method for mobile augmented reality
US11586785B2 (en) Information processing apparatus, information processing method, and program
US10740963B2 (en) 3D virtual environment generating method and device
Müller et al. Image-based procedural modeling of facades
US20170278308A1 (en) Image modification and enhancement using 3-dimensional object model based recognition
CN111814620B (en) Face image quality evaluation model establishment method, optimization method, medium and device
EP2704055A1 (en) Determining space to display content in augmented reality
US9898860B2 (en) Method, apparatus and terminal for reconstructing three-dimensional object
CN105096353A (en) Image processing method and device
CN108597034B (en) Method and apparatus for generating information
US20130169621A1 (en) Method of creating and transforming a face model and related system
CN111783910A (en) Building project management method, electronic equipment and related products
US20160110909A1 (en) Method and apparatus for creating texture map and method of creating database
CN111783561A (en) Picture examination result correction method, electronic equipment and related products
CN111832255B (en) Labeling processing method, electronic equipment and related products
CN112102145B (en) Image processing method and device
KR102221152B1 (en) Apparatus for providing a display effect based on posture of object, method thereof and computer readable medium having computer program recorded therefor
CN111738087B (en) Method and device for generating face model of game character
CN105631938B (en) Image processing method and electronic equipment
EP3104337A1 (en) Visualising computer models of urban environments
CN110033420B (en) Image fusion method and device
CN111222448A (en) Image conversion method and related product
KR102423421B1 (en) Method of mornitoring home training using three dimensional modeling and server performing the same
CN108446671A (en) A kind of face tracking methods and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230703

Address after: A601, Zhongke Naneng Building, No. 06 Yuexing 6th Road, Gaoxin District Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518051

Applicant after: Shenzhen Wanyi Digital Technology Co.,Ltd.

Address before: 519000 room 105-24914, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province (centralized office area)

Applicant before: WANYI TECHNOLOGY Co.,Ltd.

GR01 Patent grant