CN113778281B - Auxiliary information generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113778281B
Authority
CN
China
Prior art keywords
auxiliary information
vertex
input
rule
composition
Prior art date
Legal status
Active
Application number
CN202111086983.1A
Other languages
Chinese (zh)
Other versions
CN113778281A (en)
Inventor
吕琬军
李辉
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202111086983.1A
Publication of CN113778281A
Application granted
Publication of CN113778281B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on GUIs, based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487: Interaction techniques based on GUIs, using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/20: Drawing from basic elements, e.g. lines or circles
    • G06T 11/206: Drawing of charts or graphs
    • G06T 11/60: Editing figures and text; combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides an auxiliary information generation method and apparatus, an electronic device, and a storage medium. The method includes: determining a target geometry; obtaining first auxiliary information input for a first vertex of the target geometry; parsing the first auxiliary information to determine its composition rule; obtaining input content for a second vertex of the target geometry; and generating second auxiliary information for the second vertex according to the input content and the composition rule. In the embodiment of the present application, once auxiliary information has been labeled on one vertex of the determined target geometry, labeling the other vertices only requires the user to input partial information at each vertex, and the remaining information is completed automatically; the user does not need to input complete auxiliary information at every vertex, which improves the speed of labeling the vertices of the target geometry.

Description

Auxiliary information generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a method and apparatus for generating auxiliary information, an electronic device, and a storage medium.
Background
In some scenarios, after a user draws a figure on an electronic device, auxiliary information must be added to the figure's vertices to distinguish them. At present, however, the user can only label the vertices one by one, so labeling is slow.
Disclosure of Invention
The present application aims to provide an auxiliary information generation method and apparatus, an electronic device, and a storage medium, via the following technical solution:
According to a first aspect of embodiments of the present disclosure, there is provided an auxiliary information generating method, the method including:
Determining a target geometry;
Obtaining first auxiliary information input for a first vertex of the target geometry;
Analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
Obtaining input content for a second vertex of the target geometry;
Generating second auxiliary information for the second vertex according to the input content and the composition rule.
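The method of the first aspect can be sketched as follows. This is a minimal sketch with hypothetical function names, assuming a simple composition rule of the form "one capital letter followed by a fixed suffix", such as A'; the patent itself does not prescribe this representation.

```python
import re

def parse_composition_rule(first_info):
    # Parse first auxiliary information such as "A'" into a composition rule.
    # Assumed rule shape: one capital letter (the user-variable component)
    # followed by a fixed suffix (the content to be supplemented).
    m = re.fullmatch(r"([A-Z])(.*)", first_info)
    if m is None:
        raise ValueError("unrecognized auxiliary information: %r" % first_info)
    return {"variable_category": "capital letter", "suffix": m.group(2)}

def generate_auxiliary_info(input_content, rule):
    # Combine the user's partial input with the content to be supplemented,
    # per the composition rule derived from the first vertex.
    return input_content + rule["suffix"]

rule = parse_composition_rule("A'")          # first auxiliary information of the first vertex
second = generate_auxiliary_info("B", rule)  # the user types only "B" at the second vertex
print(second)  # B'
```

The user's single keystroke "B" is expanded to the full label B', matching the auto-completion described in the abstract.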
With reference to the first aspect, in a first possible implementation manner, the determining the target geometry includes:
Receiving a plurality of input line segments;
and determining the geometric figure formed by the line segments as the target geometry.
With reference to the first aspect, in a second possible implementation manner, the parsing the first auxiliary information includes:
acquiring an image of a display area of the first auxiliary information; and
processing the image to obtain the composition rule of the first auxiliary information.
With reference to the first aspect, in a third possible implementation manner, the processing the image to obtain a composition rule of the first auxiliary information includes:
performing character recognition on the image to obtain a recognition result, and determining the composition rule of the first auxiliary information according to the recognition result;
or
processing the image based on an analysis engine to obtain the composition rule of the first auxiliary information output by the analysis engine.
With reference to the first aspect, in a fourth possible implementation manner, the composition rule of the first auxiliary information includes:
the constituent components of the first auxiliary information, and the relative positional relationships between the constituent components.
With reference to the first aspect, in a fifth possible implementation manner, the generating second auxiliary information for the second vertex according to the input content and the composition rule includes:
determining content to be supplemented based on the composition rule; and
combining the content to be supplemented with the input content according to the composition rule to obtain the second auxiliary information for the second vertex.
With reference to the first aspect, in a sixth possible implementation manner, the input content corresponds to a first constituent component of the first auxiliary information, and the content to be supplemented corresponds to the constituent components of the first auxiliary information other than the first constituent component.
According to a second aspect of embodiments of the present disclosure, there is provided an auxiliary information generating apparatus, the apparatus including:
a determining module, configured to determine a target geometry;
a first obtaining module, configured to obtain first auxiliary information input for a first vertex of the target geometry;
a parsing module, configured to parse the first auxiliary information to determine a composition rule of the first auxiliary information;
a second obtaining module, configured to obtain input content for a second vertex of the target geometry; and
a generating module, configured to generate second auxiliary information for the second vertex according to the input content and the composition rule.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a memory for storing a program; and
a processor for calling and executing the program in the memory, wherein the steps of the auxiliary information generation method according to the first aspect are implemented by executing the program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the auxiliary information generating method as described in the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product directly loadable into an internal memory of a computer, where the memory is comprised in an electronic device as described in the third aspect and includes software code, and the computer program, after being loaded and executed by the computer, implements the steps of the auxiliary information generation method as described in the first aspect.
As can be seen from the above, the auxiliary information generation method and apparatus, electronic device, and storage medium provided by the present application determine a target geometry, obtain first auxiliary information input for a first vertex of the target geometry, parse the first auxiliary information to determine its composition rule, obtain input content for a second vertex of the target geometry, and generate second auxiliary information for the second vertex according to the input content and the composition rule. In the present application, once auxiliary information has been labeled on one vertex of the determined target geometry, labeling the other vertices only requires the user to input partial information at each vertex, and the remaining information is completed automatically; the user does not need to input complete auxiliary information at every vertex, which improves the speed of labeling the vertices of the target geometry.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed for the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of an implementation manner of a hardware architecture according to an embodiment of the present application;
fig. 2 is a flowchart of an auxiliary information generating method according to an embodiment of the present application;
fig. 3a to 3h are schematic diagrams illustrating obtaining input contents of a second vertex according to an embodiment of the present application;
Fig. 4 is a block diagram of an auxiliary information generating apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description, in the claims, and in the above drawings, if any, are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in sequences other than those illustrated herein.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The embodiment of the application provides an auxiliary information generation method, an auxiliary information generation device, electronic equipment and a storage medium. Before introducing the technical scheme provided by the embodiment of the application, the hardware architecture related to the embodiment of the application is described.
In an alternative implementation manner, a first hardware architecture according to an embodiment of the present application includes: an electronic device.
By way of example, the electronic device may be any electronic product that can interact with a user through one or more of a keyboard, a touchpad, a touch screen, a remote control, voice interaction, a handwriting device, and the like, such as a mobile phone, a notebook computer, a tablet computer, a palmtop computer, a personal computer, a wearable device, a smart television, or a PAD.
For example, a user may input first auxiliary information of a first vertex of a target geometry through an electronic device. The electronic device may analyze the first auxiliary information to obtain a composition rule of the first auxiliary information, and after a user inputs the input content of the second vertex of the target geometric figure through the electronic device, the electronic device may generate second auxiliary information for the second vertex according to the input content and the composition rule.
As shown in fig. 1, a schematic diagram of an implementation manner of a second hardware architecture according to an embodiment of the present application is shown, where the hardware architecture includes: an electronic device 11 and a server 12.
By way of example, the electronic device 11 may be any electronic product that can interact with a user through one or more of a keyboard, a touchpad, a touch screen, a remote control, voice interaction, a handwriting device, and the like, such as a mobile phone, a notebook computer, a tablet computer, a palmtop computer, a personal computer, a wearable device, a smart television, or a PAD.
The server 12 may be, for example, a single server, a server cluster including a plurality of servers, or a cloud computing server center. The server 12 may include a processor, a memory, a network interface, and the like.
Illustratively, the user may input first auxiliary information for the first vertex of the target geometry via the electronic device 11. The electronic device 11 sends the first auxiliary information to the server 12; the server 12 parses the first auxiliary information to obtain its composition rule and sends the composition rule back to the electronic device 11. After the user inputs the input content for the second vertex of the target geometry through the electronic device 11, either the electronic device 11 generates the second auxiliary information for the second vertex according to the input content and the composition rule, or the electronic device 11 sends the input content to the server 12, and the server 12 generates the second auxiliary information and returns it to the electronic device 11.
Those skilled in the art will appreciate that the above-described electronic devices and servers are merely examples, and that other existing or future-occurring electronic devices or servers, as applicable to the present disclosure, are intended to be within the scope of the present disclosure and are incorporated herein by reference.
The method for generating the auxiliary information provided by the embodiment of the application is described below with reference to a hardware architecture related to the embodiment of the application.
As shown in fig. 2, a flowchart of an auxiliary information generation method according to an embodiment of the present application is provided. The method may be applied to the electronic device in the first hardware architecture, or to the server 12 in the second hardware architecture, and may involve the following steps S21 to S25.
Step S21: a target geometry is determined.
The target geometry may have any shape and may be, for example, a planar figure or a solid figure.
By way of example, the planar figure may be any one of the following: a line segment (curved or straight), a sector, an arch, or a polygon.
Illustratively, the solid figure may be any one of the following: a polyhedron or a cylinder.
Illustratively, there are a variety of ways to determine the target geometry, and embodiments of the present application provide, but are not limited to, the following four.
The first way to determine the target geometry: the display screen of the electronic device serves as a handwriting device. The electronic device receives a geometric figure input by the user through the display screen and determines that geometric figure as the target geometry.
For example, a user may draw a geometric figure on the display screen.
The second way to determine the target geometry: the electronic device is connected to a handwriting device. The electronic device receives a geometric figure input by the user through the connected handwriting device and determines that geometric figure as the target geometry.
Illustratively, the handwriting device may be a drawing board.
For example, a user may draw a geometric figure on the handwriting device.
In an alternative implementation, in the first or second way of determining the target geometry, the process of receiving the target geometry input by the user includes: receiving a plurality of input line segments, and determining the geometric figure formed by the line segments as the target geometry.
The third way to determine the target geometry: an image containing a geometric figure is captured by a camera of the electronic device, and the geometric figure contained in the image is determined as the target geometry.
The fourth way to determine the target geometry: a geometric figure sent by another terminal device is received, and the received geometric figure is determined as the target geometry.
Illustratively, the target geometry includes one or more geometries.
In an alternative implementation, if, for any two of the plurality of geometric figures, at least one line or at least one face of the two figures intersects, the plurality of geometric figures are determined together as the target geometry. That is, the auxiliary information of the vertices of the plurality of geometric figures follows the same composition rule.
In an alternative implementation, if, for any two of the plurality of geometric figures, no line and no face of the two figures intersects, each of the plurality of geometric figures is determined as a separate target geometry. That is, the composition rules of the auxiliary information of the vertices of the different geometric figures may differ (the specific composition rule depends on the first auxiliary information of the first vertex input by the user).
In an alternative implementation, even if at least one line or at least one face of any two of the plurality of geometric figures intersects, each of the plurality of geometric figures may still be determined as a separate target geometry. That is, the composition rules of the auxiliary information of the vertices of the different geometric figures may differ (the specific composition rule depends on the first auxiliary information of the first vertex input by the user).
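The first grouping option above can be sketched as connected components over a pairwise intersection test. This is a sketch under assumptions: `intersects` is a hypothetical predicate supplied by the drawing layer, and a figure is represented here merely as a set of vertex coordinates.

```python
from itertools import combinations

def group_into_targets(figures, intersects):
    # Figures that intersect, directly or transitively, are merged into one
    # target geometry; figures that intersect nothing stay separate targets.
    parent = list(range(len(figures)))

    def find(i):
        # Union-find with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in combinations(range(len(figures)), 2):
        if intersects(figures[a], figures[b]):
            parent[find(a)] = find(b)

    groups = {}
    for i, fig in enumerate(figures):
        groups.setdefault(find(i), []).append(fig)
    return list(groups.values())

# Toy representation: a figure is a set of vertex coordinates; two figures
# "intersect" here if they share a vertex.
figs = [{(0, 0), (1, 0)}, {(1, 0), (1, 1)}, {(5, 5), (6, 5)}]
print([len(g) for g in group_into_targets(figs, lambda f, g: bool(f & g))])  # [2, 1]
```

The first two figures share the vertex (1, 0) and therefore form one target geometry; the third stands alone.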
Step S22: first auxiliary information input for a first vertex of the target geometry is obtained.
The vertices of the target geometry may include one or more first points and/or one or more second points.
Wherein the first point includes, but is not limited to, at least one of: the intersection point of two edges of a corner in the target geometry, the highest point of a curve in the target geometry, the end point of a line segment (straight line segment or curve segment) in the target geometry, and the intersection point of two line segments in the target geometry.
Wherein the second point includes, but is not limited to: the target geometry has points of preset mark symbols.
Illustratively, the preset marking symbol may be any one of the following, or a combination of at least two of the following: arrows, solid dots, hollow dots.
Illustratively, the first point is different from the second point.
If the user needs to mark a point other than a first point in the target geometry, for example the center of a cube, the preset mark symbol may be placed at the coordinates of the cube's center. If the electronic device recognizes the preset mark symbol at the center of the cube, that center can be used as a vertex.
For example, the first vertex may comprise any one or more vertices in the target geometry.
The first vertex mentioned in step S22 is explained below with reference to two application scenarios.
First application scenario: the target geometry determined in step S21 already contains vertices labeled with auxiliary information.
The number of vertices to which the auxiliary information has been labeled may be one or more.
If the number of vertices already labeled with auxiliary information is 1, the first vertex is that vertex; if the number is greater than 1, the first vertex may be any one of those vertices, or the first vertex may include all of them.
The second application scenario: the vertices of the target geometry determined in step S21 are not labeled with auxiliary information.
For example, if auxiliary information is input for one vertex of the target geometry, that vertex is the first vertex, and its auxiliary information is the first auxiliary information.
For example, if auxiliary information is input for a plurality of vertices of the target geometry respectively, those vertices are all first vertices, and the auxiliary information corresponding to each of them is first auxiliary information.
Illustratively, the auxiliary information of a vertex is a marker symbol for marking the vertex, and the auxiliary information of different vertices of the same target geometry is different.
In an alternative implementation, the composition rules of the auxiliary information corresponding to the vertices of the same target geometry are the same.
In an alternative implementation, the composition rules of the auxiliary information corresponding to different vertices of the same target geometry may differ.
For example, the vertices of the target geometry may be divided into at least two vertex sets, each containing a plurality of vertices; vertices in the same vertex set share the same composition rule for their auxiliary information, while vertices in different vertex sets have different composition rules.
In this case, the first vertices corresponding to different vertex sets are different, and for each vertex set, any one or more of its vertices may serve as its first vertex. For each vertex set, first auxiliary information input for a first vertex of that set is obtained.
Step S23: and analyzing the first auxiliary information to determine the composition rule of the first auxiliary information.
It will be appreciated that for simple auxiliary information, the composition rule may be derived from the first auxiliary information of a single first vertex; for complex auxiliary information, the composition rule may need to be derived from the first auxiliary information of a plurality of first vertices.
For example, suppose the user desires the composition rule: one capital letter followed by the character ', with the character ' placed as a superscript of the capital letter, for instance the auxiliary information A'. In this case, one first vertex is sufficient.
For example, suppose the user desires the composition rule: capital letters ordered sequentially in the order of the 26 English letters, e.g. the auxiliary information of two vertices is BCD and CDE respectively. In this case, a plurality of first vertices is needed.
In summary, the composition rule includes, but is not limited to, at least one of the following: the constituent components of the auxiliary information, the relative positional relationships between the constituent components, and the sequential relationships between the constituent components.
Exemplary constituent components include, but are not limited to, at least one of the following categories: capital letters, lowercase letters, symbols, capital numerals, Arabic numerals, and emoticons.
Illustratively, the category to which a constituent component belongs is different from the category to which the preset mark symbol belongs.
Illustratively, the relative positional relationships between the constituent components of the auxiliary information include, but are not limited to, at least one of: a superscript relationship, a subscript relationship, and a normal (inline) positional relationship.
For example, if the auxiliary information is A^B, the relative positional relationship between the component at the second position and the component at the first position is: component B at the second position is a superscript of component A at the first position. If the auxiliary information is C_D, the relationship is: component D at the second position is a subscript of component C at the first position. If the auxiliary information is EF, component E at the first position and component F at the second position are in a normal positional relationship.
For example, if the constituent components of the auxiliary information belong to a plurality of categories, the sequential relationships between the constituent components include the order of those categories, e.g., a capital letter in the first position, a lowercase letter in the second position, and a character in the third position.
For example, the sequential relationships may further include the ordering between components at the same position in the auxiliary information of successively labeled vertices. For instance, if the component at the first position of the auxiliary information of the previously labeled vertex is A, and components at the same position follow alphabetical order, then the component at the first position of the auxiliary information of the next labeled vertex is B.
For example, if the auxiliary information contains a plurality of constituent components belonging to the same category, the sequential relationships further include the ordering between components of that category. Suppose the components of the same category are ordered sequentially in the order of the 26 English letters, and the auxiliary information of a vertex contains three capital-letter components: if the component at the first position is B, the components at the second and third positions are C and D in order.
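The same-category ordering above (consecutive capital letters, with successively labeled vertices advancing by one letter, as in the BCD and CDE example) can be sketched as follows; the rule shape and function names are assumptions for illustration.

```python
def expand_from_first(first_letter, count):
    # Components of the same category ordered by the 26 English letters:
    # given the component at the first position, fill the remaining positions.
    start = ord(first_letter)
    return "".join(chr(start + i) for i in range(count))

def next_vertex_label(label):
    # Successively labeled vertices advance the whole run by one letter,
    # e.g. BCD -> CDE (wrap-around past Z is not handled in this sketch).
    return expand_from_first(chr(ord(label[0]) + 1), len(label))

print(expand_from_first("B", 3))  # BCD
print(next_vertex_label("BCD"))   # CDE
```

Given the first vertex's label BCD, the second and third positions are completed automatically, and the next vertex's label is derived without any further user input.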
Illustratively, the foregoing description of the composition rule is merely an example; the embodiments of the present application do not limit the composition rule. For example, the composition rule may further include the format of the constituent components of the auxiliary information.
Exemplary formats include, but are not limited to, at least one of: underline, shading, bold, font size, glyph style, and typeface.
Step S24: input content for a second vertex of the target geometry is obtained.
Illustratively, the input content is partial information of the second auxiliary information (in the present application, the auxiliary information of the second vertex is referred to as second auxiliary information).
There are various implementation manners of step S24; the embodiments of the present application provide, but are not limited to, the following two.
The first implementation of step S24 includes the following steps a11 to a13.
Step A11: determining, based on the composition rule, a first number of constituent components included in the auxiliary information of a vertex.
Step A12: if a touch instruction is detected in the display area of the to-be-input auxiliary information corresponding to a vertex, that vertex is determined to be the second vertex, and the first number of input indicators is displayed.
Illustratively, the first number of input indicators may be displayed in the display area, or at any other position.
For example, the touch instruction is obtained when a touch on the display area corresponding to the vertex is detected; the instruction may also be obtained, for example, when a voice input indicating that auxiliary information is to be entered for the vertex is received.
Illustratively, the input indicator may be a box or an underline.
Step A13: input content is received at a location corresponding to any input indicator of the first number of input indicators.
To facilitate understanding of the first implementation of step S24 provided in the embodiment of the present application, an example is given below.
Fig. 3a to 3c are schematic diagrams illustrating an implementation manner of obtaining input content of the second vertex according to an embodiment of the present application.
Fig. 3a takes a rectangle as an example of the target geometry. Assume that the first number of constituent components included in the auxiliary information of a vertex of the target geometry is 2, and that the composition rule of the auxiliary information includes: the first position is a capital letter, and the second position is the character ', which is a superscript of the capital letter at the first position.
Since the composition rule of the auxiliary information is relatively simple, the number of first vertices may be one; for example, the first auxiliary information of the first vertex is A'. If the user clicks the second vertex of the rectangle (as shown in fig. 3a, the user clicks the display area of the second vertex), 2 input indicators may be displayed in the display area corresponding to the second vertex, as shown in fig. 3b.
Figs. 3b and 3c take the underline _ as an example of the input indicator.
Illustratively, the user may input the corresponding component at any of the positions; for example, a component may be entered at at least one of the 2 input indicators to obtain the input content. As shown in fig. 3c, the letter B is filled in at the first position, and the letter B is the input content.
Fig. 3d to 3f are schematic diagrams illustrating another implementation manner of obtaining the input content of the second vertex according to the embodiment of the present application.
Fig. 3d takes a rectangle as an example of the target geometry. Assume that the first number of constituent components included in the auxiliary information of a vertex of the target geometry is 4, and that the composition rule of the auxiliary information includes: the first position is a capital letter; the second position is a capital letter; the third position is the character ', which is a superscript of the capital letter at the second position; the fourth position is a capital letter; and the capital letters at the first, second and fourth positions follow the 26-letter English alphabetical order.
Since the composition rule of the auxiliary information is relatively complex, there may be a plurality of first vertices; this example is described with the number of first vertices being 2. As shown in fig. 3d, assume the first auxiliary information of the two first vertices is AB'C and BC'D, respectively. If the user clicks the second vertex of the rectangle (as shown in fig. 3d, the user clicks the display area of the second vertex), 4 input indicators may be displayed in the display area corresponding to the second vertex, as shown in fig. 3e.
Figs. 3e and 3f take the underline _ as an example of the input indicator.
Illustratively, the user may input the corresponding component at any of the positions; for example, a component may be entered at at least one of the 4 input indicators to obtain the input content. As shown in fig. 3f, the letter D is filled in at the second position, and the letter D is the input content.
Illustratively, the input content includes the input character and the position of that character within the second auxiliary information.
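The completion behaviour walked through in figs. 3a to 3f can be sketched as follows. This is a hedged illustration rather than the patent's implementation: a composition rule is modelled here as a list of slots, where "L" denotes a capital-letter slot (letter slots follow consecutive A-Z order, as in the figures) and any other entry is a literal character such as the superscript mark ':

```python
def complete_label(rule, pos, char):
    """Complete a vertex label from one user-entered character.

    rule: slot list, "L" for a capital-letter slot, anything else literal.
    pos:  1-based position of the entered character (must be a letter slot).
    char: the capital letter the user entered.
    """
    letter_slots = [i for i, kind in enumerate(rule) if kind == "L"]
    k = letter_slots.index(pos - 1)  # which letter slot the input fills
    out = []
    for i, kind in enumerate(rule):
        if kind == "L":
            # Letter slots are consecutive, so offset from the entered letter.
            out.append(chr(ord(char) + letter_slots.index(i) - k))
        else:
            out.append(kind)
    return "".join(out)

# Simple rule of figs. 3a-3c: a capital letter followed by '.
print(complete_label(["L", "'"], 1, "B"))            # B'
# Complex rule of figs. 3d-3f: entering D at the second position yields CD'E.
print(complete_label(["L", "L", "'", "L"], 2, "D"))  # CD'E
```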
The second implementation of step S24 includes the following steps a21 to a23.
Step A21: if a touch instruction is detected in the display area of the to-be-input auxiliary information corresponding to a vertex, that vertex is determined to be the second vertex.
Step A22: obtaining a preset position corresponding to the input content to be entered, the preset position being the position of the input content within the auxiliary information of the second vertex.
For example, the preset position may be set by the user, and the preset position may be any position of the auxiliary information of the second vertex.
Step A23: input content input by a user is obtained.
For example, a prompt message may be displayed to inform the user of the position within the second auxiliary information at which the to-be-entered input content will be placed.
Illustratively, the input content includes the input character and the position of that character within the second auxiliary information.
Step S25: generating second auxiliary information for the second vertex according to the input content and the composition rule.
For example, taking figs. 3a to 3c, the generated second auxiliary information of the second vertex is B', as shown in fig. 3g; taking figs. 3d to 3f, the generated second auxiliary information of the second vertex is CD'E, as shown in fig. 3h.
The present application provides an auxiliary information generation method and device, an electronic device, and a storage medium, which determine a target geometry, obtain first auxiliary information input for a first vertex of the target geometry, parse the first auxiliary information to determine its composition rule, obtain input content for a second vertex of the target geometry, and generate second auxiliary information for the second vertex according to the input content and the composition rule. In the present application, after auxiliary information has been annotated on one vertex of the determined target geometry, the user only needs to input part of the information at each remaining vertex and the rest is completed automatically; complete auxiliary information does not need to be entered at every other vertex, which increases the speed of annotating the vertices of the target geometry.
In an alternative implementation, there are multiple implementations of step S23; the embodiments of the present application provide, but are not limited to, the following two.
The first implementation of step S23 includes the following steps B11 to B12.
Step B11: acquiring an image of the display area of the first auxiliary information.
Illustratively, the display area of the auxiliary information of the vertices of the target geometry is preset. The display area in which the first auxiliary information is displayed can therefore be determined, and an image of that display area, i.e. an image containing the first auxiliary information, can be obtained.
Step B12: processing the image to obtain the composition rule of the first auxiliary information.
There are various ways of processing the image, that is, various implementations of step B12; the embodiments of the present application provide, but are not limited to, the following two.
The first implementation of step B12 includes: performing character recognition on the image to obtain a recognition result, and determining the composition rule of the first auxiliary information according to the recognition result.
For example, the image may be character-recognized using OCR (Optical Character Recognition) technology to obtain the constituent components of the first auxiliary information.
The second implementation of step B12 includes: processing the image with a parsing engine to obtain the composition rule of the first auxiliary information output by the parsing engine.
Illustratively, the parsing engine may be a pre-built character recognition model.
The character recognition model is obtained by training a machine learning model with a large number of sample images as input and the composition rules corresponding to those sample images as training targets. Each sample image contains auxiliary information.
The training of the character recognition model involves at least one of the machine learning technologies of artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning, among others.
By way of example, the character recognition model may be any one of: a neural network model, a logistic regression model, a linear regression model, a support vector machine (SVM), AdaBoost, XGBoost, or a Transformer-Encoder model.
The neural network model may be, for example, any one of a recurrent-neural-network-based model, a convolutional-neural-network-based model, and a Transformer-Encoder-based classification model.
By way of example, the character recognition model may be a deep hybrid model of a recurrent-neural-network-based model, a convolutional-neural-network-based model, and a Transformer-Encoder-based classification model.
By way of example, the character recognition model may be any of an attention-based deep model, a memory-network-based deep model, and a deep-learning-based short text classification model.
The deep-learning-based short text classification model is a recurrent neural network (RNN), a convolutional neural network (CNN), or a variant of either.
Illustratively, some simple domain adaptation may be performed on an already pre-trained model to obtain the character recognition model.
Exemplarily, "simple domain adaptation" includes, but is not limited to, secondary pre-training of an already pre-trained model on a large-scale unsupervised in-domain corpus, and/or compressing an already pre-trained model by model distillation.
The second implementation of step S23 includes the following steps B21 to B22.
Step B21: acquiring the first auxiliary information input by the user.
For example, if the first auxiliary information is input into the electronic device by the user, the electronic device may obtain the first auxiliary information directly, without analyzing an image containing the first auxiliary information.
Step B22: determining the composition rule according to the first auxiliary information.
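Step B22 can be sketched as follows, assuming, purely as an illustration, that a composition rule records for each position whether the component is a generic capital letter or a literal character:

```python
import re

def derive_rule(first_label: str):
    """Derive a composition rule from the user-entered first auxiliary
    information: capital letters become generic letter slots ("L");
    any other component (e.g. the superscript mark ') stays literal."""
    components = re.findall(r"[A-Z]|'", first_label)
    return ["L" if c.isupper() else c for c in components]

print(derive_rule("AB'C"))  # ['L', 'L', "'", 'L']
```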
In an alternative implementation, there are various implementations of step S25; the embodiments of the present application provide, but are not limited to, the following manner, which comprises steps C1 to C2.
Step C1: determining the content to be supplemented based on the composition rule.
Still taking figs. 3d to 3f as an example, if the input content is the component C at the second position of the second auxiliary information, the content to be supplemented is: B at the first position, ' at the third position, and D at the fourth position.
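Step C1 can be sketched as below. This is an assumption-laden illustration (letter slots in consecutive A-Z order, rule encoded as a slot list) rather than the patent's implementation:

```python
def to_supplement(rule, pos, char):
    """Return {position: component} for the components the user did not type.

    rule: slot list, "L" for a capital-letter slot (consecutive A-Z order),
          anything else a literal character; pos is 1-based.
    """
    letter_slots = [i for i, kind in enumerate(rule) if kind == "L"]
    k = letter_slots.index(pos - 1)
    missing = {}
    for i, kind in enumerate(rule):
        if i == pos - 1:
            continue  # the user already supplied this component
        if kind == "L":
            missing[i + 1] = chr(ord(char) + letter_slots.index(i) - k)
        else:
            missing[i + 1] = kind
    return missing

# Input C at the second position of the four-component rule of figs. 3d-3f:
print(to_supplement(["L", "L", "'", "L"], 2, "C"))  # {1: 'B', 3: "'", 4: 'D'}
```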
Step C2: combining the content to be supplemented with the input content according to the composition rule to obtain the second auxiliary information for the second vertex.
Still taking figs. 3d to 3f as an example, the second auxiliary information BC'D is obtained, since the ' at the third position is the superscript of the C at the second position.
For example, the preset position in the second implementation of step S24 may be the first position of the second auxiliary information. That is, the input content corresponds to the first-input component of the first auxiliary information, i.e. the input content is located at the first position of the second auxiliary information; the content to be supplemented corresponds to the non-first-input components of the first auxiliary information, i.e. the content to be supplemented consists of the components of the second auxiliary information at positions other than the first position.
Corresponding to the method embodiment, the embodiment of the application further provides an auxiliary information generating device, and a schematic structural diagram of the device is shown in fig. 4, which may include: a determining module 41, a first obtaining module 42, a parsing module 43, a second obtaining module 44 and a generating module 45, wherein:
A determining module 41 for determining a target geometry;
A first obtaining module 42, configured to obtain first auxiliary information input for a first vertex of the target geometry;
the parsing module 43 is configured to parse the first auxiliary information to determine a rule of formation of the first auxiliary information;
A second acquisition module 44 for acquiring input content for a second vertex of the target geometry;
a generating module 45, configured to generate second auxiliary information for the second vertex according to the input content and the composition rule.
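As an end-to-end illustration of how the parsing module and the generating module cooperate, the following self-contained sketch (an assumption-based simplification, with letter components in consecutive A-Z order as in figs. 3a to 3h; the class and method names are illustrative only) derives the composition rule from one first vertex's label and completes another vertex's label from a single entered character:

```python
import re

class AuxiliaryInfoGenerator:
    """Minimal sketch of the parsing and generating modules of fig. 4."""

    def parse(self, first_label: str):
        # Parsing module: capital letters become generic slots "L";
        # other characters (e.g. the superscript mark ') are kept literal.
        return ["L" if c.isupper() else c
                for c in re.findall(r"[A-Z]|'", first_label)]

    def generate(self, rule, pos, char):
        # Generating module: complete the second vertex's label from one
        # component entered at 1-based position `pos`.
        letter_slots = [i for i, kind in enumerate(rule) if kind == "L"]
        k = letter_slots.index(pos - 1)
        return "".join(
            chr(ord(char) + letter_slots.index(i) - k) if kind == "L" else kind
            for i, kind in enumerate(rule)
        )

gen = AuxiliaryInfoGenerator()
rule = gen.parse("AB'C")           # ['L', 'L', "'", 'L']
print(gen.generate(rule, 2, "D"))  # CD'E
```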
In an alternative implementation, the determining module includes:
a receiving unit for receiving a plurality of input line segments;
and a graph determining unit for determining the geometric figure formed by the plurality of line segments as the target geometry.
In an alternative implementation, the parsing module includes:
An image acquisition unit configured to acquire an image of a display area of the first auxiliary information;
and an obtaining rule unit for processing the image to obtain the composition rule of the first auxiliary information.
In an alternative implementation, the obtaining rule unit includes:
a recognition subunit for performing character recognition on the image to obtain a recognition result, and determining the composition rule of the first auxiliary information according to the recognition result;
or alternatively
and a parsing subunit for processing the image with a parsing engine to obtain the composition rule of the first auxiliary information output by the parsing engine.
In an alternative implementation, the composition rule of the first auxiliary information includes:
the constituent components of the first auxiliary information, and the relative positional relationships among the constituent components.
In an alternative implementation, the generating module includes:
a content determining unit for determining content to be supplemented based on the composition rule;
and a composition unit for combining the content to be supplemented with the input content according to the composition rule to obtain the second auxiliary information for the second vertex.
In an alternative implementation, the input content corresponds to the first-input component of the first auxiliary information, and the content to be supplemented corresponds to the non-first-input components of the first auxiliary information.
Corresponding to the method embodiment, the application further provides an electronic device, and a schematic structural diagram of the electronic device is shown in fig. 5, which may include: at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4;
In the embodiment of the present application, there is at least one of each of the processor 1, the communication interface 2, the memory 3 and the communication bus 4, and the processor 1, the communication interface 2 and the memory 3 communicate with each other through the communication bus 4.
The processor 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The memory 3 may comprise a high-speed RAM memory, and may also comprise a non-volatile memory, for example at least one disk memory.
Wherein the memory 3 stores a program, the processor 1 may call the program stored in the memory 3, the program being for:
Determining a target geometry;
Obtaining first auxiliary information input for a first vertex of the target geometry;
Analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
Obtaining input content for a second vertex of the target geometry;
Generating second auxiliary information for the second vertex according to the input content and the composition rule.
Alternatively, for the refinement and extension functions of the program, reference may be made to the description above.
The embodiment of the present application also provides a readable storage medium storing a program adapted to be executed by a processor, the program being configured to:
Determining a target geometry;
Obtaining first auxiliary information input for a first vertex of the target geometry;
Analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
Obtaining input content for a second vertex of the target geometry;
Generating second auxiliary information for the second vertex according to the input content and the composition rule.
Alternatively, for the refinement and extension functions of the program, reference may be made to the description above.
In an exemplary embodiment, a computer program product is also provided, which can be loaded directly into the internal memory of a computer, for example into the memory comprised by the server, and which contains software code; when the program is loaded into and executed by the computer, it can realize:
Determining a target geometry;
Obtaining first auxiliary information input for a first vertex of the target geometry;
Analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
Obtaining input content for a second vertex of the target geometry;
Generating second auxiliary information for the second vertex according to the input content and the composition rule.
Alternatively, for the refinement and extension functions of the program, reference may be made to the description above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices or units, and may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
It should be understood that in the embodiments of the present application, the claims, the various embodiments, and the features may be combined with each other, so as to solve the foregoing technical problems.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method of generating auxiliary information, the method comprising:
Determining a target geometry;
Obtaining first auxiliary information input for a first vertex of the target geometry;
Analyzing the first auxiliary information to determine a composition rule of the first auxiliary information;
Obtaining input content for a second vertex of the target geometry;
Generating second auxiliary information for the second vertex according to the input content and the composition rule;
The composition rule of the first auxiliary information includes:
the constituent components of the first auxiliary information, and the relative positional relationships among the constituent components;
the generating second auxiliary information for the second vertex according to the input content and the construction rule includes:
Determining content to be supplemented based on the composition rule;
and combining the content to be supplemented with the input content according to the composition rule to obtain second auxiliary information aiming at the second vertex.
2. The method of claim 1, wherein determining the target geometry comprises:
Receiving a plurality of input line segments;
And determining the geometric figure formed by the line segments as a target geometric figure.
3. The method of claim 1, wherein parsing the first auxiliary information comprises:
acquiring an image of a display area of the first auxiliary information;
and processing the image to obtain the composition rule of the first auxiliary information.
4. The method according to claim 3, wherein processing the image to obtain the composition rule of the first auxiliary information comprises:
performing character recognition on the image to obtain a recognition result; and determining the composition rule of the first auxiliary information according to the recognition result;
or alternatively
And processing the image based on an analysis engine to obtain the composition rule of the first auxiliary information output by the analysis engine.
5. The method of claim 1, wherein the input content corresponds to the first-input component of the first auxiliary information, and the content to be supplemented corresponds to the non-first-input components of the first auxiliary information.
6. An auxiliary information generating apparatus, the apparatus comprising:
A determining module for determining a target geometry;
a first acquisition module for acquiring first auxiliary information input for a first vertex of the target geometry;
The analysis module is used for analyzing the first auxiliary information to determine the composition rule of the first auxiliary information;
A second acquisition module for acquiring input content for a second vertex of the target geometry;
a generating module, configured to generate second auxiliary information for the second vertex according to the input content and the composition rule;
The composition rule of the first auxiliary information includes:
the constituent components of the first auxiliary information, and the relative positional relationships among the constituent components;
The generating module is specifically configured to:
Determining content to be supplemented based on the composition rule;
and combining the content to be supplemented with the input content according to the composition rule to obtain the second auxiliary information for the second vertex.
7. An electronic device, comprising:
a memory for storing a program;
a processor for calling and executing the program in the memory so as to perform, by executing the program, the respective steps of the auxiliary information generating method according to any one of claims 1-5.
8. A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the auxiliary information generating method according to any of claims 1-5.
CN202111086983.1A 2021-09-16 2021-09-16 Auxiliary information generation method and device, electronic equipment and storage medium Active CN113778281B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111086983.1A CN113778281B (en) 2021-09-16 2021-09-16 Auxiliary information generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111086983.1A CN113778281B (en) 2021-09-16 2021-09-16 Auxiliary information generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113778281A CN113778281A (en) 2021-12-10
CN113778281B true CN113778281B (en) 2024-06-21

Family

ID=78851399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111086983.1A Active CN113778281B (en) 2021-09-16 2021-09-16 Auxiliary information generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113778281B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163340A (en) * 2011-04-18 2011-08-24 宁波万里电子科技有限公司 Method for labeling three-dimensional (3D) dynamic geometric figure data information in computer system
CN106504181A (en) * 2015-09-08 2017-03-15 想象技术有限公司 For processing graphic processing method and the system of subgraph unit

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6978230B1 (en) * 2000-10-10 2005-12-20 International Business Machines Corporation Apparatus, system, and method for draping annotations on to a geometric surface
CN108345440A (en) * 2017-01-22 2018-07-31 亿度慧达教育科技(北京)有限公司 A kind of method and its device of the geometric figure auxiliary line of display addition
CN109976614B (en) * 2019-03-28 2021-04-06 广州视源电子科技股份有限公司 Method, device, equipment and medium for marking three-dimensional graph
CN112308946B (en) * 2020-11-09 2023-08-18 电子科技大学中山学院 Question generation method and device, electronic equipment and readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163340A (en) * 2011-04-18 2011-08-24 宁波万里电子科技有限公司 Method for labeling three-dimensional (3D) dynamic geometric figure data information in computer system
CN106504181A (en) * 2015-09-08 2017-03-15 想象技术有限公司 For processing graphic processing method and the system of subgraph unit

Also Published As

Publication number Publication date
CN113778281A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
US8958644B2 (en) Creating tables with handwriting images, symbolic representations and media images from forms
CN109685870B (en) Information labeling method and device, labeling equipment and storage medium
TWI464678B (en) Handwritten input for asian languages
CN111507330B (en) Problem recognition method and device, electronic equipment and storage medium
KR20130058053A (en) Method and apparatus for segmenting strokes of overlapped handwriting into one or more groups
JP6914260B2 (en) Systems and methods to beautify digital ink
US20140184610A1 (en) Shaping device and shaping method
US9159147B2 (en) Method and apparatus for personalized handwriting avatar
CN111488732B (en) Method, system and related equipment for detecting deformed keywords
CN115393872B (en) Method, device and equipment for training text classification model and storage medium
CN114730241B (en) Gesture and stroke recognition in touch user interface input
CN113673432A (en) Handwriting recognition method, touch display device, computer device and storage medium
US7911452B2 (en) Pen input method and device for pen computing system
CN106650720A (en) Method, device and system for network marking based on character recognition technology
CN110211032B (en) Chinese character generating method and device and readable storage medium
JP2017090998A (en) Character recognizing program, and character recognizing device
CN113778281B (en) Auxiliary information generation method and device, electronic equipment and storage medium
CN116311300A (en) Table generation method, apparatus, electronic device and storage medium
US7133556B1 (en) Character recognition device and method for detecting erroneously read characters, and computer readable medium to implement character recognition
KR101159323B1 (en) Handwritten input for asian languages
JP7410532B2 (en) Character recognition device and character recognition program
CN113536169B (en) Method, device, equipment and storage medium for typesetting characters of webpage
CN111949141B (en) Handwritten character input method and device, electronic equipment and storage medium
CN118230339A (en) Text recognition method and device and electronic equipment
CN116503870A (en) Character recognition method, character recognition device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant