CN112306231A - Method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting - Google Patents

Method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting

Info

Publication number
CN112306231A
Authority
CN
China
Prior art keywords
contour
highlight
user
pixel
selection
Prior art date
Legal status
Granted
Application number
CN202010979995.6A
Other languages
Chinese (zh)
Other versions
CN112306231B (en)
Inventor
Wan Huagen
Li Ting
Han Xiaoxia
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010979995.6A
Publication of CN112306231A
Application granted
Publication of CN112306231B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting, which comprises the following steps: (1) in the preparation stage, the system presents the scene to the user and configures a half-contour occlusion model for each candidate object; (2) in the calibration and highlight feedback stage, a dynamic calibration method based on a cone ray selects at most four objects to form a candidate set for selection confirmation, and half-contour highlight feedback is applied to these objects; (3) in the confirmation and result feedback stage, the user performs the gesture confirmation operation corresponding to the highlight form of the desired object, the system analyzes and recognizes the gesture to obtain the user's confirmation intention, matches the corresponding highlighted object, and feeds the selection result back to the user. With this method, the user can accurately express the selection intention through imprecise operations, which improves object selection performance and user experience.

Description

Method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting
Technical Field
The invention belongs to the technical field of human-computer interaction, and in particular relates to a mid-air bare-hand three-dimensional target selection method based on half-contour highlighting.
Background
The selection of a target object is one of the core technologies in human-computer interaction: before a target object can be manipulated, it must first be selected. With the emergence of novel interaction modes, natural human-computer interaction has come into view, and gesture interaction, as one important mode, has become a research hotspot in three-dimensional human-computer interaction because of its wide application value.
Among gesture techniques, bare-hand interaction is regarded as the "mouse" of the next era of interaction. Through bare-hand operation the user communicates with the computer by gestures to interact with a three-dimensional scene, for example manipulating objects in a virtual scene, drawing, and modeling.
Interaction with a three-dimensional scene mainly comprises selection, manipulation, navigation and system control; selection is the basis of all the other operations except navigation, so its importance to the whole interaction process is self-evident.
However, in bare-hand three-dimensional selection, the ambiguity of the bare-hand mode, namely the user's habit of imprecise operation, makes the user's intention difficult for the computer to recognize. Specifically: at the device level, the precision with which bare-hand data acquisition devices capture hand data is limited; at the interaction level, the Heisenberg effect means that when the user performs the selection confirmation, the hand motion causes unexpected jitter, so that the intended object is not selected; at the user level, the hands may tremble involuntarily during mid-air operation.
It is this gap between the input inaccuracy arising at the user, device and interaction levels and the precision required by the three-dimensional selection task itself that degrades the accuracy of three-dimensional object selection.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting, so as to improve the efficiency of bare-hand three-dimensional selection and the user experience.
A method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting comprises the following steps:
(1) a preparation stage: the system presents the scene to the user and configures a half-contour occlusion model for each candidate object; the user searches for the target object in the scene;
(2) a calibration and highlight feedback stage: a dynamic calibration method based on a cone ray selects at most four objects as a candidate set for selection confirmation, and half-contour highlight feedback is applied to these objects;
(3) a confirmation and result feedback stage: the user performs the gesture confirmation operation corresponding to the highlight form of the desired object; the system analyzes and recognizes the gesture to obtain the user's confirmation intention, matches the corresponding highlighted object and feeds the selection result back to the user; once the user receives this feedback, the current selection task ends.
In this method, highlighting is the link that closely couples three-dimensional selection with the bare-hand mode, so that the user can accurately express the selection intention, and hence complete the selection, through imprecise operations. The problem of inaccurate input at the device, user and interaction levels in bare-hand three-dimensional selection is thereby effectively mitigated or avoided, and the efficiency of bare-hand three-dimensional selection and the user experience are improved.
In step (2), the specific process of the dynamic calibration method based on the cone ray is as follows:
the user points in a direction with the calibration gesture to perform the calibration operation; the system analyzes the user's calibration direction and, using a cone ray, selects within a dynamic cone range along the pointing direction; if the selected objects include the target object, the user terminates the calibration, otherwise the calibration continues.
The half-contour highlight feedback is obtained by computing the object's half-contour edge, with the following specific steps:
(2-1) render the shape of the object into a buffer according to its position and size in the original scene, denoted Ori_Buffer;
(2-2) dilate the object shape in Ori_Buffer at the image level using Gaussian blur, adjust the softness of the highlight edge through the relevant parameters, and record the result into a buffer denoted Gauss_Buffer;
(2-3) subtract the original object shape from the blurred result pixel by pixel and record the result into a buffer denoted Contour_Buffer;
(2-4) compute a center line in the vertical or horizontal direction from the object shape buffer Ori_Buffer: for the upper and lower half-contours compute the center line of the object shape in the vertical direction, and for the left and right half-contours compute the center line in the horizontal direction. For the vertical direction, find the pixel vertical coordinates (y1, y2) of the object's highest and lowest points and take the mid-line coordinate (y1 + y2)/2, denoted C; for the horizontal direction, find the pixel horizontal coordinates (x1, x2) of the leftmost and rightmost points and take the mid-line coordinate (x1 + x2)/2, denoted C;
(2-5) screen the pixels of Contour_Buffer according to C: for the upper half-contour, keep the pixels above the vertical-direction center line, i.e. the pixels in Contour_Buffer whose vertical coordinate satisfies y > C; for the lower half-contour, keep the pixels below it, i.e. those with y < C; for the left half-contour, keep the pixels left of the horizontal-direction center line, i.e. those with x < C; for the right half-contour, keep the pixels right of it, i.e. those with x > C.
In step (2), three priorities are set to allocate the highlights of different objects, as follows:
The first priority is a smooth visual transition: the highlight's visual transition is kept smooth first; if an object's highlight in the previous frame is the upper half-contour and the object is still highlighted, its highlight in the next frame remains the upper half-contour.
The second priority is the non-occluded part: the highlight is preferentially assigned to the non-occluded half; if the left half of an object is occluded, the object is preferentially assigned a right half-contour highlight.
The third priority is harmony between the object's relative position and the highlighted half: a mapping is established between the relative position of the object and the four half-contour highlights, namely the upper, lower, left and right half-contours; if an object's relative position is on top, it is preferentially assigned the upper half-contour highlight.
In step (3), the gesture confirmation operation comprises two kinds of gestures: a dynamic instruction expression based on directional movement information and a static arc expression based on half-contour structure information.
Further, the dynamic instruction expression based on directional movement information expresses the four directions left, right, up and down by sliding left, right, up and down, respectively, thereby completing the confirmation. This mode maps highlight to operation through the motion process and is easy for users to learn and understand.
Further, the static arc expression based on half-contour structure information expresses the four directions up, down, left and right by drawing the upper, lower, left and right half-arcs, respectively, thereby completing the confirmation. This mode maps highlight to operation through the structural similarity between the drawn arc and the highlighted half-contour, and is likewise easy for users to learn and understand.
Through this exploration of mid-air bare-hand three-dimensional selection, the highlight design plays a pivotal role in the selection process: it effectively narrows the gap between the inaccurate input arising at the user, device and interaction levels and the precision required by the three-dimensional selection task. The highlight design makes full use of the computer's output to help the user complete imprecise bare-hand operations and thereby achieve the selection, improving both the efficiency of three-dimensional selection and the user experience. The half-contour highlight design provided by the invention satisfies the following requirements:
1. Determinacy of the highlight design: the user can clearly see which object among all the objects is selected. Like a full-contour highlight, the half-contour continues to describe the object by its outline and keeps the correlation between highlight and object, so each highlighted object retains its integrity and different objects can be distinguished by their highlights without departing from the user's established visual habits.
2. Distinguishability of the highlight design: the limited number of selected objects can be told apart through four different highlight display forms.
3. Suggestiveness of the highlight design: the half-contour highlight has an obvious directional meaning, so the confirmation process and the gesture for selecting an object can be designed around the expression of direction.
Drawings
FIG. 1 is a schematic diagram of the human-computer interaction process of the method of the present invention;
FIG. 2 is a schematic diagram of the half-contour highlight design;
FIG. 3 is a schematic diagram of the extended half-contour highlight design;
FIG. 4 is a schematic flow chart of the image-based post-processing half-contour highlighting algorithm;
FIG. 5 is a schematic diagram of the mid-air bare-hand selection method based on half-contour highlighting;
FIG. 6 is a schematic view of cone-ray calibration;
FIG. 7 is a schematic diagram of the dynamic instruction expression based on directional movement information;
FIG. 8 is a schematic diagram of the static arc expression based on half-contour structure information;
FIG. 9 is an example system flow diagram of the bare-hand selection method based on half-contour highlighting.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
The interaction process of the method of the present invention is shown in fig. 1. From the viewpoint of user input it comprises two major operations, calibration and confirmation; from the viewpoint of system output it comprises two major kinds of feedback, highlight feedback and result feedback. The user performs the calibration operation to complete an input, the computer collects and processes the input data and tells the user which object (or set of objects) is currently selected by highlighting it; after several interaction cycles the user finally calibrates the target object and the calibration process ends. Calibration and highlight feedback complement each other during this process: through continuous input and output, the user and the display system communicate with each other until the target object is calibrated. After calibration, the process enters the confirmation and result feedback link: the user inputs the confirmation operation, the system processes the input data and outputs the selection result to the user.
The highlight plays a bridging role in the selection process. On the one hand, it gives immediate feedback on the user's calibration operation, telling the user "this is the object you are pointing at"; on the other hand, it lets the user decide the next operation through the highlight: if the highlighted object is not the target object, calibration continues, and if it is, the confirmation operation is performed.
In addition, the highlight also tells the user how to perform the confirmation operation. As shown in fig. 2, the corresponding object is selected according to the position of the highlighted half-contour on the object. To select the peach-shaped object whose left half-contour is highlighted, a dynamic gesture instruction of sliding left, or a gesture instruction of drawing a left half-arc, can be used; to select the star-shaped object whose right half-contour is highlighted, a dynamic gesture instruction of sliding right or a gesture of drawing a right half-arc is used; to select the heart-shaped object whose upper half-contour is highlighted, a dynamic gesture instruction of sliding up or a gesture of drawing an upper half-arc is used; and to select the quadrilateral object whose lower half-contour is highlighted, a dynamic gesture instruction of sliding down or a gesture of drawing a lower half-arc is used.
When implementing the highlight display, the highlighted objects keep changing as calibration proceeds; if this were not handled, the user's cognition and experience would be seriously affected. The invention therefore sets the following three priorities to allocate the highlights of different objects (a code sketch of this allocation follows the list):
First, a smooth visual transition is prioritized, i.e. the highlight's visual transition is kept smooth. If an object's highlight in the previous frame is the upper half-contour and the object is still highlighted, its highlight in the next frame remains the upper half-contour.
Second, the non-occluded part is prioritized, i.e. for occluded objects the highlight is preferentially assigned to the non-occluded half. If the left half of an object is occluded, the object is preferentially assigned a right half-contour highlight.
Third, harmony between the relative position and the highlighted half is prioritized, i.e. a mapping is established between the relative position of the object and the four half-contour highlights, namely the upper, lower, left and right half-contours. If an object's relative position is on top, it is preferentially assigned the upper half-contour highlight.
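As an illustration of this three-level allocation rule, the following Python sketch assigns a distinct half-contour to each of up to four candidate objects. It is only a sketch under assumptions: the record fields (prev_half, occluded_halves, relative_position), the function names and the fallback behaviour are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Optional, Dict, List, Set

HALVES = ["top", "bottom", "left", "right"]

@dataclass
class Candidate:
    obj_id: int
    prev_half: Optional[str] = None          # half-contour used in the previous frame, if any
    occluded_halves: Set[str] = field(default_factory=set)
    relative_position: str = "top"           # position of the object within the candidate set

def assign_highlights(candidates: List[Candidate]) -> Dict[int, str]:
    """Assign a distinct half-contour to each of up to four candidates,
    applying the three priorities in order."""
    free = set(HALVES)
    result: Dict[int, str] = {}

    # Priority 1: smooth visual transition -- keep last frame's half if it is still free.
    for c in candidates:
        if c.prev_half in free:
            result[c.obj_id] = c.prev_half
            free.discard(c.prev_half)

    # Priorities 2 and 3 for the remaining candidates.
    for c in candidates:
        if c.obj_id in result:
            continue
        unoccluded = [h for h in free if h not in c.occluded_halves]
        if c.relative_position in unoccluded:
            half = c.relative_position       # non-occluded half that also matches the relative position
        elif unoccluded:
            half = unoccluded[0]             # priority 2: any non-occluded half
        else:
            half = next(iter(free))          # fall back to whatever half is left
        result[c.obj_id] = half
        free.discard(half)
    return result
```

With at most four candidates and four available halves, every object in the candidate set ends up with its own highlight form.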
If a denser scene makes selection difficult, for example when more than four objects completely occlude and overlap one another along the ray, an extension of the half-contour highlight design can be adopted for such extreme cases, allowing up to eight objects to be selected. As shown in fig. 3, the highlight form used during calibration is obtained by cutting the complete contour with four straight lines, including a vertical line, a horizontal line and the line y = x, into eight partial-contour highlight forms, and the confirmation operation is completed by expressing eight directions.
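As a small illustration of this eight-way extension, the sketch below classifies a contour pixel into one of eight 45-degree sectors around the object's center. It assumes the fourth cutting line is y = -x (the text explicitly names only the vertical line, the horizontal line and y = x); the function name is likewise an assumption.

```python
import math

def octant(px: float, py: float, cx: float, cy: float) -> int:
    """Return an index 0..7 identifying the 45-degree sector, bounded by the
    vertical line, the horizontal line, y = x and y = -x through the object
    center (cx, cy), that the contour pixel (px, py) falls into."""
    angle = math.degrees(math.atan2(py - cy, px - cx)) % 360.0
    return int(angle // 45.0)
```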
The half-contour highlight needs to achieve two effects: it outlines half of the object's contour, and the highlighted outline remains visible when the object is occluded. A concrete implementation can use image-based post-processing: the original scene is rendered through the normal rendering pipeline, the target object is rendered separately and its half-contour edge is computed at the image level, and finally the edge pixels are pasted back onto the original scene image. Taking the upper half-contour as an example, as shown in fig. 4, the computation of the object's half-contour edge is divided into five steps (a code sketch follows the list):
1. Render the shape of the object into a buffer according to its position and size in the original scene, denoted Ori_Buffer;
2. Dilate the object shape in Ori_Buffer at the image level; Gaussian blur is chosen so that the highlighted outline looks more natural, softer and more visually pleasing. Adjust the softness of the highlight edge through the relevant parameters and record the result into a buffer denoted Gauss_Buffer;
3. Subtract the original object shape from the blurred result pixel by pixel and record the result into a buffer denoted Contour_Buffer;
4. Compute the center line in the vertical or horizontal direction from the object shape buffer Ori_Buffer: for the upper and lower half-contours compute the center line of the object shape in the vertical direction, and for the left and right half-contours compute the center line in the horizontal direction. For the vertical direction, find the pixel vertical coordinates (y1, y2) of the object's highest and lowest points and take the mid-line coordinate (y1 + y2)/2, denoted C;
5. Screen the pixels of Contour_Buffer according to C: for the upper half-contour, keep the pixels above the center line, i.e. the pixels in Contour_Buffer whose vertical coordinate satisfies y > C.
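A minimal Python/OpenCV sketch of these five steps is shown below, assuming the object has already been rendered separately into an 8-bit mask (Ori_Buffer). The kernel size and sigma are illustrative values rather than parameters from the patent, and because image rows grow downward here, the text's y > C test appears as a row < C test.

```python
import cv2
import numpy as np

def half_contour(object_mask: np.ndarray, half: str = "top",
                 ksize: int = 15, sigma: float = 5.0) -> np.ndarray:
    """Compute a soft half-contour from a binary object mask (Ori_Buffer).

    object_mask: uint8 image, 255 where the separately rendered object covers a pixel.
    Returns a float image containing only the requested half of the outline.
    """
    ori = object_mask.astype(np.float32) / 255.0                 # Ori_Buffer
    gauss = cv2.GaussianBlur(ori, (ksize, ksize), sigma)         # Gauss_Buffer (dilated, soft shape)
    contour = np.clip(gauss - ori, 0.0, 1.0)                     # Contour_Buffer (outline only)

    ys, xs = np.nonzero(object_mask)
    if ys.size == 0:
        return np.zeros_like(contour)

    if half in ("top", "bottom"):
        c = (ys.min() + ys.max()) // 2                           # center line C (row index)
        rows = np.arange(contour.shape[0])[:, None]
        keep = rows < c if half == "top" else rows > c
    else:
        c = (xs.min() + xs.max()) // 2                           # center line C (column index)
        cols = np.arange(contour.shape[1])[None, :]
        keep = cols < c if half == "left" else cols > c

    return contour * keep                                        # screened half-contour pixels
```

The surviving pixels are then composited back over the normally rendered scene, which is what keeps the half-contour visible even when the object itself is occluded.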
In the mid-air bare-hand three-dimensional selection method based on half-contour highlighting, the selection process is divided into three stages, as shown in fig. 5.
(1) Preparation stage: the system presents the scene to the user and configures a half-contour occlusion model for each candidate object; the user looks for the target object (the cube in fig. 5) in the scene.
(2) Calibration and highlight feedback stage: a dynamic calibration method based on a cone ray is adopted. This method abandons fine positioning during calibration, so that the user can dynamically select the target object, or a candidate set containing it, through coarse operations; it also avoids the computational pressure of submitting the pointing ray and the objects to the selection system, which improves efficiency. The user points in a direction with the calibration gesture to perform the calibration operation; the system analyzes the user's calibration direction and, using a cone ray, selects within a dynamic cone range along the pointing direction. As shown in fig. 6, the cube is the target object and the spheres are distractors; the user's pointing direction is the central axis of the cone ray. The system dynamically adjusts the cone angle so that at most four objects fall within the cone and applies half-contour highlight feedback to them. If the selected objects include the target object, the user terminates the calibration; otherwise the calibration continues.
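A minimal sketch of such a cone-ray calibration step is shown below. It approximates each object by its center point and widens the cone half-angle on a fixed schedule; the angle values, the tie-breaking by angular distance and the function name are assumptions for illustration, not details disclosed in the patent.

```python
import numpy as np

def cone_select(origin: np.ndarray, direction: np.ndarray, centers: np.ndarray,
                max_count: int = 4, start_deg: float = 5.0,
                max_deg: float = 30.0, step_deg: float = 2.5) -> list:
    """Widen a selection cone around the pointing direction until it contains
    at least one object, then return at most max_count objects, preferring
    those closest to the cone axis.

    origin, direction: 3-vectors for the hand position and pointing direction.
    centers: (N, 3) array of candidate object centers.
    """
    d = direction / np.linalg.norm(direction)
    offsets = centers - origin
    dists = np.linalg.norm(offsets, axis=1)
    cosines = np.clip(offsets @ d / np.maximum(dists, 1e-9), -1.0, 1.0)
    angles = np.degrees(np.arccos(cosines))       # angular distance to the cone axis

    half_angle = start_deg
    while half_angle <= max_deg:
        inside = np.nonzero(angles <= half_angle)[0]
        if inside.size > 0:
            ordered = inside[np.argsort(angles[inside])]
            return ordered[:max_count].tolist()
        half_angle += step_deg                    # dynamically widen the cone
    return []                                     # nothing selectable in this direction
```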
(3) Confirmation and result feedback stage: the user performs the confirmation gesture corresponding to the highlight form of the desired object; the system then analyzes and recognizes the gesture to obtain the user's confirmation intention, matches the corresponding highlighted object, and feeds the selection result back to the user; after the user receives this feedback, the current selection task ends.
The invention designs two modes of gesture confirmation operation: a dynamic instruction expression based on directional movement information and a static arc expression based on half-contour structure information. Fig. 7 shows the dynamic instruction expression, which expresses the four directions left, right, up and down by sliding left, right, up and down, respectively, to complete the confirmation; this mode maps highlight to operation through the motion process and is easy for users to learn and understand. Fig. 8 shows the static arc expression, which expresses the same four directions by drawing the upper, lower, left or right half-arc. This expression is freer: for example, when drawing the left half-arc, the user may stroke from top to bottom or from bottom to top according to personal habit.
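The sketch below shows one plausible way to map a recorded confirmation trajectory to one of the four directions for each of the two expression modes. Both classifiers and the assumption that screen y grows downward are illustrative choices rather than algorithms disclosed in the patent.

```python
import numpy as np

def classify_swipe(traj) -> str:
    """Dynamic instruction expression: map a swipe trajectory (sequence of
    2D screen points) to 'left'/'right'/'up'/'down' by its dominant
    start-to-end displacement. Screen y is assumed to grow downward."""
    (x0, y0), (x1, y1) = np.asarray(traj[0], float), np.asarray(traj[-1], float)
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

def classify_arc(traj) -> str:
    """Static arc expression: decide which way a drawn half-arc bulges by
    comparing the mean of the points with the midpoint of the start-end chord,
    so the same arc is recognized whether it is drawn top-down or bottom-up."""
    pts = np.asarray(traj, dtype=float)
    bulge = pts.mean(axis=0) - (pts[0] + pts[-1]) / 2.0
    dx, dy = bulge
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```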
In the embodiment of the invention, an EPSON CB-X04 projector with a resolution of 1024 × 768 is used as the display device, providing an effective display area of 80 cm × 120 cm with the projection center 145 cm above the ground; a Leap Motion serves as the hand data acquisition device and is placed 80 cm above the ground; the user stands 200 cm from the display screen to perform the experiment.
Fig. 9 is an example system flow diagram of the mid-air bare-hand selection method based on half-contour highlighting in the embodiment of the present invention.
(1) Preparatory selection phase
The pre-processing module presents the scene to the user and configures a half-contour occlusion model for each candidate object.
(2) Calibration-highlight feedback stage
The user performs the object selection operation with gestures. First, calibration starts with the predefined calibration gesture, and the bare-hand data acquisition module collects the gesture data of the user's hand; then the gesture segmentation and recognition module recognizes the gesture and computes the calibration direction; next, the calibration module selects at most four objects as the candidate set for selection confirmation according to the cone-ray dynamic calibration method; finally, the highlight module allocates a highlight to each object in the candidate set according to the highlight priority allocation rules and renders them with the half-contour highlight rendering algorithm. In this stage the interaction cycle of 'user (understanding) - input (user gesture operation) - computer (computation and processing) - output (computer highlight feedback)' is repeated until the user has calibrated the target object; at that moment the gesture changes to the calibration-ending gesture, the highlights of the objects in the scene stop changing, and the confirmation and result feedback stage begins.
(3) Acknowledgement-result feedback phase
First, the bare-hand data acquisition module collects and records the hand trajectory data, and the feedback module displays the gesture trajectory to the user. Then the gesture segmentation and recognition module segments the gesture in real time from the recorded trajectory data, finding the gesture end point through real-time analysis of velocity and position information; this end point marks the completion of the user's confirmation operation. Next, the gesture segmentation and recognition module recognizes the gesture from the data in the segmented effective gesture section; two confirmation mechanisms, dynamic instruction expression and static arc expression, are designed according to the directional meaning of the highlight, and the recognition algorithm corresponding to the confirmation mechanism of the current selection method is invoked. After the gesture is recognized, the confirmation module matches the gesture meaning, the half-contour highlight and the corresponding object in the candidate set; finally, the result feedback module feeds the selection result back to the user, and the current selection operation ends.
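For the velocity-based endpoint detection mentioned above, a simple sketch follows; the stop-speed threshold, the hold duration and the function name are assumptions made for illustration, since the patent does not disclose concrete values.

```python
import numpy as np

def find_gesture_endpoint(positions, timestamps,
                          v_stop: float = 0.05, hold_frames: int = 5):
    """Detect the end of a confirmation gesture from the recorded hand
    trajectory: the gesture is considered finished once the hand speed stays
    below v_stop (m/s) for hold_frames consecutive samples.

    positions: (N, 3) hand positions in metres; timestamps: (N,) sample times in seconds.
    Returns the index into positions where the still period begins, or None.
    """
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    steps = np.diff(positions, axis=0)
    dts = np.maximum(np.diff(timestamps), 1e-6)
    speeds = np.linalg.norm(steps, axis=1) / dts    # per-sample speed

    consecutive = 0
    for i, v in enumerate(speeds):
        consecutive = consecutive + 1 if v < v_stop else 0
        if consecutive >= hold_frames:
            return i - hold_frames + 1              # first position of the still run
    return None
```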
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (7)

1. A method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting, characterized by comprising the following steps:
(1) a preparation stage: the system presents the scene to the user and configures a half-contour occlusion model for each candidate object; the user searches for the target object in the scene;
(2) a calibration and highlight feedback stage: a dynamic calibration method based on a cone ray selects at most four objects as a candidate set for selection confirmation, and half-contour highlight feedback is applied to these objects;
(3) a confirmation and result feedback stage: the user performs the gesture confirmation operation corresponding to the highlight form of the desired object; the system analyzes and recognizes the gesture to obtain the user's confirmation intention, matches the corresponding highlighted object and feeds the selection result back to the user; after the user receives this feedback, the current selection task ends.
2. The method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting according to claim 1, wherein in step (2) the specific process of the dynamic calibration method based on the cone ray is as follows:
the user points in a direction with the calibration gesture to perform the calibration operation; the system analyzes the user's calibration direction and, using a cone ray, selects within a dynamic cone range along the pointing direction; if the selected objects include the target object, the user terminates the calibration, otherwise the calibration continues.
3. The method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting according to claim 1, wherein in step (2) the half-contour highlight feedback is obtained by computing the object's half-contour edge, with the following specific steps:
(2-1) render the shape of the object into a buffer according to its position and size in the original scene, denoted Ori_Buffer;
(2-2) dilate the object shape in Ori_Buffer at the image level using Gaussian blur, adjust the softness of the highlight edge through the relevant parameters, and record the result into a buffer denoted Gauss_Buffer;
(2-3) subtract the original object shape from the blurred result pixel by pixel and record the result into a buffer denoted Contour_Buffer;
(2-4) compute a center line in the vertical or horizontal direction from the object shape buffer Ori_Buffer: for the upper and lower half-contours compute the center line of the object shape in the vertical direction, and for the left and right half-contours compute the center line in the horizontal direction; for the vertical direction, find the pixel vertical coordinates (y1, y2) of the object's highest and lowest points and take the mid-line coordinate (y1 + y2)/2, denoted C; for the horizontal direction, find the pixel horizontal coordinates (x1, x2) of the leftmost and rightmost points and take the mid-line coordinate (x1 + x2)/2, denoted C;
(2-5) screen the pixels of Contour_Buffer according to C: for the upper half-contour, keep the pixels above the vertical-direction center line, i.e. the pixels in Contour_Buffer whose vertical coordinate satisfies y > C; for the lower half-contour, keep the pixels below it, i.e. those with y < C; for the left half-contour, keep the pixels left of the horizontal-direction center line, i.e. those with x < C; for the right half-contour, keep the pixels right of it, i.e. those with x > C.
4. The method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting according to claim 1, wherein in step (2) three priorities are set to allocate the highlights of different objects, specifically as follows:
the first priority is a smooth visual transition: the highlight's visual transition is kept smooth first; the second priority is the non-occluded part: the highlight is preferentially assigned to the non-occluded half; the third priority is harmony between the relative position and the highlighted half: a mapping is established between the relative position of the object and the four half-contour highlights, namely the upper, lower, left and right half-contours.
5. The method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting according to claim 1, wherein in step (3) the gesture confirmation operation comprises two kinds of gestures: a dynamic instruction expression based on directional movement information and a static arc expression based on half-contour structure information.
6. The method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting according to claim 5, wherein the dynamic instruction expression based on directional movement information expresses the four directions left, right, up and down by sliding left, right, up and down, respectively, thereby completing the confirmation.
7. The method for mid-air bare-hand three-dimensional target selection based on half-contour highlighting according to claim 5, wherein the static arc expression based on half-contour structure information expresses the four directions up, down, left and right by drawing the upper, lower, left and right half-arcs, respectively, thereby completing the confirmation.
CN202010979995.6A 2020-09-17 2020-09-17 Method for selecting high-altitude free-hand three-dimensional target based on half-contour highlight Active CN112306231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010979995.6A CN112306231B (en) 2020-09-17 2020-09-17 Method for selecting high-altitude free-hand three-dimensional target based on half-contour highlight

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010979995.6A CN112306231B (en) 2020-09-17 2020-09-17 Method for selecting high-altitude free-hand three-dimensional target based on half-contour highlight

Publications (2)

Publication Number Publication Date
CN112306231A (en) 2021-02-02
CN112306231B CN112306231B (en) 2021-11-30

Family

ID=74483464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010979995.6A Active CN112306231B (en) 2020-09-17 2020-09-17 Method for selecting high-altitude free-hand three-dimensional target based on half-contour highlight

Country Status (1)

Country Link
CN (1) CN112306231B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104777997A * 2014-01-13 2015-07-15 LG Electronics Inc. Display apparatus and method for operating the same
US20160371821A1 * 2014-03-28 2016-12-22 Fujifilm Corporation Image processing device, imaging device, image processing method, and program
CN105659592A * 2014-09-22 2016-06-08 Samsung Electronics Co., Ltd. Camera system for three-dimensional video
CN109271023A * 2018-08-29 2019-01-25 Zhejiang University Selection method based on freehand gesture expression of three-dimensional object appearance contours
CN110837326A * 2019-10-24 2020-02-25 Zhejiang University Three-dimensional target selection method based on object attribute progressive expression

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wan Huagen, "Freehand gesture interaction method for three-dimensional scene modeling", Transactions of Beijing Institute of Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112915530A * 2021-04-06 2021-06-08 Tencent Technology (Shenzhen) Co., Ltd. Virtual article selection method, device, equipment and medium
CN112915530B 2021-04-06 2022-11-25 Tencent Technology (Shenzhen) Co., Ltd. Virtual article selection method, device, equipment and medium

Also Published As

Publication number Publication date
CN112306231B (en) 2021-11-30

Similar Documents

Publication Publication Date Title
US20220084279A1 (en) Methods for manipulating objects in an environment
US10958891B2 (en) Visual annotation using tagging sessions
CN112509151B (en) Method for generating sense of reality of virtual object in teaching scene
US11488380B2 (en) Method and apparatus for 3-D auto tagging
US6624833B1 (en) Gesture-based input interface system with shadow detection
JP2021193599A (en) Virtual object figure synthesizing method, device, electronic apparatus, and storage medium
CN106598227B (en) Gesture identification method based on Leap Motion and Kinect
Segen et al. Shadow gestures: 3D hand pose estimation using a single camera
CN111294665B (en) Video generation method and device, electronic equipment and readable storage medium
CN110825245A (en) System and method for three-dimensional graphical user interface with wide usability
CN110392251B (en) Dynamic projection method and system based on virtual reality
CN110688948A (en) Method and device for transforming gender of human face in video, electronic equipment and storage medium
KR20200107957A (en) Image processing method and device, electronic device and storage medium
CN110837326B (en) Three-dimensional target selection method based on object attribute progressive expression
CN112306231B (en) Method for selecting high-altitude free-hand three-dimensional target based on half-contour highlight
CN104820584B (en) Construction method and system of 3D gesture interface for hierarchical information natural control
CN112488059B (en) Spatial gesture control method based on deep learning model cascade
CN108401452B (en) Apparatus and method for performing real target detection and control using virtual reality head mounted display system
KR101743888B1 (en) User Terminal and Computer Implemented Method for Synchronizing Camera Movement Path and Camera Movement Timing Using Touch User Interface
CN109960766A (en) For the visualization presentation of network structure data and exchange method under immersive environment
CN110688012B (en) Method and device for realizing interaction with intelligent terminal and vr equipment
Leubner et al. Computer-vision-based human-computer interaction with a back projection wall using arm gestures
CN106951087A (en) A kind of exchange method and device based on virtual interacting plane
CN113434046A (en) Three-dimensional interaction system, method, computer device and readable storage medium
CN111273773B (en) Man-machine interaction method and system for head-mounted VR environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant