WO2022089043A1 - 显示设备、几何图形识别方法及多图层叠加显示方法 - Google Patents

显示设备、几何图形识别方法及多图层叠加显示方法 Download PDF

Info

Publication number
WO2022089043A1
Authority
WO
WIPO (PCT)
Prior art keywords
hand-drawn
display
layer
display device
Prior art date
Application number
PCT/CN2021/117796
Other languages
English (en)
French (fr)
Inventor
李保成
王敏
张振宝
曹颖
刘加山
于洪
Original Assignee
海信视像科技股份有限公司 (Hisense Visual Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202011188310.2A (CN112181207B)
Priority claimed from CN202011528031.6A (CN112672199B)
Priority claimed from CN202110171543.XA (CN112799627B)
Application filed by 海信视像科技股份有限公司 (Hisense Visual Technology Co., Ltd.)
Priority to CN202180066094.0A (CN116324689A)
Publication of WO2022089043A1
Priority to US18/157,324 (US11984097B2)


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14 Display of multiple viewports
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1601 Constructional details related to the housing of computer displays, e.g. of CRT monitors, of flat displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2200/00 Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F2200/16 Indexing scheme relating to G06F1/16 - G06F1/18
    • G06F2200/161 Indexing scheme relating to constructional details of the monitor
    • G06F2200/1614 Image rotation following screen orientation, e.g. switching from landscape to portrait mode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 Indexing scheme relating to G06F3/048
    • G06F2203/04804 Transparency, e.g. transparent or translucent windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 Aspects of interface with display user

Definitions

  • the present application relates to display device technology, and in particular, to a display device, a geometric figure recognition method, and a multi-layer overlay display method.
  • a display device with a touch function is usually installed with a "presentation whiteboard" application.
  • the display can present a drawing area, and the user can draw in the drawing area through sliding touch commands.
  • as the user slides, a specific touch action trajectory is obtained; the controller determines the touch action pattern from the touch actions detected by the touch component, and controls the display to display it in real time to achieve the demonstration effect.
  • Some embodiments of the present application provide a display device including a display, an input/output interface, and a controller.
  • the display is configured to display a user interface;
  • the input/output interface is configured to connect an input device;
  • the controller is configured to perform the following program steps: obtaining a hand-drawn graphic trajectory input by the user through the input/output interface; traversing the coordinates of each hand-drawn point in the hand-drawn graphic trajectory to obtain a first characteristic direction, where the first characteristic direction is the direction of the connecting line when the positional relationship between at least two hand-drawn points in the hand-drawn graphic trajectory satisfies a preset positional relationship; detecting the angle between the first characteristic direction and a preset judgment direction; rotating the hand-drawn graphic trajectory according to the angle, so that the first characteristic direction is parallel to the preset judgment direction; traversing the coordinates of each hand-drawn point in the rotated hand-drawn graphic trajectory to obtain a second characteristic direction, the second characteristic direction being a direction that satisfies a preset geometric relationship with the first characteristic direction; drawing a standard geometric figure according to the first characteristic direction and the second characteristic direction; and rotating the standard geometric figure according to the angle.
  • Some embodiments of the present application provide a geometric figure recognition method, which is applied to a display device, where the display device includes a display and a controller and also has a built-in or external input device. The method includes: acquiring a hand-drawn graphic trajectory input by a user; traversing the coordinates of each hand-drawn point in the hand-drawn graphic trajectory to obtain a first characteristic direction, where the first characteristic direction is the direction of the connecting line when the positional relationship between at least two hand-drawn points in the hand-drawn graphic trajectory satisfies a preset positional relationship; detecting the angle between the first characteristic direction and a preset judgment direction; rotating the hand-drawn graphic trajectory according to the angle, so that the first characteristic direction is parallel to the preset judgment direction; traversing the coordinates of each hand-drawn point in the rotated hand-drawn graphic trajectory to obtain a second characteristic direction, the second characteristic direction being a direction that satisfies a preset geometric relationship with the first characteristic direction; drawing a standard geometric figure according to the first characteristic direction and the second characteristic direction; and rotating the standard geometric figure according to the angle.
  • Some embodiments of the present application provide a display device, including: a display, a touch control component, and a controller.
  • the display is configured to display a user interface
  • the touch component is configured to detect the touch track input by the user
  • the controller is configured to perform the following program steps: acquiring the touch track pattern in a first layer, and acquiring the background pattern in a second layer, the second layer being the layer located one layer below the first layer; performing, according to the background pattern, an interpolation operation on the touch track pattern to generate a conversion pattern, the resolution of the conversion pattern being equal to the resolution of the background pattern; and superimposing the conversion pattern and the background pattern to control the display to display the superimposed result in real time.
  • Some embodiments of the present application further provide a multi-layer overlay display method.
  • the multi-layer overlay display method is applied to a display device, and the display device includes a display, a touch component, and a controller, where the touch component is configured to detect a touch track input by a user. The multi-layer overlay display method includes: acquiring a touch track pattern in a first layer, and acquiring a background pattern in a second layer, where the second layer is the layer located one layer below the first layer; performing, according to the background pattern, an interpolation operation on the touch track pattern to generate a conversion pattern, the resolution of the conversion pattern being equal to the resolution of the background pattern; and superimposing the conversion pattern and the background pattern to control the display to display the superimposed result in real time.
  • Some embodiments of the present application further provide a display device, including: a display; a touch component configured to detect a touch track input by a user; and a controller configured to: when the display is in a second rotation state, draw a response track on a backup image of an original image according to the touch coordinates, in a first rotation state, corresponding to the touch track, where the original image is the image, in the first rotation state, corresponding to the image displayed on the display before the touch track is detected; and update the image displayed on the display according to the drawn image.
  • the present application further provides a multi-layer overlay display method, the method including: detecting a touch track input by a user; when the display is in a second rotation state, drawing a response trajectory on a backup image of an original image according to the touch coordinates in the first rotation state corresponding to the touch track, where the original image is the image in the first rotation state corresponding to the image displayed on the display before the touch trajectory is detected; and updating the image displayed by the display according to the drawn image.
  • FIG. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to one or more embodiments of the present application;
  • FIG. 2 is a block diagram of a hardware configuration of a display device 200 according to one or more embodiments of the present application;
  • FIG. 3 is a block diagram of the hardware configuration of the control device 100 according to one or more embodiments of the present application;
  • FIG. 4 is a schematic diagram of software configuration in a display device 200 according to one or more embodiments of the present application.
  • FIG. 5 is a schematic diagram showing an icon control interface of an application in a display device 200 according to one or more embodiments of the present application;
  • FIG. 6A is a schematic interface diagram of an electronic whiteboard application according to one or more embodiments of the present application.
  • FIG. 6B is a schematic diagram of layer stacking according to one or more embodiments of the present application.
  • FIGS. 7A-7B are schematic diagrams of drawing geometric figures according to one or more embodiments of the present application.
  • FIG. 9 is a schematic diagram of drawing a geometric figure according to one or more embodiments of the present application.
  • FIG. 10 is a schematic diagram of extreme points according to one or more embodiments of the present application.
  • FIGS. 13-15 are schematic diagrams of multi-layer overlay according to one or more embodiments of the present application.
  • FIG. 16 is a schematic diagram of coordinate system conversion according to one or more embodiments of the present application.
  • FIG. 17 is a schematic diagram of a vertical screen state of a display device according to one or more embodiments of the present application.
  • FIG. 18 is a schematic diagram of display rotation according to one or more embodiments of the present application.
  • FIGS. 19-20 are schematic interface diagrams of an electronic whiteboard application according to one or more embodiments of the present application.
  • FIG. 1 is a schematic diagram of an operation scenario between a display device and a control device according to one or more embodiments of the present application.
  • a user can operate the display device 200 through a mobile terminal 300 and the control device 100 .
  • the control device 100 may be a remote control, and communication between the remote control and the display device includes infrared protocol communication, Bluetooth protocol communication, and other wireless or wired ways to control the display device 200.
  • the user can control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and the like.
  • mobile terminals, tablet computers, computers, notebook computers, and other smart devices may also be used to control the display device 200 .
  • the mobile terminal 300 and the display device 200 may each install a software application, so as to implement connection and communication through a network communication protocol, achieving one-to-one control operation and data communication.
  • the audio and video content displayed on the mobile terminal 300 may also be transmitted to the display device 200 to realize a synchronous display function.
  • the display device 200 also performs data communication with the server 400 through various communication methods.
  • the display device 200 may communicate via a local area network (LAN), a wireless local area network (WLAN), or other networks.
  • the server 400 may provide various contents and interactions to the display device 200 .
  • the display device 200 may be a liquid crystal display, an OLED display, or a projection display device.
  • in addition to the broadcast TV receiving function, the display device 200 may additionally provide a smart network TV function with computer support.
  • FIG. 3 exemplarily shows a configuration block diagram of the control device 100 according to an exemplary embodiment.
  • the control device 100 includes a controller 110 , a communication interface 130 , a user input/output interface 140 , a memory, and a power supply.
  • the control device 100 can receive the user's input operation instruction, and convert the operation instruction into an instruction that the display device 200 can recognize and respond to, and play an intermediary role between the user and the display device 200 .
  • the communication interface 130 is used for external communication, and includes at least one of a WIFI chip, a Bluetooth module, NFC or an alternative module.
  • the user input/output interface 140 includes at least one of a microphone, a touchpad, a sensor, a key or an alternative module.
  • FIG. 2 is a block diagram showing the hardware configuration of the display device 200 according to an exemplary embodiment.
  • the display device 200 includes a tuner 210 , a communicator 220 , a detector 230 , an external device interface 240 , a controller 250 , a display 260 , an audio output interface 270 , a memory, a power supply, and a user interface 280 .
  • the controller includes a central processing unit, a video processing unit, an audio processing unit, a graphics processing unit, a RAM, a ROM, and a first interface to an nth interface for input/output.
  • the display 260 may be at least one of a liquid crystal display, an OLED display, a touch display, and a projection display, and may also be a projection device and a projection screen.
  • the tuner-demodulator 210 receives broadcast television signals by wired or wireless reception, and demodulates audio/video signals and data signals such as EPG data from among a plurality of wireless or wired broadcast television signals.
  • the detector 230 is used to collect external environment or external interaction signals.
  • the controller 250 and the tuner 210 may be located in different separate devices, that is, the tuner 210 may also be located in an external device of the main device where the controller 250 is located, such as an external set-top box.
  • the controller 250 controls the operation of the display device and responds to user operations.
  • the controller 250 controls the overall operation of the display apparatus 200 .
  • a user may input a user command on a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the GUI.
  • the user may input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through a sensor to receive the user input command.
  • a "user interface” is a medium interface for interaction and information exchange between an application program or an operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user.
  • the commonly used form of user interface is the graphical user interface (GUI), which refers to a user interface, displayed in a graphical manner, related to computer operations. It may consist of interface elements such as icons, windows, and controls displayed on the display screen of the electronic device, where the controls may include at least one of visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
  • FIG. 4 is a schematic diagram of software configuration in the display device 200 according to one or more embodiments of the present application.
  • in some embodiments, the system is divided into four layers, which are, from top to bottom, the applications (Applications) layer (the "application layer"), the application framework (Application Framework) layer (the "framework layer"), the Android runtime and system library layer (the "system runtime layer"), and the kernel layer.
  • the kernel layer contains at least one of the following drivers: an audio driver, a display driver, a Bluetooth driver, a camera driver, a WIFI driver, a USB driver, an HDMI driver, sensor drivers (such as fingerprint sensor, temperature sensor, and pressure sensor drivers), a power driver, etc.
  • FIG. 5 is a schematic diagram showing an icon control interface of an application in a display device 200 according to one or more embodiments of the present application.
  • the application layer includes at least one application whose corresponding icon control can be displayed on the display, such as: a live TV application icon control, a video-on-demand application icon control, a media center application icon control, an application center icon control, a game application icon control, etc.
  • a live TV application can provide live TV from different sources.
  • a video-on-demand application can provide video from different storage sources. Unlike the live TV application, video on demand provides display of video from certain storage sources.
  • a media center application can provide playback of a variety of multimedia content. The application center can provide storage of various applications.
  • an electronic whiteboard application can be installed on the display device.
  • the user can perform operations such as writing and drawing lines, and the display device can generate a touch track according to the user's touch action, so as to realize whiteboard presentation or entertainment functions.
  • FIG. 6A is a schematic interface diagram of an electronic whiteboard application according to one or more embodiments of the present application.
  • the application interface of the electronic whiteboard may be provided with a toolbar area T and a drawing area D, where the toolbar area T can display tool controls.
  • the drawing area D can be a rectangular area, and the user can draw graphics in the drawing area D.
  • the area other than the toolbar area T may be the drawing area D; alternatively, the drawing area D may be a smaller region within the area other than the toolbar area T, in which case the drawing area D can display a frame to prompt the user to draw within the frame.
  • the display device 200 may display the hand-drawing process by superimposing multiple layers.
  • the display device 200 can use one layer to display, in real time, the sliding touch action trajectory corresponding to the user's hand drawing, and another layer to display the presentation whiteboard interface; the picture finally presented on the display 260 is the superposition of these two layers.
  • the layer used for displaying the touch track pattern in real time is called the first layer
  • the layer used for displaying the whiteboard interface is called the second layer.
  • the layers that can be presented by the display device 200 not only include the above two layers, but may also include other layers for displaying different picture contents.
  • the display device 200 may include three layers: a first layer, the group-of-pictures (GOP) layer; a second layer, the on-screen display (OSD) menu layer; and a third layer, the video (Video) layer.
  • the GOP layer, also called the GOP2 layer or the acceleration layer, can be used to display temporarily drawn content above the menu layer.
  • the OSD layer, also known as the middle layer or menu layer, is used to display the application interface, application menu, toolbar, etc.
  • the Video layer, also known as the bottom layer, can generally be used to display the picture content corresponding to the external signal connected to the TV.
  • a hierarchical relationship can be set between different layers to achieve a specific display effect.
  • the hierarchical relationship of the GOP layer, the OSD layer, and the Video layer can be, in order: GOP layer - OSD layer - Video layer; that is, the Video layer is displayed at the bottom to show the content of the external signal, the OSD layer is displayed above the Video layer so that the application menu can be displayed on top of the external signal picture, and the GOP layer is displayed above the OSD layer, so that graphics drawn by the user's input can be highlighted.
  • the display device 200 can update the drawn pattern to the OSD layer for display, and continue to display other touch track content through the GOP layer.
  • such a display mode enables the pattern generated by a new drawing action to cover the pattern generated by a previous drawing action, so as to match the user's operating habits.
  • the representation form of the patterns can be the ARGB form, that is, transparency information added on the basis of the traditional RGB form, so as to facilitate the overlay of multiple layers.
  • in the first layer, the part drawn by the brush is a specific touch track pattern, and the remaining part is a completely transparent pattern, so that the parts not drawn by the user do not obscure the content in the underlying layers. Based on the above multiple layers, the display device 200 can therefore present the final picture according to the specific pattern content and transparency in each layer.
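To make the layer stacking concrete, here is a minimal sketch of the standard "over" compositing rule for one straight (non-premultiplied) ARGB pixel; the function name and the 0-255 channel convention are illustrative assumptions, not taken from the patent:

```python
def over(fg, bg):
    """Composite a foreground ARGB pixel of the drawing layer over the background."""
    fa, fr, fgc, fb = fg
    ba, br, bgc, bb = bg
    a_f, a_b = fa / 255.0, ba / 255.0
    a_out = a_f + a_b * (1.0 - a_f)
    if a_out == 0.0:                       # both pixels fully transparent
        return (0, 0, 0, 0)

    def ch(f, b):                          # blend one colour channel
        return round((f * a_f + b * a_b * (1.0 - a_f)) / a_out)

    return (round(a_out * 255), ch(fr, br), ch(fgc, bgc), ch(fb, bb))

# A fully transparent GOP pixel (0x00000000) leaves the OSD pixel unchanged:
assert over((0, 0, 0, 0), (255, 10, 20, 30)) == (255, 10, 20, 30)
```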
  • FIGS. 7A-7B are schematic diagrams of drawing geometric figures according to one or more embodiments of the present application.
  • Geometric figure recognition means that the display device 200 recognizes patterns similar to hand-drawn patterns by performing graphical analysis on patterns drawn by the user.
  • the process of recognizing a standard geometric figure is shown in FIG. 7A.
  • the pattern drawn by the user may be input through a touch screen, or through another input device 500 such as a mouse, a drawing tablet, a motion-sensing controller, and the like.
  • the user can generate a hand-drawn graphic trajectory in a specified interface by inputting an action, and the display device 200 then recognizes the input hand-drawn graphic trajectory to determine a standard geometric figure similar to the hand-drawn graphic trajectory.
  • the display device 200 can realize the input of the hand-drawn figure track and the recognition of the geometric figures by running a specific application program.
  • the standard geometric figures are a series of figure types determined according to preset identification rules, including but not limited to polygons, circles, ellipses, and the like.
  • the recognition priority can be set as "polygon > circle > ellipse"; that is, when the hand-drawn graphic trajectory input by the user is close to both a polygon and an ellipse, the polygon is used as the recognition result.
  • the standard graphic can be identified by analyzing the characteristics of the user's hand-drawn graphic trajectory to determine the standard geometric figure type corresponding to the trajectory, and then determining the parameters of the standard geometric figure according to some parameters in the hand-drawn graphic trajectory input by the user, so as to generate the corresponding standard geometric figure.
  • for example, when the hand-drawn graphic trajectory input by the user shows arc transitions everywhere and the radian of the arc varies within a certain threshold range, it can be recognized that the trajectory input by the user may be a circle; the distance between the center of the figure and each hand-drawn point in the trajectory is then measured, the average of these distances is calculated to obtain the radius of the circle, and a standard circle is generated according to that radius.
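A sketch of this circle step follows; the function and variable names are illustrative, and taking the bounding-box midpoint as the figure centre is an assumption consistent with the centre-point formulas given later in this section:

```python
import math

def fit_circle(points):
    """Fit a standard circle to a hand-drawn trajectory given as (x, y) tuples."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2   # figure centre
    # average centre-to-point distance gives the radius of the standard circle
    radius = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    return (cx, cy), radius
```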
  • the hand-drawn graphic trajectory may be composed of a plurality of hand-drawn points, and each hand-drawn point may correspond to a unique position coordinate according to its position in the interface.
  • the relative positional relationship between hand-drawn points can be determined according to the position coordinates. For example, the relative distance between two hand-drawn points can be calculated from their position coordinates, and the orientation relationship between two hand-drawn points can be determined by comparing their coordinate values.
  • from the orientation relationships among a plurality of hand-drawn points, it can also be determined whether the hand-drawn points are in a continuous state in a certain area, and characteristic information such as the radian and angle of that continuous state can further be determined.
  • for example, a polygon has multiple vertices, and the hand-drawn points at the vertices have the characteristics of a corner shape; the radian changes of corresponding parts of a circular trajectory tend to be consistent; and an ellipse's radian has corresponding change relationships at the positions of the long axis and the short axis, etc.
  • a feature matching table can be established in the application program. After the user inputs the hand-drawn graphic trajectory, the features identified in the trajectory are matched against the feature list, thereby determining the standard geometric figure corresponding to the current graphic trajectory.
  • auxiliary shapes suitable for the figures can also be determined according to the hand-drawn figure trajectory input by the user, so as to limit the generation area of the figures.
  • for example, for an ellipse, a rectangular area can be determined according to the minimum and maximum coordinate values of the hand-drawn points in each direction (x-axis and y-axis) of the trajectory input by the user; the long side of the rectangular area is taken as the long axis of the ellipse, and the short side as the short axis.
  • in this way, a standard ellipse pattern can be generated in the rectangular area.
  • however, this method is only applicable when the user-drawn graphic is in the forward (axis-aligned) state.
  • that is, the user must keep the long axis of the ellipse parallel to the horizontal direction while drawing it.
  • this requirement of a forward state increases the difficulty of hand drawing for users, which severely limits the application scenarios of pattern recognition.
  • otherwise, the geometric figure identified from the coordinate values differs too much from the figure the user intended to input, which reduces the recognition accuracy of the geometric figure.
  • some embodiments of the present application provide a display device and a method for recognizing geometric figures, which can be used to detect a trajectory input by a user during a hand-drawn presentation, so as to convert the hand-drawn action trajectory into a standard geometric figure.
  • FIG. 8 is a flowchart of geometric figure recognition according to one or more embodiments of the present application
  • FIG. 9 is a schematic diagram of drawing a geometric figure according to one or more embodiments of the present application.
  • as shown in FIGS. 8 and 9, the display device 200 may include a display 275 and a controller 250, the display device 200 also having a built-in or external input device 500, and the method includes the following steps:
  • the controller 250 of the display device 200 may acquire the hand-drawn figure track input by the user from the input device 500 .
  • the hand-drawn graphic trajectory is a data set composed of coordinates of a plurality of hand-drawn points.
  • a user can input a drawing action through the built-in touch component or an external input device 500; the drawing action generates a voltage change on the touch component or the input device 500, and this voltage change can be detected, transmitted, and stored, so as to realize detection of the hand-drawn points.
  • the touch component or the input device 500 then converts the detected hand-drawn point data into input data that can be recognized by the controller 250 .
  • depending on the input device, the method of detecting the drawing action input by the user also differs.
  • the built-in touch component of the display device 200 can form a touch screen with the display 275, and the touch component can detect the position of the user's touch point, and then detect the hand-drawn graphic track input by the user.
  • the input device 500 may be a peripheral device such as a mouse.
  • as the mouse moves, the cursor on the interface of the display device 200 moves with it.
  • mouse click events can be detected, such as pressing and releasing the left mouse button; the positions through which the cursor moves between the two click events are detected to determine the cursor's position data, thereby realizing detection of the hand-drawn graphic trajectory input by the user.
  • the input drawing action can be detected according to the start time and end time of a drawing. For example, when a user performs a drawing action through a finger touch operation, the drawing action starts when the finger first touches the touch screen and ends when the finger leaves the touch screen.
  • during this period, the coordinates of the touched position points constitute the hand-drawn graphic trajectory input by the user.
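A minimal sketch of this collection step, assuming hypothetical touch-down/move/up callbacks (the event names are mine, not the patent's):

```python
class TrajectoryRecorder:
    """Collects hand-drawn points between a touch-down and a touch-up event."""

    def __init__(self):
        self.points = []
        self.active = False

    def on_touch_down(self, x, y):          # drawing action starts
        self.points, self.active = [(x, y)], True

    def on_touch_move(self, x, y):          # intermediate hand-drawn points
        if self.active:
            self.points.append((x, y))

    def on_touch_up(self, x, y):            # drawing action ends
        if self.active:
            self.points.append((x, y))
            self.active = False
        return self.points                  # the hand-drawn graphic trajectory
```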
  • the controller 250 can extract the coordinates of each hand-drawn point in the hand-drawn graphic trajectory, and determine the first characteristic direction by analyzing the coordinate change law and the relative positional relationship between the coordinates.
  • the first characteristic direction is the direction of the connecting line when the positional relationship between at least two hand-drawn points in the hand-drawn graphic trajectory satisfies the preset positional relationship.
  • for example, for an ellipse, the distance between any two hand-drawn points in the hand-drawn trajectory can be calculated to generate a first distance; the first distances between all pairs of hand-drawn points are then compared to obtain the two hand-drawn points with the farthest first distance D_max; a line is connected between these two hand-drawn points, and the first feature direction is generated according to the direction of this line.
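A brute-force sketch of this farthest-pair rule (O(n²) over the trajectory; names are illustrative):

```python
import math
from itertools import combinations

def first_feature_direction(points):
    """Return the angle (radians) of the line joining the farthest pair, plus the pair."""
    p, q = max(combinations(points, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))   # pair at D_max
    return math.atan2(q[1] - p[1], q[0] - p[0]), (p, q)
```

For an ellipse this farthest pair approximates the long-axis endpoints; the endpoint/extreme-point variant described later in this section reduces the number of distance comparisons.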
  • the first feature direction may also be determined by the coordinate change law between multiple hand-drawn points. For example, in the process of identifying the polygon, it may be determined whether the plurality of consecutive hand-drawn points constitute the vertex of the polygon according to the coordinate change rule between the plurality of consecutive hand-drawn points.
  • the specific algorithm may include: comparing the position coordinates of multiple consecutive hand-drawn points to obtain the coordinate change values of adjacent hand-drawn points; when the coordinate change values remain stable, determining that the hand-drawn points input by the user are linearly distributed; fitting the sides of the polygon according to the coordinates of the hand-drawn points; and extracting the slope of each side and the slope change points, the slope change points being determined as polygon vertices. The first characteristic direction is then determined according to the positional relationship of the multiple vertices. For example, for a trapezoid, the direction in which the two parallel sides lie can be determined as the first characteristic direction.
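A rough sketch of the slope-change idea (the turn threshold and the assumption of a lightly smoothed trajectory are mine, not the patent's):

```python
import math

def find_vertices(points, turn_threshold_deg=30.0):
    """Flag hand-drawn points where the local direction turns sharply."""
    vertices = []
    for i in range(1, len(points) - 1):
        a1 = math.atan2(points[i][1] - points[i - 1][1],
                        points[i][0] - points[i - 1][0])
        a2 = math.atan2(points[i + 1][1] - points[i][1],
                        points[i + 1][0] - points[i][0])
        turn = abs(math.degrees(a2 - a1)) % 360.0
        turn = min(turn, 360.0 - turn)          # smallest turning angle
        if turn > turn_threshold_deg:           # slope change point => vertex
            vertices.append(points[i])
    return vertices
```

On raw touch samples adjacent points are close together and noisy, so in practice the trajectory would be resampled or the slopes fitted over several points before applying such a threshold.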
  • the angle between the first characteristic direction and the preset judgment direction is detected.
  • the angle of inclination of the hand-drawn graphic by the user may be determined according to the included angle between the first characteristic direction and the preset judgment direction.
  • the preset judgment direction is a reference direction calibrated according to the drawing interface, which may be a horizontal direction, a vertical direction, and other specific tilt angle directions.
  • for example, for an ellipse, the angle between the long-axis direction and the horizontal direction can be detected, thereby determining the inclination angle of the hand-drawn ellipse.
  • for polygons such as trapezoids, the angle between the base and the horizontal direction can be detected, so as to determine the inclination angle of the hand-drawn trapezoid.
  • the hand-drawn graphic trajectory is rotated according to the included angle, so that the first characteristic direction is parallel to the preset judgment direction.
  • the hand-drawn graphic can be rotated according to the detected angle, so that the hand-drawn graphic is transformed into a positive state.
  • for example, if the detected angle is 30 degrees, the hand-drawn graphic trajectory can be rotated by 30 degrees, so that the long-axis direction becomes parallel to the horizontal direction.
  • the direction of rotation may be determined according to the sign of the relative angle; that is, +30 degrees means clockwise rotation, and -30 degrees means counterclockwise rotation.
  • the rotation origin can be determined according to the position of the center of the graphic; that is, after the user inputs the hand-drawn graphic trajectory, the minimum and maximum coordinate values of the hand-drawn points in the horizontal and vertical directions are determined according to the coordinates of the hand-drawn points, so as to solve for the coordinates of the center point, as shown in the following formulas 1 and 2:
  • x0 = (x_min + x_max) / 2 (1); y0 = (y_min + y_max) / 2 (2)
  • where x_min, y_min are the minimum coordinate values in the x-axis direction and the y-axis direction, respectively; and x_max, y_max are the maximum coordinate values in the x-axis direction and the y-axis direction, respectively.
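A sketch combining formulas 1-2 with the rotation step; the sign convention (positive angles rotating clockwise in y-down screen coordinates) is an assumption:

```python
import math

def rotate_about_center(points, angle_deg):
    """Rotate a hand-drawn trajectory about the centre of its bounding box."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2   # formulas 1 and 2
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
             cy + (x - cx) * sin_a + (y - cy) * cos_a)
            for x, y in points]
```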
  • the coordinates of each hand-drawn point in the rotated hand-drawn graphic trajectory can also be traversed again, so as to obtain the second feature direction.
  • the second characteristic direction is a direction satisfying a preset geometric relationship with the first characteristic direction.
  • the second characteristic direction may have a specific geometric relationship with the first characteristic direction according to a specific graphic type.
  • the second characteristic direction may be perpendicular to the first characteristic direction, or may be parallel to the first characteristic direction.
  • for example, for an ellipse, the distance between two hand-drawn points along the direction perpendicular to the first feature direction may be calculated to generate a second distance; the second distances between all pairs of hand-drawn points are then compared to obtain the two hand-drawn points corresponding to the farthest second distance L_max; a line is connected between these two hand-drawn points, and the second feature direction is generated according to the direction of this line. It can be seen that by determining the second feature direction, the short-axis direction of the ellipse can be obtained.
  • for another example, the coordinates of a plurality of consecutive hand-drawn points along the direction parallel to the first feature direction can be extracted, and their coordinate change values along the direction perpendicular to the first feature direction compared. If the change values are within a preset fluctuation range, the direction of the line connecting the two ends of these consecutive hand-drawn points is determined as the second feature direction. It can be seen that, through this parallel relationship between the second feature direction and the first feature direction, the positions of the two mutually parallel sides of a trapezoid or parallelogram can be determined.
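For the perpendicular case, a sketch that projects every point onto the unit normal of the first feature direction and takes the largest spread as L_max (names are illustrative):

```python
import math

def second_feature_extent(points, first_angle):
    """Return L_max and its two endpoints, measured perpendicular to first_angle."""
    nx, ny = -math.sin(first_angle), math.cos(first_angle)   # unit normal
    proj = [(x * nx + y * ny, (x, y)) for x, y in points]
    lo, hi = min(proj), max(proj)
    return hi[0] - lo[0], (lo[1], hi[1])
```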
  • Standard geometry is drawn according to the first feature direction and the second feature direction.
  • a standard geometric figure can be drawn according to the first characteristic direction and the second characteristic direction, combined with the geometric figure type determined by the feature of the hand-drawn point in the trajectory.
  • for an ellipse, the long-axis endpoints may be located in the first feature direction, the long-axis endpoints being the two hand-drawn points corresponding to the farthest first distance; a circumscribed rectangle is generated according to the second distance and the long-axis endpoints; and the standard geometric figure is generated according to the circumscribed rectangle. It can be seen that through the first feature direction and the second feature direction, the endpoints of the long axis and the short axis can be determined respectively, a circumscribed rectangle can be generated, and the ellipse shape can then be determined.
  • for a trapezoid, the endpoints of the long base may be located in the first feature direction, and the endpoints of the short base in the second feature direction; the long-base and short-base endpoints are then used as vertices to draw the polygonal pattern. It can be seen that the positions of the two bases of the trapezoid can be determined through the first feature direction and the second feature direction, the two legs can be drawn from the corresponding endpoint positions, and the trapezoidal pattern can then be drawn.
  • finally, the drawn figure can be rotated according to the previously detected angle between the first feature direction and the preset judgment direction, so as to restore the recognized figure to the inclined state it had when drawn, completing recognition that matches the user's hand-drawn action.
  • the geometric figure recognition method provided in the above embodiments can be configured in the controller 250 of the display device 200, and is used to recognize the figure when the user performs hand-drawn figure input and to convert the hand-drawn figure into a standard geometric figure, for a better drawing result.
  • the method can eliminate the interference of the inclined state of the hand-drawn graphics on the graphics parameters by rotating the trajectory of the hand-drawn graphics, so as to facilitate the matching of graphics recognition templates, improve the accuracy of graphics recognition, and alleviate the problem of low accuracy of traditional geometric graphics identification methods.
  • the first characteristic direction can be determined according to the direction of the line connecting the two hand-drawn points with the farthest distance by comparing the distance between each two hand-drawn points.
  • the first characteristic direction can also be obtained in the following manner:
  • the endpoints are located according to the coordinate extrema.
  • a third distance between the extreme point and the endpoint is calculated.
  • the third distance is compared to obtain two endpoints that are closest to the extreme point.
  • FIG. 10 is a schematic diagram of extreme points according to one or more embodiments of the present application.
  • endpoints P6 and P7 are closer to the extreme point than endpoints P5 and P8; therefore, the two endpoints closest to the extreme point are determined to be endpoints P6 and P7.
  • the endpoints P5' and P8' are closer to the extreme point than the endpoints P6' and P7', so it is determined that the two endpoints that are closest to the extreme point are the endpoints P5' and P8'.
  • a line is connected between two end points closest to the extreme value point, so as to generate the first characteristic direction according to the connection direction.
  • after the two endpoints are obtained, the first characteristic direction may be determined by connecting a line between them, and the angle between the first characteristic direction and the preset judgment direction may be detected, followed by the subsequent steps to finally determine the standard geometric figure.
  • in this way, the first characteristic direction can be determined with fewer distance comparisons, via the endpoints and the extreme point, which greatly reduces the time consumed in determining the first characteristic direction and improves the real-time response speed of the demonstration process.
  • the step of acquiring the hand-drawn graphic trajectory input by the user further includes:
  • the coordinates of the hand-drawn points in the hand-drawn graphic trajectory can be calculated to determine the coordinate change rule between the hand-drawn points.
  • a feature recognition model can be built into the drawing application, with multiple feature labels built into the model. After the hand-drawn graphic trajectory is input into the model, the model can output the classification probability of the current trajectory relative to each feature label, so as to determine whether the coordinate change law is the same as a preset shape law.
  • if the coordinate change law is the same as the preset shape law, it is determined that the hand-drawn graphic input by the user is a recognizable standard geometric figure, so the step of traversing the coordinates of each hand-drawn point in the trajectory to obtain the first characteristic direction can be performed, and recognition of the hand-drawn graphic is completed according to the identification method in the above embodiments.
  • if the coordinate change law differs from the preset shape law, it is determined that the graphic input by the user may be a more complex graphic, such as written text, so the display can be controlled to display the hand-drawn graphic trajectory in real time to ensure a normal presentation effect.
  • in this way, the hand-drawn graphic trajectory input by the user can be detected in real time: geometric figure recognition is performed when the trajectory conforms to the preset shape law, and the pattern drawn by the user is still displayed when it does not, thereby realizing the geometric figure recognition function while ensuring the normal demonstration effect.
  • the step of rotating the standard geometric figure according to the included angle further includes:
  • comparing the angle with a preset angle threshold; if the angle is greater than the angle threshold, reversely rotating the standard geometric figure according to the angle, the reverse rotation direction of the standard geometric figure being opposite to the rotation previously applied to the hand-drawn graphic trajectory; and controlling the display to display the reversely rotated standard geometric figure.
  • after the standard geometric figure is generated, its tilt state can be detected; that is, the tilt state is determined by comparing the angle between the first characteristic direction and the preset judgment direction with the preset angle threshold.
  • if the inclination angle is small, that is, the angle is less than or equal to the angle threshold, the generated standard geometric figure can be displayed directly, thereby realizing forward display of the generated standard geometric figure.
  • if the inclination angle is large, that is, the angle is greater than the angle threshold, it is determined that the figure drawn by the user is in an inclined state, so the step of reversely rotating the standard geometric figure according to the angle can be performed; clearly, the reverse rotation direction of the standard geometric figure is opposite to the rotation applied to the hand-drawn graphic trajectory.
  • the forward state in some embodiments of the present application may include a forward state relative to the horizontal direction and a forward state relative to the vertical direction. Therefore, in practical applications, the angle between the first characteristic direction and the horizontal direction and the angle between the first characteristic direction and the vertical direction can be detected respectively, and the smaller of the two is compared with the preset angle threshold to determine whether the drawn figure is in a forward state.
  • for example, for an ellipse, the angle between the long axis of the ellipse and the horizontal or vertical direction is determined. If the angle between the long axis and the horizontal direction is less than a certain threshold (such as 15 degrees), the recognized ellipse is adjusted so that its long axis is parallel to the horizontal direction; if the angle between the long axis and the vertical direction is less than a certain threshold (such as 15 degrees), the recognized ellipse is adjusted so that its long axis is parallel to the vertical direction. A sketch of this snap rule follows below.
  • for a polygon (such as a rectangle, parallelogram, or trapezoid), the angle between one of the parallel sides and the horizontal or vertical direction is determined. If the angle between that parallel side and the horizontal direction is less than a certain threshold (15 degrees), the recognized polygon is adjusted so that its parallel sides are parallel to the horizontal direction; if the angle between the parallel sides and the vertical direction is less than a certain threshold (15 degrees), the recognized polygon is adjusted so that its parallel sides are parallel to the vertical direction.
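A minimal sketch of this snap-or-restore decision, assuming the 15-degree threshold used in the examples above:

```python
def final_tilt(angle_deg, threshold_deg=15.0):
    """Decide the displayed tilt of the recognized figure from its drawn tilt."""
    a = angle_deg % 180.0                   # direction only, ignoring orientation
    if min(a, 180.0 - a) < threshold_deg:
        return 0.0                          # snap parallel to the horizontal
    if abs(a - 90.0) < threshold_deg:
        return 90.0                         # snap parallel to the vertical
    return angle_deg                        # keep the hand-drawn inclination
```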
  • in some embodiments, before the step of comparing the angle with the preset angle threshold, the method further includes:
  • an automatic angle adjustment switch function may be implemented in an application program through a specific interactive UI or a specific setting program.
  • for example, a switch button may be displayed in the drawing interface or the settings interface to indicate whether the automatic angle adjustment function is on or off. The user can toggle the automatic angle adjustment switch through actions such as clicking, sliding, or checking.
  • a geometric figure automatic angle adjustment switch can be added on the drawing interface. If the user turns on the switch, the figure angle can be adjusted automatically when the geometric figure is recognized; if the user turns off the switch, the automatic angle adjustment is no longer performed.
  • the geometric figure recognition method may further include the following steps:
  • in response to the control instruction for drawing input by the user, controlling the display to display the hand-drawn graphic trajectory in real time;
  • after the standard geometric figure is recognized, controlling the display to cancel the display of the hand-drawn graphic trajectory and to display the standard geometric figure.
  • the display device 200 can display the hand-drawn graphic trajectory in real time according to the control instruction input by the user. And, after the standard geometric figure is identified, the display of the hand-drawn figure track is canceled, and the standard geometric figure is displayed at the corresponding position, so as to adapt to the user's input of the hand-drawn figure.
  • FIGS. 11-12 are flowcharts of geometric figure recognition according to one or more embodiments of the present application.
  • a display device 200 is also provided in some embodiments of the present application, which includes the display 275, the input/output interface 255, and the controller 250.
  • the display 275 is configured to display a user interface;
  • the input/output interface 255 is configured to connect to the input device 500;
  • the controller 250 is configured to perform the following program steps:
  • Standard geometric figures are generated from the hand-drawn figure trajectories.
  • the standard geometric figure has the same inclination angle as the hand-drawn graphic trajectory; it is drawn from the rotated hand-drawn graphic trajectory and then generated by reverse rotation.
  • the display device 200 provided in this embodiment can be connected to the input device 500 through the input/output interface 255, so that the user can interact through the input device 500 and input a hand-drawn graphic trajectory, from which the controller 250 generates the standard geometric figure.
  • specifically, the controller 250 determines the first characteristic direction by traversing the coordinates of each hand-drawn point in the hand-drawn graphic trajectory, rotates the hand-drawn graphic according to the angle between the first characteristic direction and the preset judgment direction, and thereby determines the second characteristic direction.
  • the standard geometric figure is drawn according to the first characteristic direction and the second characteristic direction, and is finally adapted to the position of the hand-drawn figure through rotation.
  • the display device can eliminate the interference of the inclined state of the hand-drawn graphics on the graphics parameters by rotating the trajectory of the hand-drawn graphics, improve the accuracy of graphics recognition, and alleviate the problem of low accuracy of traditional geometric graphics identification methods.
  • some embodiments of the present application further provide a display device 200 including a display 275, a touch component, and a controller 250.
  • the display 275 is configured to display a user interface;
  • the touch component is configured to obtain a user's touch input;
  • the controller 250 is configured to perform the following program steps:
  • Standard geometric figures are generated from the hand-drawn figure trajectories.
  • the standard geometric figure has the same inclination angle as the hand-drawn graphic trajectory; it is drawn from the rotated hand-drawn graphic trajectory and then generated by reverse rotation.
  • the display device 200 can detect the user input through the built-in touch component, so as to obtain the hand-drawn graphic track input by the user.
  • the controller then generates the standard geometric figure according to the hand-drawn graphic trajectory; that is, it determines the first characteristic direction from the input trajectory, determines the second characteristic direction after rotation, and draws the standard geometric figure according to the first characteristic direction and the second characteristic direction.
  • the display device 200 can form a touch screen through the built-in touch component cooperating with the display 275, which facilitates user input; by rotating the hand-drawn graphic, the influence of the tilted state on the recognition process is alleviated and the accuracy of graphic recognition is improved.
  • Since each layer displays different content, the patterns in each layer may have different picture resolutions. For example, the resolution of the GOP layer is 2K while the resolution of the OSD layer and the video layer is 4K; when pictures are superimposed, the differing resolutions make it difficult to align the patterns in each layer, resulting in display deviations or errors.
  • To allow superimposed display when layers differ in resolution, an interpolation operation can be performed on the pattern in the lower-resolution layer to raise that layer's picture resolution.
  • FIGS. 13-15 are schematic diagrams of multi-layer stacking according to one or more embodiments of the present application. As shown in FIG. 13, when the GOP2 layer, the OSD layer, and the Video layer are superimposed, the GOP2 layer (2K resolution) must first be raised to 4K through the interpolation algorithm, because the OSD layer and the Video layer are 4K, and only then superimposed with the other two layers.
  • The interpolation operation is an image interpolation algorithm: the pixel content to be inserted is calculated from adjacent pixel content in the image, thereby raising the resolution of the picture.
  • However, the layers to be superimposed include transparency information, and different layers are often set with different transparency; when the interpolation algorithm runs, the content of adjacent pixels is affected by the transparency, so the edge positions of the drawn pattern show display errors after the interpolation algorithm.
  • Taking the electronic whiteboard application of the display device 200 as an example, the writing process is generally displayed on the GOP layer, the drawn lines after writing are displayed on the OSD layer, and the actual electronic whiteboard interface displays the superposition of the GOP layer and the OSD layer.
  • When the GOP2 layer (2K) is to be superimposed with the OSD layer (4K), the resolution of GOP2 must first be raised to 4K, which requires an interpolation algorithm on the pixels.
  • If the background of GOP2 is transparent (that is, the background color is 0x00000000), the line-drawing color and the transparent color are interpolated at the border of the drawn line. Since the transparent color contributes nothing to the interpolation algorithm, aliasing appears after switching from 2K to 4K; if the color value 000000 of the transparent color is taken into account instead, the interpolation mixes the line-drawing color with transparent black, producing semi-transparent black after interpolation, which appears as a black fringe at the border of the drawn line.
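As an illustration of the black-fringe effect described above, the following is a minimal Python sketch, assuming straight (non-premultiplied) channel-by-channel averaging of (A, R, G, B) pixels; the function name and pixel values are illustrative, not part of the embodiments.

```python
def average_argb(p, q):
    """Naively average two (A, R, G, B) pixels channel by channel."""
    return tuple((a + b) // 2 for a, b in zip(p, q))

line_pixel = (255, 0, 255, 0)       # fully opaque green stroke
transparent_black = (0, 0, 0, 0)    # transparent GOP2 background, 0x00000000

edge_pixel = average_argb(line_pixel, transparent_black)
print(edge_pixel)  # (127, 0, 127, 0): half-transparent, darkened green,
                   # rendered as a dark fringe along the stroke border
```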
  • To improve the above edge display errors of the touch track pattern, some embodiments of the present application provide a multi-layer overlay display method, which can be applied to a display device 200 including a display 260, a touch component 276, and a controller 250, where the touch component 276 is configured to detect the touch track input by the user. As shown in FIG. 14 and FIG. 15, the multi-layer overlay display method includes the following steps:
  • The touch track pattern in the first layer is acquired, as well as the background pattern in the second layer.
  • The first layer is used to display the touch track pattern, and the second layer is used to display interface content such as the application interface, application menu, and toolbar; the second layer is therefore the layer one level below the first layer. For example, the first layer is the GOP2 layer and the second layer is the OSD layer.
  • The user can click an application icon in the application launching interface to start the relevant application. If the started application is one capable of using the first layer, the application interface may be displayed in the second layer.
  • the touch track input by the user is detected in real time by the touch component 276, and the touch track pattern is presented in the first layer according to the user input action.
  • In the embodiments of the present application, the content displayed in the second layer includes not only application content such as the application interface, application menu, and toolbar, but also the painting content synchronized to the second layer after a touch action is completed. For ease of description, the application interface content displayed in the second layer is therefore called the background pattern.
  • After acquiring the touch track pattern in the first layer and the background pattern in the second layer, the controller 250 may perform an interpolation algorithm on the touch track pattern according to the background pattern, so as to convert the touch track pattern into a converted pattern whose resolution equals that of the background pattern.
  • The interpolation algorithm is used to change the resolution of the touch track pattern and, depending on the required image quality, can take different forms, such as nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, or directional interpolation.
  • Taking neighbor interpolation as an example, when a 2K image needs to be transformed into a 4K image, the pixel values in the 2K image can be traversed, and the average of the values of two adjacent pixels computed to obtain the value of the pixel to be inserted. For example, for adjacent pixels (0, 255, 0) and (255, 255, 255), the inserted value is computed per RGB channel: R = (0+255)/2 = 128, G = (255+255)/2 = 255, B = (0+255)/2 = 128.
  • When executing the interpolation algorithm, pixel image data can be extracted from the edge of the touch track pattern and from the nearby positions of the background pattern, so that the image data of the interpolated pixels is calculated from the pixel data extracted from both the background pattern and the touch track pattern.
  • For example, after the touch track pattern is acquired, its color can be extracted to obtain the image data (192, 0, 255, 0), i.e., the user hand-painted a pure green graphic with 75% opacity, while color extraction on the background pattern yields its image data (255, 255, 255, 255), i.e., a pure white interface. The interpolated pixel data is then computed from the extracted image data: the transparency channel stays at 192 (75% opacity), R = (0+255)/2 = 128, G = (255+255)/2 = 255, and B = (0+255)/2 = 128, so the inserted pixel is (192, 128, 255, 128).
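The following is a minimal Python sketch of this background-aware edge interpolation, assuming (A, R, G, B) pixels: the alpha is taken from the track pattern and the color channels are averaged (with rounding) between the track edge and the background. The function name is illustrative only.

```python
def interpolate_edge(track_px, background_px):
    """Keep the track pattern's alpha; average the RGB channels with the
    background so the inserted edge pixel blends toward the layer below."""
    a = track_px[0]
    rgb = tuple((t + b + 1) // 2 for t, b in zip(track_px[1:], background_px[1:]))
    return (a, *rgb)

track = (192, 0, 255, 0)           # 75%-opaque pure green stroke
background = (255, 255, 255, 255)  # opaque white OSD interface

print(interpolate_edge(track, background))  # (192, 128, 255, 128)
```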
  • The converted pattern and the background pattern are superimposed, so as to control the display to show the superimposed result in real time.
  • After the interpolation operation is performed on the touch track pattern, the controller 250 may superimpose the result of the interpolation operation with the background pattern and display the superposition result on the display 260 in real time. In the above embodiment, since the interpolated pixel data is determined by both the background pattern and the touch track pattern during interpolation, no black fringes or jagged edges appear at the edge of the touch track pattern when the two layer pictures are superimposed, which improves the display effect of the layer overlay process.
  • In some embodiments, the step of acquiring the touch track pattern in the first layer further includes: receiving the touch track input by the user in real time; extracting the foreground color in response to the touch track; and presenting the touch track in the first layer according to the foreground color, so as to generate the touch track pattern.
  • After the display device 200 starts an application such as the demonstration whiteboard, the user can input a touch track through touch actions. After the touch component 276 detects the touch track, it can send it to the controller 250, so that the controller 250 extracts the foreground color in response to the touch track.
  • The foreground color is the brush color selected by the user during the painting demonstration. For example, in the toolbar window of the demonstration whiteboard interface, the user can select a brush shape and set the foreground color to green; after the user subsequently inputs a sliding touch operation, a green touch track is formed in the whiteboard interface. Therefore, to obtain the pixel data corresponding to the touch track pattern, the controller 250 can directly extract the foreground color and present the touch track in the first layer according to it, generating the touch track pattern.
  • While generating the touch track pattern, the controller 250 may also retain the extracted foreground color as the pixel data corresponding to the touch track pattern; when the interpolation algorithm is executed, the interpolation can then be computed directly from the foreground color data and the background color data extracted from the second layer.
  • For example, a transparent color picker can be set in the demonstration whiteboard program of the display device 200, and the color of the background part of the GOP layer set to the picker's color, so that no boundary line appears after the line-drawing color and the picker color are interpolated.
  • When the brush color or the interface color in the transparent layer is uniform, the transparent color picker takes the brush color or interface color and sets it to full transparency; the color at the border between the drawn line (or interface) and the transparent layer is then the interpolated semi-transparent value of the brush color, and no distinct black or other border color appears.
  • However, in some demonstration or painting processes, the brush used may not have a fixed color; it may be a multi-color pen whose stroke presents a combination of multiple colors along the extension of the touch track. When the user paints with such a pen, if the controller 250 uses the foreground color as the pixel data of the touch track pattern, the extracted pixel data will not match the actual touch track pattern, affecting the result of the interpolation algorithm.
  • Therefore, in some embodiments, the step of performing the interpolation operation on the touch track pattern further includes: first extracting the border color and border position of the touch track pattern; then, from the second layer, extracting the background color in the area associated with the border position; and obtaining the interpolation result from the border color and the background color, so as to perform the interpolation operation on the touch track pattern.
  • While the touch track pattern is presented in the first layer, the controller 250 can extract its border by executing an image analysis program, obtaining the border color and the positions of the border pixels.
  • The image border can be determined by traversing all pixels in the image and evaluating the color-value gap between two adjacent pixels: where the gap is large, those adjacent pixels lie on the border of the touch track pattern. Moreover, since the first layer is displayed topmost and, for superimposed display, pixels of the first layer that do not belong to the touch track pattern have an opacity of 0%, the border of the touch track pattern can also be determined from the opacity.
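A minimal Python sketch of the opacity-based border detection follows, assuming the first layer is a 2-D grid of (A, R, G, B) tuples where non-track pixels have alpha 0; the function name is an illustrative assumption.

```python
def find_track_border(layer):
    """Return ((x, y), ARGB) for each track pixel adjacent to a fully
    transparent (alpha == 0) pixel, i.e., the border of the track pattern."""
    h, w = len(layer), len(layer[0])
    border = []
    for y in range(h):
        for x in range(w):
            if layer[y][x][0] == 0:
                continue  # fully transparent: not part of the touch track
            neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(0 <= ny < h and 0 <= nx < w and layer[ny][nx][0] == 0
                   for ny, nx in neighbors):
                border.append(((x, y), layer[y][x]))
    return border
```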
  • After acquiring the border color and border position, the controller 250 may also extract the background color from the second layer. If the background pattern in the second layer is a solid-color background, the background color can be extracted from any pixel of the background pattern; if it is not a solid-color background, the second layer must be searched according to the border position, and the color of the background-pattern pixel corresponding to the border position is taken as the background color.
  • Obviously, the border of the touch track pattern is a two-dimensional array composed of multiple pixels, so the background color extracted from the background pattern is also a two-dimensional array composed of multiple pixels. The controller 250 then obtains the interpolation result from the border color and the background color, converting the touch track pattern into a higher-resolution converted pattern for the superposition operation between layers.
  • For example, for colored line drawing, the colors in the line are not fixed; if the transparent color picker selects only one of them, different colors still appear at the border. In this case, the transparent color picker can select the color of the OSD layer: since the layers will eventually be superimposed after the resolution is raised, the color of the next layer to be superimposed is selected.
  • If the background color is a single color, the transparent color picker takes the fully transparent value of that single color; if the background is not a single color, the transparent color picker is a two-dimensional array: for the area of the GOP layer that displays content, the color array of the corresponding position in the OSD layer is obtained, and each color in the array is taken at its fully transparent value as the picker color.
  • This transparent color picker is used as the background of the content to be displayed. When superimposing, because the picker holds the color values of the layer beneath the border area, the border color after interpolation is the semi-transparent value of the color to be superimposed, so no abnormal border line or border color appears after superposition.
  • According to the above embodiments, from the border color and border position of the touch track pattern, the pixels at the border position can be determined directly, and the background color associated with the border pixels determined at the same time, adapting to the color changes of the touch track in the first layer and of the background pattern in the second layer. Thus, when the border of the touch track pattern is interpolated, an interpolation result suited to the colors of both layers can be obtained, improving the image quality of the border area.
  • Since in practice the interpolation algorithm is only needed when the layers have different resolutions, in some embodiments the step of performing the interpolation operation on the touch track pattern further includes: detecting the resolutions of the touch track pattern and the background pattern, and then executing different program steps according to the detection result. If the resolution of the touch track pattern is smaller than that of the background pattern, the step of extracting the border color and border position of the touch track pattern is performed; if the resolution of the touch track pattern equals that of the background pattern, the touch track pattern and the background pattern are superimposed directly.
  • the resolutions of the touch track pattern and the background pattern may be obtained from the screen resolution supported by the display 260 of the display device 200 or the resolution supported by the currently running application. After the resolutions of the touch track pattern and the background pattern are detected, the resolutions of the two layers can be compared, and the superposition method can be determined according to the comparison result.
  • When the resolution of the touch track pattern is smaller than that of the background pattern — that is, the content displayed in the first layer has a lower resolution than the content displayed in the second layer — the lower-resolution pattern must be enlarged, i.e., the interpolation algorithm is executed on the touch track pattern, starting with the step of extracting its border color and border position.
  • Obviously, when executing the interpolation algorithm, the number of pixels to insert must also be determined from the resolution of the background pattern. For example, if the resolution of the GOP layer is 2K and that of the OSD layer is 4K, double the number of pixels must be inserted into the touch track pattern in the GOP layer, so that the touch track pattern is also converted into a 4K pattern.
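The following Python sketch illustrates the pixel-doubling idea for one row of single-channel values; it is a simplification (a full 2K-to-4K conversion repeats the scheme per ARGB channel and per axis, and also duplicates the final row/column to reach exactly double size).

```python
def upscale_row_2x(row):
    """Keep every original pixel and insert the rounded average between
    each pair of neighbors, roughly doubling the horizontal resolution."""
    out = []
    for left, right in zip(row, row[1:]):
        out.append(left)
        out.append((left + right + 1) // 2)  # inserted pixel
    out.append(row[-1])
    return out

print(upscale_row_2x([0, 128, 255]))  # [0, 64, 128, 192, 255]
```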
  • When the resolution of the touch track pattern equals that of the background pattern — that is, the patterns in the first and second layers have the same resolution — no interpolation of the touch track pattern is needed, and the touch track pattern and the background pattern are superimposed directly.
  • For different types of application program interfaces, the display device 200 can also display by superimposing more layers; that is, in addition to the first layer and the second layer, a third layer such as the video layer needs to be superimposed.
  • While the display device 200 displays the content of the external signal through the video layer, it displays the program interface through the OSD layer and completes the presentation function through the GOP layer.
  • In this case, not only the first layer but also the second layer has a transparency setting, so when the controller 250 extracts the background pattern from the second layer, it may extract a transparent area, which affects the result of the interpolation algorithm and the result of the superposition.
  • Therefore, in some embodiments, when displaying the content of an external signal, the step of performing the interpolation operation on the touch track pattern according to the background pattern further includes: detecting the transparency of the background pattern; according to the detection result, if the transparency of the background pattern is fully transparent or semi-transparent, acquiring the bottom-layer pattern in the third layer and then superimposing the background pattern with the bottom-layer pattern; and presenting the superimposed background pattern in the second layer.
  • To reduce the influence of transparent areas in the second layer on the interpolation result, before the interpolation algorithm is executed, the transparency of the background pattern can also be detected to determine whether the background pattern in the second layer is a fully transparent or semi-transparent pattern.
  • The detection can traverse the opacity value of each pixel in the background pattern: if there are pixels or areas with an opacity value of 0 in the background pattern, or the ratio of pixels with an opacity value of 0 to the total number of pixels is greater than a set value, the transparency of the background pattern is determined to be fully transparent or semi-transparent.
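A minimal Python sketch of this transparency check follows, assuming (A, R, G, B) pixels; the ratio threshold stands in for the "set value" mentioned above and is an illustrative assumption.

```python
def background_is_transparent(pattern, ratio_threshold=0.5):
    """Return True when the background pattern counts as fully or
    semi-transparent: here, when the share of opacity-0 pixels exceeds
    the set value (a stricter variant flags any opacity-0 pixel)."""
    total = transparent = 0
    for row in pattern:
        for a, r, g, b in row:
            total += 1
            if a == 0:
                transparent += 1
    return total > 0 and transparent / total > ratio_threshold
```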
  • When the background pattern is a fully transparent or semi-transparent pattern, it is determined that the interpolation algorithm would be affected by the transparent pattern in the second layer, with border defects appearing in part or in whole. In this case, the second layer and the third layer can be superimposed first, and the pattern in the first layer interpolated afterwards; the third layer is the layer one level below the second layer. For example, when the pattern displayed in the OSD layer is detected to be transparent or semi-transparent, the bottom-layer pattern displayed in the video layer can be extracted and superimposed with the background pattern of the second layer, eliminating the influence of the transparent areas so that no transparent color is extracted when the background color is subsequently taken from the second layer.
  • Similarly, for the multi-layer case, the resolutions of the second layer and the third layer can also be detected before the interpolation algorithm is performed on the touch track pattern, so that the resolutions are made consistent before superposition. That is, in the step of superimposing the background pattern and the bottom-layer pattern, their resolutions may also be detected: if the resolution of the background pattern is smaller than that of the bottom-layer pattern, the bottom-layer color is extracted from the third layer, the interpolation algorithm is performed on the background pattern according to the bottom-layer color, and the interpolated background pattern is then superimposed with the bottom-layer pattern.
  • It should be noted that, since the third layer serves as the bottom layer displaying the picture content of an external signal, the display device 200 may not be able to obtain the bottom-layer pattern directly; therefore, when extracting colors from the bottom-layer pattern, the video layer can first be captured as a screenshot, and the bottom-layer color then extracted from the screenshot image.
  • In the above embodiments, the background pattern in the second layer can be processed before the interpolation algorithm and overlay display are performed on the image in the first layer, so that a valid background color can always be extracted from the background pattern displayed in the second layer; when the interpolation algorithm is performed on the first layer, the border of the touch track pattern can then be interpolated reasonably, alleviating border display defects.
  • Based on the above embodiments, the following effects are obtained in the actual layer superposition: for example, if the OSD layer is full-screen and non-transparent, after stacking it completely covers the Video layer, and the user sees the content of the OSD layer and the GOP2 layer; if the OSD layer has transparency, the superimposed result is the combined effect of the GOP2 layer, the OSD layer, and the Video layer.
  • Based on the multi-layer overlay display method above, a display device 200 is also provided in some embodiments of the present application, including: a display 260, a touch component 276, and a controller 250.
  • the display 260 is configured to display a user interface
  • the touch component 276 is configured to detect the touch track input by the user
  • the controller 250 is configured to execute the following program steps:
  • The touch track pattern in the first layer and the background pattern in the second layer are acquired, the second layer being the layer one level below the first layer;
  • According to the background pattern, an interpolation operation is performed on the touch track pattern to generate a converted pattern whose resolution equals that of the background pattern;
  • The converted pattern and the background pattern are superimposed, so as to control the display to show the superimposed result in real time.
  • After acquiring the touch track pattern, the display device 200 described above can perform the interpolation operation on the touch track pattern in the first layer according to the background pattern in the second layer, raising the resolution of the touch track pattern, and finally superimpose the converted pattern with the background pattern and display the result on the display in real time.
  • By performing the interpolation operation according to the background pattern, the display device can alleviate the influence of the first layer's transparency on the result of the edge interpolation of the touch track pattern, mitigate the jagged or black edges that appear when touch track patterns are superimposed, and improve the real-time display effect.
  • the electronic whiteboard application can be installed on a display device with touch function, and the display device with touch function can also be a rotating TV.
  • the display device can be provided with a base and a rotating bracket, the base of the display device can be fixed on the wall, and the display of the display device can be rotated around the base on a vertical plane through the rotating bracket.
  • In some embodiments, the display of the display device may have multiple fixed rotation states, such as a first rotation state, a second rotation state, and a third rotation state, where the first rotation state may be a landscape state, the second rotation state may be a portrait state, and the third rotation state may be a tilted state, such as a state at 45 degrees from the horizontal plane.
  • the display device can be rotated from one rotational state to another rotational state.
  • the fixed rotational state of the display device may also include only the first rotational state and the second rotational state.
  • In some embodiments, while the display of the display device is rotating, the user can press a preset button to pause the rotation, so that the display device stays at the rotation angle the user requires; for example, the preset button can be the OK key on the remote control.
  • However, when the display device is a rotating TV, the GOP2 layer re-establishes its coordinate system according to the current rotation state after the display rotates. FIG. 16 is a schematic diagram of coordinate-system conversion according to one or more embodiments of the present application. As shown in FIG. 16, the default coordinate system is the coordinate system in the landscape state, with coordinate origin A(0,0); after the display rotates to the portrait state, the new coordinate system is that of the portrait state, with coordinate origin A1(0,0), and the original origin A(0,0) has coordinates A0(1080,0) in the new coordinate system.
  • After the user performs a touch operation, the touch point coordinates obtained by the TV are coordinates in the new coordinate system; if the TV still updates the image according to the default coordinate system, the generated touch track will not match the user's touch operation. To solve this, an embodiment of the present application provides a multi-layer overlay display method that, through coordinate conversion, ensures that the image displayed by the display device corresponds to the user's touch operation.
  • In some embodiments, before receiving the touch track input by the user, when the current rotation state is a non-landscape state, such as a portrait state or a tilted state, the controller of the display device may adjust the display directions of the multiple layers of the interface to be displayed so that each corresponds to the first rotation state, and then combine the adjusted layers to obtain the original image.
  • In some embodiments, the method for the display device to adjust a layer so that its display direction corresponds to the first rotation state may be: first, rotate the layer into a landscape layer according to the current rotation state of the display — for example, rotate the layer 90 degrees counterclockwise when the current rotation state is the portrait state; then rotate the pattern inside the layer 90 degrees clockwise, so that the display direction of the pattern is the landscape direction.
  • In some embodiments, before the user inputs a touch track, the content of the GOP2 layer may be empty. The controller may store the original image in the graphics buffer, copy the original image to obtain a backup image, and move the backup image to the native (local service) layer in the system, so that after the user inputs a touch track, the response track can be drawn on the backup image according to the touch track; finally, the original image in the graphics buffer is updated according to the drawn image, so that the display device updates the image shown on the display from the image in the graphics buffer.
  • In some embodiments, to avoid the response track being inconsistent with the user's touch action, after the user inputs a touch track, the display device can obtain the coordinates of the touch points from the touch track. When the display device is in the portrait state, these coordinates are coordinates in the portrait coordinate system; for example, the coordinates of a touch point may be M(X1, Y1).
  • In some embodiments, when the display is in the portrait state, rotating it 90 degrees counterclockwise converts it to the landscape state. According to this rotation relationship, the relation between the coordinates in the portrait coordinate system and the coordinates in the landscape coordinate system is given by formulas 3 and 4:
X1 = h - y    (3)
Y1 = x    (4)
  • where M(X1, Y1) is the coordinate in the portrait coordinate system, m(x, y) is the coordinate corresponding to M(X1, Y1) in the landscape coordinate system, and h is the length of the vertical coordinate axis in the landscape state (h = 1080 when the landscape resolution is 1920*1080). From formulas 3 and 4, the formulas for converting the touch coordinates in the touch track into touch coordinates in the landscape state are 5 and 6:
x = Y1    (5)
y = h - X1    (6)
  • The backup image in the graphics buffer is an image in the landscape coordinate system. After the touch coordinates in the portrait state are converted into touch coordinates in the landscape state, the response track can be drawn on the backup image according to the landscape touch coordinates, yielding the response image of the GOP2 layer; the response image and the backup image are superimposed to obtain the drawn image.
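A minimal Python sketch of formulas 5 and 6 follows, mapping a touch point captured in the portrait coordinate system back to the landscape coordinate system of the backup image; the function name and sample values are illustrative only.

```python
def portrait_to_landscape(X1, Y1, h=1080):
    """h is the landscape vertical-axis length (1080 for 1920*1080)."""
    x = Y1       # formula 5
    y = h - X1   # formula 6
    return x, y

# A portrait touch at M(100, 500) lands at (500, 980) on the landscape backup image.
print(portrait_to_landscape(100, 500))  # (500, 980)
```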
  • In some embodiments, the user operation area in the drawn image can also be obtained from the coordinate range of the response image, and the to-be-updated area of the original image obtained from the user operation area, where the position of the to-be-updated area on the original image can be the same as the position of the user operation area on the drawn image.
  • After the drawn image is obtained, it is rotated to the current rotation state to obtain the to-be-displayed image, and the currently displayed image can be refreshed to the to-be-displayed image. The drawn image can also be copied as the new original image to obtain a new backup image, which is moved to the graphics buffer so as to facilitate responding after a new touch track is received.
  • FIG. 17 is a schematic diagram of a display device in the portrait state according to one or more embodiments of the present application. As shown in FIG. 17, using the image display method in the above embodiments, after the user draws in the portrait state, the display can show a response track consistent with the user's touch track; for example, if the user draws the letter "A", the display device can display the letter "A" at the user's drawing position.
  • In some embodiments, the user may press a preset button on the remote control to send a rotation instruction to the display device, which starts rotating the display according to the instruction; the controller of the display device may be configured to stop after rotating the display 90 degrees by default. When the display device is in the landscape state and receives a rotation instruction, it rotates clockwise to the portrait state; when it is in the portrait state and receives a rotation instruction, it rotates counterclockwise to the landscape state.
  • In some embodiments, besides the landscape and portrait states, the display of the display device may stay at other rotation angles such as 30 degrees, 60 degrees, and so on. The user can press the preset button on the remote control to send a rotation instruction, and the display device starts rotating the display; when the display reaches a desired angle, the user can press the preset button again to send a pause-rotation instruction, and the display device stops the rotation according to it, keeping the display at that angle.
  • FIG. 18 is a schematic diagram of display rotation according to one or more embodiments of the present application. As shown in FIG. 18, when the display rotates, it can rotate around the display's center C(xcenter, ycenter); after it reaches an angle θ, the user can input a pause-rotation instruction to make the display device stay at the angle θ. In FIG. 18, A(0,0) of the display becomes A2 after rotation, and at the angle θ, the upper-left vertex of the rotated image is A4, the upper-right vertex is A2, the lower-left vertex is A6, and the lower-right vertex is A5.
  • In some embodiments, after the display stops rotating, the controller of the display device may calculate the width and height of the rotated image from the current angle θ and the length of the display's diagonal: the maximum width of the rotated image is the horizontal distance between A4 and A5, and the maximum height is the vertical distance between A2 and A6. A new graphics buffer is generated from the rotated width and height, and a new coordinate system is established with coordinate origin A3(0,0).
  • According to the coordinate conversion formulas, the touch coordinates in the touch track can be converted into touch coordinates in the landscape state, after which the response image can be drawn and the image shown on the display updated.
  • In some embodiments, the display device may be configured to also respond to touch operations while rotating: from the coordinate conversion in the tilted state, a response track in the landscape state corresponding to the touch track can be obtained, and the image to be displayed can then be generated from that response track.
  • In some embodiments, the controller of the display device may also detect the rotation state of the display. During rotation, it may detect that the rotation state changes from the first rotation state toward the second rotation state; once the display has rotated past the state midway between the first and second rotation states, the displayed image can be rotated to the display direction consistent with the second rotation state. For example, when rotating from the landscape state to the portrait state, the displayed image can be switched to the portrait-state image when the rotation angle reaches 45 degrees, so that the user can view the display content during rotation.
  • In some embodiments, the application program on the display may also be displayed in a non-full-screen mode, such as a half-screen display; in this case, after the portrait coordinate system is converted into the landscape coordinate system, there is an offset between the landscape coordinate system and the display's default coordinate system.
  • FIG. 19-20 are schematic interface diagrams of an electronic whiteboard application according to one or more embodiments of the present application.
  • As shown in FIG. 19, in the portrait state, the electronic whiteboard application can be displayed in a non-full-screen mode; of course, it can also be displayed non-full-screen in the landscape state.
  • In the non-full-screen display, given that the electronic whiteboard application's offset from the left boundary of the display in the landscape state is xoffset and its offset from the upper boundary is yoffset, after the portrait coordinate system is converted into the landscape coordinate system, the offset between the landscape coordinate system and the display's default coordinate system comprises xoffset and yoffset.
  • In some embodiments, if the electronic whiteboard application is displayed non-full-screen, then after the touch coordinates in the touch track are converted into landscape coordinates, the offsets must be subtracted: xoffset from the horizontal coordinate and yoffset from the vertical coordinate. The response image is then drawn according to the landscape coordinates minus the offsets, and the image shown on the display is updated.
  • In some embodiments, if the content displayed by the electronic whiteboard application is located at the boundary of the display, as shown in FIG. 20, boundary processing needs to be performed; the boundary processing may include left-boundary processing and upper-boundary processing.
  • The left-boundary processing includes: after the touch coordinates in the touch track are converted into landscape coordinates, if the starting horizontal coordinate startx is less than or equal to xoffset, the picture pixels to be copied are xoffset - startx, and the starting coordinate becomes xoffset.
  • The upper-boundary processing includes: after the touch coordinates are converted into landscape coordinates, if the starting vertical coordinate starty is less than or equal to yoffset, the picture pixels to be copied are yoffset - starty, and the starting coordinate becomes yoffset.
  • The response image is drawn according to the above picture pixels to be copied, and the image shown on the display is then updated.
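A minimal Python sketch of the offset subtraction and boundary clamping described above follows; the function name and sample values are illustrative, with xoffset/yoffset as the whiteboard window's left/top offsets in the landscape coordinate system.

```python
def to_window_coords(startx, starty, xoffset, yoffset):
    """Clamp the starting coordinates to the window's left/upper boundary
    and return window-local coordinates plus the pixel counts to skip."""
    skip_x = skip_y = 0
    if startx <= xoffset:            # left-boundary processing
        skip_x = xoffset - startx    # picture pixels to skip when copying
        startx = xoffset
    if starty <= yoffset:            # upper-boundary processing
        skip_y = yoffset - starty
        starty = yoffset
    return startx - xoffset, starty - yoffset, skip_x, skip_y

print(to_window_coords(30, 200, 100, 50))  # (0, 150, 70, 0)
```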
  • As can be seen from the above embodiments, the display device composes the multiple layers in advance; after the touch track is obtained, it only needs to be superimposed on the pre-composed image, and there is no need to compose the multiple layers again through the SurfaceFlinger service after the touch track is obtained, which improves the efficiency of image display.


Abstract

A display device, a geometric figure recognition method, and a multi-layer overlay display method. While the display shows a user interface, the method can detect a touch track input by the user through a touch component and present the touch track pattern in a first layer in real time. After the touch track pattern is acquired, an interpolation operation can also be performed on the touch track pattern in the first layer according to the background pattern in a second layer, raising the resolution of the touch track pattern; finally, the converted pattern produced by the interpolation operation is superimposed with the background pattern and displayed on the display in real time.

Description

Display device, geometric figure recognition method, and multi-layer overlay display method
This application claims priority to Chinese patent applications No. 202110171543.X filed on February 8, 2021, No. 202011528031.6 filed on December 22, 2020, and No. 202011188310.2 filed on October 30, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to display device technology, and in particular to a display device, a geometric figure recognition method, and a multi-layer overlay display method.
Background
In conference and education scenarios, display devices with touch functions are usually installed with a "demonstration whiteboard" application. When the user opens the application, the display presents a drawing area in which the user can draw a specific touch-action track through sliding touch instructions; the controller determines the touch-action pattern from the touch actions detected by the touch component and controls the display to show it in real time, meeting the demonstration requirements.
Summary
Some embodiments of the present application provide a display device including a display, an input/output interface, and a controller. The display is configured to display a user interface; the input/output interface is configured to connect an input device; and the controller is configured to perform the following program steps: acquiring, through the input/output interface, a hand-drawn graphic trajectory input by the user; traversing the coordinates of each hand-drawn point in the hand-drawn graphic trajectory to obtain a first characteristic direction, the first characteristic direction being the direction of the line connecting at least two hand-drawn points in the trajectory whose positional relationship satisfies a preset positional relationship; detecting the angle between the first characteristic direction and a preset judgment direction; rotating the hand-drawn graphic trajectory by the angle so that the first characteristic direction is parallel to the preset judgment direction; traversing the coordinates of each hand-drawn point in the rotated trajectory to obtain a second characteristic direction, the second characteristic direction being a direction that satisfies a preset geometric relationship with the first characteristic direction; drawing a standard geometric figure according to the first characteristic direction and the second characteristic direction; and rotating the standard geometric figure by the angle.
Some embodiments of the present application provide a geometric figure recognition method applied to a display device including a display and a controller and having a built-in or external input device, the method including: acquiring a hand-drawn graphic trajectory input by the user; traversing the coordinates of each hand-drawn point in the trajectory to obtain a first characteristic direction as defined above; detecting the angle between the first characteristic direction and a preset judgment direction; rotating the trajectory by the angle so that the first characteristic direction is parallel to the preset judgment direction; traversing the coordinates of each hand-drawn point in the rotated trajectory to obtain a second characteristic direction as defined above; drawing a standard geometric figure according to the first and second characteristic directions; and rotating the standard geometric figure by the angle.
Some embodiments of the present application provide a display device including: a display, a touch component, and a controller. The display is configured to display a user interface, the touch component is configured to detect a touch track input by the user, and the controller is configured to perform the following program steps: acquiring a touch track pattern in a first layer and a background pattern in a second layer, the second layer being the layer one level below the first layer; performing, according to the background pattern, an interpolation operation on the touch track pattern to generate a converted pattern whose resolution equals that of the background pattern; and superimposing the converted pattern and the background pattern so as to control the display to show the superimposed result in real time.
Some embodiments of the present application further provide a multi-layer overlay display method applied to a display device including a display, a touch component, and a controller, the touch component being configured to detect a touch track input by the user, the method including: acquiring a touch track pattern in a first layer and a background pattern in a second layer, the second layer being the layer one level below the first layer; performing, according to the background pattern, an interpolation operation on the touch track pattern to generate a converted pattern whose resolution equals that of the background pattern; and superimposing the converted pattern and the background pattern so as to control the display to show the superimposed result in real time.
Some embodiments of the present application further provide a display device including: a display; a touch component configured to detect a touch track input by the user; and a controller configured to: when the display is in a second rotation state, draw a response track on a backup image of an original image according to the touch coordinates, in a first rotation state, corresponding to the touch track, where the original image is the image, in the first rotation state, corresponding to the image displayed on the display before the touch track was detected; and update the image displayed on the display according to the drawn image.
The present application provides a multi-layer overlay display method including: detecting a touch track input by the user; when the display is in a second rotation state, drawing a response track on a backup image of an original image according to the touch coordinates, in a first rotation state, corresponding to the touch track, where the original image is the image, in the first rotation state, corresponding to the image displayed on the display before the touch track was detected; and updating the image displayed on the display according to the drawn image.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to one or more embodiments of the present application;
FIG. 2 is a hardware configuration block diagram of the control apparatus 100 according to one or more embodiments of the present application;
FIG. 3 is a hardware configuration block diagram of the display device 200 according to one or more embodiments of the present application;
FIG. 4 is a schematic diagram of the software configuration in the display device 200 according to one or more embodiments of the present application;
FIG. 5 is a schematic diagram of the icon control interface display of applications in the display device 200 according to one or more embodiments of the present application;
FIG. 6A is a schematic interface diagram of an electronic whiteboard application according to one or more embodiments of the present application;
FIG. 6B is a schematic diagram of layer superposition according to one or more embodiments of the present application;
FIGS. 7A-7B are schematic diagrams of geometric figure drawing according to one or more embodiments of the present application;
FIG. 8 is a flowchart of geometric figure recognition according to one or more embodiments of the present application;
FIG. 9 is a schematic diagram of geometric figure drawing according to one or more embodiments of the present application;
FIG. 10 is a schematic diagram of extreme points according to one or more embodiments of the present application;
FIGS. 11-12 are flowcharts of geometric figure recognition according to one or more embodiments of the present application;
FIGS. 13-15 are schematic diagrams of multi-layer superposition according to one or more embodiments of the present application;
FIG. 16 is a schematic diagram of coordinate-system conversion according to one or more embodiments of the present application;
FIG. 17 is a schematic diagram of a display device in the portrait state according to one or more embodiments of the present application;
FIG. 18 is a schematic diagram of display rotation according to one or more embodiments of the present application;
FIGS. 19-20 are schematic interface diagrams of an electronic whiteboard application according to one or more embodiments of the present application.
Detailed Description
To make the purpose, implementations, and advantages of the present application clearer, the exemplary implementations of the present application will be described clearly and completely below with reference to the drawings of the exemplary embodiments. Obviously, the described exemplary embodiments are only part of the embodiments of the present application, not all of them.
All other embodiments obtained by those of ordinary skill in the art based on the exemplary embodiments described in the present application without creative effort fall within the protection scope of the appended claims. In addition, although the disclosure herein is introduced through one or several exemplary examples, it should be understood that each aspect of the disclosure may also separately constitute a complete implementation. It should be noted that the brief explanations of terms in the present application are only intended to facilitate understanding of the implementations described next, not to limit the implementations of the present application; unless otherwise stated, these terms should be understood according to their ordinary and usual meanings.
FIG. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to one or more embodiments of the present application. As shown in FIG. 1, the user can operate the display device 200 through a mobile terminal 300 and the control apparatus 100. The control apparatus 100 may be a remote control; communication between the remote control and the display device includes infrared protocol communication, Bluetooth protocol communication, and wireless or other wired methods to control the display device 200. The user can control the display device 200 by inputting user instructions through buttons on the remote control, voice input, control panel input, and so on. In some embodiments, mobile terminals, tablets, computers, laptops, and other smart devices can also be used to control the display device 200.
In some embodiments, software applications may be installed on the mobile terminal 300 and the display device 200 to achieve connection and communication through a network communication protocol, realizing one-to-one control operation and data communication. Audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200 for synchronous display. The display device 200 also communicates data with a server 400 through multiple communication methods, and may be allowed to connect through a local area network (LAN), a wireless local area network (WLAN), and other networks. The server 400 can provide various content and interactions to the display device 200. The display device 200 may be a liquid crystal display, an OLED display, or a projection display device; in addition to the broadcast-receiving TV function, it may additionally provide smart network TV functions with computer support.
FIG. 2 exemplarily shows a configuration block diagram of the control apparatus 100 according to exemplary embodiments. As shown in FIG. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 can receive the user's input operation instructions and convert them into instructions that the display device 200 can recognize and respond to, acting as an intermediary for interaction between the user and the display device 200. The communication interface 130 is used for external communication and contains at least one of a WIFI chip, a Bluetooth module, NFC, or alternative modules. The user input/output interface 140 contains at least one of a microphone, a touchpad, sensors, buttons, or alternative modules.
FIG. 3 shows a hardware configuration block diagram of the display device 200 according to exemplary embodiments. As shown in FIG. 3, the display device 200 includes at least one of a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280. The controller includes a central processing unit, a video processor, an audio processor, a graphics processor, RAM, ROM, and first to n-th interfaces for input/output. The display 260 may be at least one of a liquid crystal display, an OLED display, a touch display, and a projection display, or may be a projection apparatus with a projection screen. The tuner-demodulator 210 receives broadcast TV signals by wired or wireless means and demodulates audio/video signals, such as EPG data signals, from multiple wireless or wired broadcast TV signals. The detector 230 is used to collect signals from the external environment or for interaction with the outside. The controller 250 and the tuner-demodulator 210 may be located in different separate devices; that is, the tuner-demodulator 210 may also be in an external device of the main device where the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory; the controller 250 controls the overall operation of the display device 200. The user may input a user command through a graphical user interface (GUI) displayed on the display 260, in which case the user input interface receives the command through the GUI; alternatively, the user may input a command through a specific sound or gesture, in which case the user input interface recognizes the sound or gesture through a sensor to receive the command.
In some embodiments, a "user interface" is a medium interface for interaction and information exchange between an application or operating system and the user; it converts between the internal form of information and a form acceptable to the user. A commonly used form of user interface is the graphical user interface (GUI), a user interface related to computer operations that is displayed graphically. It can consist of interface elements such as icons, windows, and controls displayed on the screen of an electronic device, where controls may include at least one of visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
FIG. 4 is a schematic diagram of the software configuration in the display device 200 according to one or more embodiments of the present application. As shown in FIG. 4, the system is divided into four layers, from top to bottom: the Applications layer ("application layer"), the Application Framework layer ("framework layer"), the Android runtime and system library layer ("system runtime layer"), and the kernel layer. The kernel layer contains at least one of the following drivers: audio driver, display driver, Bluetooth driver, camera driver, WIFI driver, USB driver, HDMI driver, sensor drivers (such as fingerprint, temperature, and pressure sensors), power driver, and so on.
FIG. 5 is a schematic diagram of the icon control interface display of applications in the display device 200 according to one or more embodiments of the present application. As shown in FIG. 5, the application layer contains at least one application whose corresponding icon control can be displayed on the display, such as a live TV application icon control, a video-on-demand application icon control, a media center application icon control, an application center icon control, and a game application icon control. The live TV application can provide live TV through different signal sources. The video-on-demand application can provide videos from different storage sources; unlike live TV, video-on-demand provides video display from certain storage sources. The media center application can provide playback of various multimedia content. The application center can store various applications.
In some embodiments, the display device may install an electronic whiteboard application; on its application interface the user can write, draw lines, and so on, and the display device can generate touch tracks according to the user's touch actions to realize whiteboard demonstration or entertainment functions. FIG. 6A is a schematic interface diagram of an electronic whiteboard application according to one or more embodiments of the present application. Referring to FIG. 6A, the application interface of the electronic whiteboard may be provided with a toolbar area T and a drawing area D, where the toolbar area T can display multiple drawing controls, such as a drawing color control, a delete control, an undo control, and a share control, and the drawing area D can be a rectangular area in which the user draws figures. In some embodiments, the area other than the toolbar area T may serve as the drawing area D; alternatively, the drawing area D may be a smaller area within the area other than the toolbar area T, in which case the drawing area D may display a border to prompt the user to draw inside it.
<Introduction to Layers>
To achieve a real-time display effect, the display device 200 may display the hand-drawing process by superimposing multiple layers. Typically, the display device 200 can use one layer to display, in real time, the sliding touch-action track corresponding to the user's hand drawing, and another layer to display the demonstration whiteboard interface; the picture finally presented on the display 260 is the superposition of these two layers. For ease of distinction, in the embodiments of the present application, the layer used to display the touch track pattern in real time is called the first layer, and the layer used to display the whiteboard interface is called the second layer. Obviously, to present the final picture, the display device 200 can present not only the above two layers but also other layers for showing different picture content. FIG. 6B is a schematic diagram of layer superposition according to one or more embodiments of the present application. As shown in FIG. 6B, the display device 200 may include three layers: the first layer, a Group of Pictures (GOP) layer; the second layer, an on-screen display (OSD) layer; and the third layer, a video layer. The GOP layer, also called the GOP2 layer or acceleration layer, can be used to display temporarily drawn content shown above the menu. The OSD layer, also called the middle layer or menu layer, is used to show the application interface, application menus, toolbars, and the like. The Video layer, also called the bottom layer, can generally be used to display the picture content corresponding to the external signal connected to the TV.
In some embodiments, a hierarchy can be set among the layers to achieve specific display effects. For example, the hierarchy of the GOP, OSD, and Video layers may be, in order: GOP layer - OSD layer - Video layer. That is, the Video layer is displayed at the bottom, showing the external signal picture; the OSD layer is displayed above the Video layer, so that the application menu can float above the external signal picture; and the GOP layer is displayed above the OSD layer, so that figures drawn by user input can be highlighted.
Since the GOP layer is used to display temporarily drawn content, the picture shown in it changes as the user's drawing actions are input. Therefore, in practice, to meet painting requirements, after one sliding touch action is completed, the display device 200 can update the drawn pattern to the OSD layer for display and continue displaying other touch track content through the GOP layer. This display method allows the pattern produced by a new painting action to cover the patterns produced by previous actions, matching the user's operating habits.
It should be noted that the patterns in the multiple layers that the display device 200 can present may be expressed in ARGB form, i.e., the traditional RGB form with transparency information, to facilitate the superposition of multiple layer pictures. For example, in a picture drawn by the user, the part drawn by the brush is the specific touch track pattern, while the other parts are fully transparent, so that the undrawn parts do not block the content of the lower layers. Based on the above layers, the display device 200 can present the final picture according to the specific pattern content and transparency in each layer.
<Drawing an Ellipse>
In some embodiments, FIGS. 7A-7B are schematic diagrams of geometric figure drawing according to one or more embodiments of the present application. Geometric figure recognition refers to the process in which the display device 200 analyzes a pattern drawn by the user and recognizes a standard geometric figure similar to the hand-drawn pattern, as shown in FIG. 7A. The pattern drawn by the user can be completed through a touch screen or through another input device 500, such as a mouse, a drawing tablet, or a motion-sensing handle. Through input actions, the user can generate a hand-drawn graphic trajectory in a designated interface, and the display device 200 then recognizes the input trajectory to determine a standard geometric figure similar to it.
To recognize standard geometric figures, the display device 200 can run a specific application to implement the input of hand-drawn graphic trajectories and the recognition of geometric figures. The standard geometric figures are a series of figure types determined according to preset recognition rules, including but not limited to polygons, circles, and ellipses. Depending on the actual application environment, different types of geometric figures can be given different recognition priorities and tolerance ranges. In some embodiments, the recognition priority can be set as "polygon > circle > ellipse", so that when the hand-drawn trajectory input by the user is close to both a polygon and an ellipse, the polygon is taken as the recognition result.
Recognition of a standard figure can analyze the characteristics of the user's hand-drawn trajectory to determine the standard geometric type corresponding to it, and then determine the parameters of the standard geometric figure from some parameters of the input trajectory, thereby generating the standard geometric figure under the corresponding parameters. For example, when the hand-drawn trajectory shows arc transitions everywhere and the variation of the arc curvature lies within a certain threshold range, the input trajectory can be recognized as possibly circular; the distance between the figure's center and each hand-drawn point in the trajectory is then measured and the average distance computed to obtain the circle's diameter, and a standard circle is generated with that diameter.
Obviously, the hand-drawn trajectory can consist of multiple hand-drawn points, each of which has a unique position coordinate according to its position in the interface. The relative positional relationship between hand-drawn points can be determined from these coordinates: for example, the relative distance between two points can be computed from their coordinates, and the orientation relationship between two points determined by comparing coordinate values. From the orientation relationships among multiple points, it can also be determined whether the points are continuous within a certain area, and characteristic information such as the arc curvature and angles of the continuous state can be further determined.
Different types of standard geometric figures have different characteristic information. For example, a polygon has multiple vertices, and the hand-drawn points at the vertices show corner-shaped features; the curvature of each part of a circular trajectory tends to be consistent; the curvature of an ellipse has corresponding variations at the positions of the major and minor axes; and so on. In practice, a feature matching table can be built into the application; after the user inputs a hand-drawn trajectory, the features recognized from it are matched against the feature list to determine the standard geometric figure corresponding to the current trajectory.
To improve the success rate of geometric figure recognition, in practice an auxiliary shape fitting the figure can also be determined from the user's hand-drawn trajectory to limit the region in which the figure is generated. For example, as shown in FIG. 7B, when recognizing an ellipse, a rectangular region can be determined from the extreme coordinate values of the hand-drawn points in each direction (x-axis and y-axis); the long side of the rectangle is taken as the major axis of the ellipse and the short side as the minor axis. Once the major and minor axes are determined, a standard ellipse pattern can be generated in the rectangular region.
However, this method only applies when the figure drawn by the user is in an upright state: for example, the user must keep the major axis of the ellipse parallel to the horizontal direction while drawing. This upright-state requirement increases the difficulty of hand drawing and severely limits the application scenarios of figure recognition. When the user needs to draw a tilted figure, the geometric figure recognized through coordinate values differs too much from the figure the user intends to input, reducing the recognition accuracy. Therefore, some embodiments of the present application provide a display device and a geometric figure recognition method that can detect the trajectory input during the user's hand-drawing demonstration and convert the hand-drawn action trajectory into a standard geometric figure.
In some embodiments, FIG. 8 is a flowchart of geometric figure recognition according to one or more embodiments of the present application, and FIG. 9 is a schematic diagram of geometric figure drawing according to one or more embodiments of the present application. As shown in FIGS. 8 and 9, the display device 200 may include a display 275 and a controller 250, and the display device 200 also has a built-in or external input device 500. The method includes the following steps:
Acquire the hand-drawn graphic trajectory input by the user.
When performing geometric figure recognition, the controller 250 of the display device 200 can acquire the hand-drawn graphic trajectory input by the user from the input device 500. The hand-drawn graphic trajectory is a data set composed of the coordinates of multiple hand-drawn points. For the display device 200, the user can input drawing actions through its built-in touch component or the external input device 500; the drawing actions produce voltage changes on the touch component or input device 500 that can be detected, transmitted, and stored, thereby detecting the hand-drawn points. The touch component or input device 500 then converts the detected hand-drawn point data into input data that the controller 250 can recognize.
Depending on the type of the input device 500, the detection of the user's drawing actions differs. For example, the touch component built into the display device 200 can form a touch screen together with the display 275, so the touch component can detect the positions of the user's touch points and thus the input hand-drawn trajectory. As another example, the input device 500 can be a peripheral such as a mouse: when the user moves the mouse, the cursor on the interface of the display device 200 moves accordingly, and the hand-drawn trajectory can then be detected by detecting mouse click events, such as pressing and releasing the left button, and detecting the cursor's positions between the two click events, determining the position data the cursor passes through.
Obviously, since inputting a drawing action is a continuous process, the user needs a certain amount of time to complete the input of the hand-drawn trajectory. Usually, for simpler figures, the input drawing action can be detected according to the start time and end time of one drawing pass. For example, when the user draws through finger touch, the drawing action starts when the finger first contacts the touch screen and ends when the finger leaves it; the coordinates of all the positions the finger passes during the contact period constitute the hand-drawn graphic trajectory input by the user.
Traverse the coordinates of each hand-drawn point in the hand-drawn graphic trajectory to obtain the first characteristic direction.
After acquiring the hand-drawn trajectory input by the user, the controller 250 can extract the coordinates of each hand-drawn point and determine the first characteristic direction by analyzing the coordinate change patterns and the relative positional relationships between coordinates. The first characteristic direction is the direction of the line connecting at least two hand-drawn points in the trajectory whose positional relationship satisfies a preset positional relationship. In some embodiments, to recognize the major axis of an ellipse, the distance between any two hand-drawn points in the trajectory can be computed to generate first distances; the first distances between all hand-drawn points are compared to find the two points with the farthest first distance D_max; and a line is drawn between those two points, so that its direction generates the first characteristic direction.
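As an illustration of this farthest-pair search, the following is a minimal Python sketch; the function name is an illustrative assumption, and points are (x, y) tuples.

```python
import math

def first_characteristic_direction(points):
    """Find the two farthest hand-drawn points (the candidate major axis)
    by pairwise comparison, and the angle of their connecting line."""
    best, d_max = None, -1.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            if d > d_max:
                d_max, best = d, (points[i], points[j])
    (x0, y0), (x1, y1) = best
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))  # vs. the horizontal
    return best, d_max, angle
```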
In other embodiments, the first characteristic direction can also be determined through the coordinate change patterns among multiple hand-drawn points. For example, in the process of recognizing a polygon, whether multiple consecutive hand-drawn points form vertices of the polygon can be determined from the coordinate change patterns among them. The specific algorithm can include: comparing the position coordinates of multiple consecutive hand-drawn points to obtain the coordinate change value between adjacent points; comparing the change values of the consecutive points, and, if the change values lie within a preset fluctuation error range, determining that the input hand-drawn points are linearly distributed; fitting the polygon's edges from the point coordinates; and extracting the slope of each edge and the slope change points, the slope change points being the polygon's vertices. The first characteristic direction is then determined from the positional relationship of the vertices; for a trapezoid, for example, the direction of the two parallel sides can be determined as the first characteristic direction.
Detect the angle between the first characteristic direction and the preset judgment direction.
After obtaining the first characteristic direction, the tilt angle of the user's hand-drawn figure can be determined from the angle between the first characteristic direction and the preset judgment direction. The preset judgment direction is a reference direction calibrated against the drawing interface and can be the horizontal direction, the vertical direction, or another specific tilt direction. For example, after the direction of an ellipse's major axis is set as the first characteristic direction, the angle between the major axis and the horizontal direction can be detected to determine the tilt angle of the hand-drawn ellipse. Likewise, for a polygon such as a trapezoid, after the base is taken as the first characteristic direction, the angle between the base and the horizontal direction can be detected to determine the tilt of the hand-drawn trapezoid.
Rotate the hand-drawn graphic trajectory by the angle so that the first characteristic direction is parallel to the preset judgment direction.
After the angle between the first characteristic direction and the preset judgment direction is detected, the hand-drawn figure can be rotated by the detected angle so that it is transformed into the upright state. For example, when the major-axis direction of the ellipse is detected to differ from the horizontal direction by 30 degrees, the hand-drawn trajectory can be rotated by 30 degrees so that the major axis becomes parallel to the horizontal direction. The rotation direction can be determined by the sign of the angle: +30 degrees means clockwise rotation, and -30 degrees means counterclockwise rotation.
The rotation origin can be determined by the center position of the figure: after the user inputs the hand-drawn trajectory, the minimum and maximum coordinate values of the hand-drawn points in the horizontal and vertical directions are determined from the point coordinates, and the center point is solved from them, as in formulas 1 and 2:
x' = (x_min + x_max) / 2    (1)
y' = (y_min + y_max) / 2    (2)
where x_min and y_min are the minimum coordinate values in the x-axis and y-axis directions, and x_max and y_max are the maximum coordinate values in the x-axis and y-axis directions.
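A minimal Python sketch of formulas 1 and 2 plus the trajectory rotation follows; the function name is an illustrative assumption.

```python
import math

def rotate_trajectory(points, angle_deg):
    """Rotate all hand-drawn points about the figure's center (formulas 1-2)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx = (min(xs) + max(xs)) / 2   # formula 1
    cy = (min(ys) + max(ys)) / 2   # formula 2
    t = math.radians(angle_deg)
    # NOTE: whether a positive t appears clockwise or counterclockwise on
    # screen depends on the y-axis direction; flip the sign of t to match
    # the convention in the text (+30 degrees = clockwise).
    return [(cx + (x - cx) * math.cos(t) - (y - cy) * math.sin(t),
             cy + (x - cx) * math.sin(t) + (y - cy) * math.cos(t))
            for x, y in points]
```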
It should be noted that, in some embodiments of the present application, while the hand-drawn figure is rotated, the coordinates of each hand-drawn point in the trajectory are also transformed to facilitate subsequent judgments.
Traverse the coordinates of each hand-drawn point in the rotated trajectory to obtain the second characteristic direction.
After rotating the hand-drawn trajectory, each point coordinate in the rotated trajectory can be traversed again to obtain the second characteristic direction, which is a direction satisfying a preset geometric relationship with the first characteristic direction. Depending on the specific figure type, the second characteristic direction can have a specific geometric relationship with the first: for example, it can be perpendicular or parallel to the first characteristic direction.
In some embodiments, the distance between two hand-drawn points in the direction perpendicular to the first characteristic direction can be computed to generate second distances; the second distances of all point pairs are compared to find the two points corresponding to the farthest second distance L_max; and a line is drawn between those two points, so that its direction generates the second characteristic direction. By determining the second characteristic direction, the minor-axis direction of the ellipse is obtained.
Similarly, in other embodiments, the coordinates of multiple consecutive hand-drawn points parallel to the first characteristic direction can be extracted and their coordinate change values perpendicular to the first characteristic direction compared; if the change values lie within a preset fluctuation interval, the direction of the line connecting the two ends of the consecutive points is determined as the second characteristic direction. Through the parallel relationship between the second and first characteristic directions, the positions of the two mutually parallel sides of a trapezoid or parallelogram can be determined.
Draw the standard geometric figure according to the first characteristic direction and the second characteristic direction.
After the first and second characteristic directions are determined, the standard geometric figure can be drawn according to them, combined with the geometric figure type determined from the characteristics of the points in the trajectory. In some embodiments, the major-axis endpoints can be located in the first direction, the major-axis endpoints being the two points corresponding to the farthest first distance; a circumscribed rectangle is generated from the second distance and the major-axis endpoints; and the standard geometric figure is generated according to the circumscribed rectangle. Thus, through the first and second characteristic directions, the endpoints of the major and minor axes can be determined respectively, the circumscribed rectangle generated, and the ellipse shape determined.
In other embodiments, the long-base endpoints can be located in the first characteristic direction and the short-base endpoints in the second characteristic direction; the polygon pattern is then drawn with the long-base and short-base endpoints as vertices. Through the two characteristic directions, the positions of the two bases of a trapezoid are determined respectively, the two legs drawn with the corresponding endpoint positions, and the trapezoid pattern drawn.
Rotate the standard geometric figure by the angle.
After the standard geometric figure is drawn, it can be rotated by the previously detected angle between the first characteristic direction and the preset judgment direction, restoring the recognized figure to the tilted state at drawing time and completing the recognition of the user's hand-drawn action.
As described above, the geometric figure recognition method provided in the above embodiments can be configured in the controller 250 of the display device 200 to recognize the hand-drawn figure when the user inputs it and convert it into a standard geometric figure for a better drawing effect. By rotating the hand-drawn trajectory, the method eliminates the interference of the figure's tilted state on the figure parameters, facilitates matching against figure recognition templates, improves recognition accuracy, and alleviates the low accuracy of traditional geometric figure recognition methods.
In the above embodiments, the first characteristic direction can be determined by comparing the distance between every two hand-drawn points and taking the direction of the line between the two farthest points. In practice, however, comparing the distances between all hand-drawn points takes a long time, so in some embodiments of the present application the first characteristic direction can also be obtained as follows (see the sketch after these steps):
Traverse the coordinate extrema of the hand-drawn points in the trajectory and locate the extreme points.
After acquiring the hand-drawn trajectory, the coordinate extrema — the minimum and maximum coordinate values in the x-axis and y-axis directions — can be determined by traversing all hand-drawn point coordinates, and the extreme points containing the minimum and maximum values located.
For example, by traversing the point coordinates, the coordinate extrema in the x-axis direction can be determined as Xmin and Xmax, with corresponding extreme points P1=(Xmin, y) and P2=(Xmax, y); likewise, the extrema in the y-axis direction are Ymin and Ymax, with extreme points P3=(x, Ymin) and P4=(x, Ymax). The located extreme points are thus the four boundary points of the hand-drawn trajectory.
Locate the endpoints according to the coordinate extrema.
After the coordinate extrema are obtained, the endpoint coordinates can be obtained by combining the minimum and maximum values in the x-axis and y-axis directions. For example, from the x-axis extrema Xmin and Xmax and the y-axis extrema Ymin and Ymax, four endpoints can be determined: P5=(Xmin, Ymin), P6=(Xmin, Ymax), P7=(Xmax, Ymin), and P8=(Xmax, Ymax).
Compute the third distance between the extreme points and the endpoints.
After the endpoint coordinates are obtained, the distance between each endpoint and each extreme point can be computed from their coordinates. For example, the distance between endpoint P5 and extreme point P1 is L51 = y - Ymin, and the distance between endpoint P5 and extreme point P3 is L53 = x - Xmin; computing the distances in turn yields eight third distances.
Compare the third distances to obtain the two endpoints closest to the extreme points.
After the third distances are computed, they can be compared to determine the two endpoints closest to the extreme points. For example, FIG. 10 is a schematic diagram of extreme points according to one or more embodiments of the present application. As shown in FIG. 10, in the left figure, endpoints P6 and P7 are closer to the extreme points than endpoints P5 and P8, so the two closest endpoints are P6 and P7; in the right figure, endpoints P5' and P8' are closer to the extreme points than P6' and P7', so the two closest endpoints are P5' and P8'.
Connect the two endpoints closest to the extreme points, so that the direction of the connecting line generates the first characteristic direction.
After the two endpoints closest to the extreme points are determined, the first characteristic direction can be determined by the line between them, and subsequent steps, such as detecting the angle between the first characteristic direction and the preset judgment direction, are executed to finally determine the standard geometric figure.
As described above, this embodiment can determine the first characteristic direction through endpoints and extreme points with far fewer distance comparisons, greatly reducing the time consumed in determining the first characteristic direction and improving the real-time response speed of the demonstration process.
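The following Python sketch is one possible reading of this faster method, assuming "closest to the extreme points" is realized as the smallest total distance from a bounding-box corner to the four extreme points; the function name and scoring rule are illustrative assumptions, not the embodiments' exact procedure.

```python
import math

def first_direction_via_extremes(points):
    """Pick the two bounding-box corners nearest the four extreme points;
    the line between them gives the first characteristic direction."""
    p1 = min(points, key=lambda p: p[0])  # Xmin extreme point
    p2 = max(points, key=lambda p: p[0])  # Xmax extreme point
    p3 = min(points, key=lambda p: p[1])  # Ymin extreme point
    p4 = max(points, key=lambda p: p[1])  # Ymax extreme point
    corners = [(p1[0], p3[1]), (p1[0], p4[1]),
               (p2[0], p3[1]), (p2[0], p4[1])]   # P5, P6, P7, P8

    def score(c):
        return sum(math.dist(c, e) for e in (p1, p2, p3, p4))

    a, b = sorted(corners, key=score)[:2]
    return a, b
```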
In some embodiments of the present application, to determine the relative positional relationship between the first and second characteristic directions, the step of acquiring the hand-drawn graphic trajectory input by the user further includes:
traversing the coordinate change patterns of the hand-drawn points in the trajectory; if the change pattern matches a preset shape pattern, executing the step of traversing the point coordinates to obtain the first characteristic direction; if the change pattern differs from the preset shape pattern, controlling the display to display the hand-drawn trajectory.
In some embodiments of the present application, the coordinate change pattern between hand-drawn points can be computed from the point coordinates in the trajectory. To traverse the coordinate change patterns, a feature recognition model can be built into the drawing application. The recognition model can contain multiple feature labels; after the hand-drawn trajectory is input into the model, the classification probability of the current trajectory relative to the feature labels can be output, determining whether the coordinate change pattern matches the preset shape pattern.
When the coordinate change pattern matches the preset shape pattern, the hand-drawn figure input by the user is determined to be a recognizable standard geometric figure, so the step of traversing the point coordinates to obtain the first characteristic direction is executed, and recognition is completed according to the method of the above embodiments. When the pattern differs, the hand-drawn figure may be something more complex, such as written text, so the display can be controlled to show the hand-drawn trajectory in real time to ensure a normal demonstration effect.
Thus, in some embodiments of the present application, by traversing the coordinate change patterns of the hand-drawn points, the trajectory input by the user can be detected in real time during a normal demonstration: geometric recognition is performed when the trajectory matches the preset shape pattern, while the user's drawn pattern is still displayed when it does not, realizing the recognition function while preserving the normal demonstration effect.
In some application scenarios, when the user draws by hand without a suitable frame of reference, a geometric figure intended to be upright may not come out perfectly horizontal and vertical but tilted. For such scenarios, an automatic correction program can adjust the recognized figure to the upright state. In some embodiments of the present application, the step of rotating the standard geometric figure by the angle further includes:
comparing the angle with a preset angle threshold; if the angle is less than or equal to the preset angle threshold, controlling the display to display the generated standard geometric figure; if the angle is greater than the angle threshold, reversely rotating the standard geometric figure by the angle, the reverse rotation direction being opposite to the rotation performed on the hand-drawn trajectory; and controlling the display to display the reversely rotated standard geometric figure.
To automatically correct the figure, after the standard geometric figure is drawn, its tilt state can be detected by comparing the angle between the first characteristic direction and the preset judgment direction against the preset angle threshold.
When the tilt is small, i.e., the angle is less than or equal to the preset threshold, the generated standard geometric figure can be displayed directly, showing it in the upright state. When the tilt is large, i.e., the angle exceeds the threshold, the figure drawn by the user is determined to be intentionally tilted, so the step of reversely rotating the standard geometric figure by the angle is executed; the reverse rotation direction is obviously opposite to the rotation performed on the hand-drawn trajectory.
It should be noted that the upright state in some embodiments of the present application can include an upright state relative to the horizontal direction and one relative to the vertical direction. In practice, the angle between the first characteristic direction and the horizontal direction and the angle between the first characteristic direction and the vertical direction can be detected separately, and the smaller of the two compared with the preset threshold to determine whether the drawn figure is upright.
For example, if the user draws an ellipse, the angle between its major axis and the horizontal or vertical direction is judged: when the angle to the horizontal direction is smaller than a certain threshold (such as 15 degrees), the recognized ellipse is adjusted to be parallel to the horizontal direction; when the angle to the vertical direction is smaller than the threshold, it is adjusted to be parallel to the vertical direction.
As another example, if a polygon such as a rectangle, parallelogram, or trapezoid is recognized, the angle between one of its parallel sides and the horizontal or vertical direction is judged: if the angle to the horizontal direction is smaller than a certain threshold (15 degrees), the polygon is adjusted so that the parallel side is parallel to the horizontal direction; if the angle to the vertical direction is smaller than the threshold, the parallel side is made parallel to the vertical direction.
Obviously, the above automatic adjustment can be enabled or disabled as needed. In some embodiments of the present application, before the step of comparing the angle with the preset angle threshold, the method further includes:
detecting the switch state of the automatic angle adjustment switch;
if the switch state is on, executing the step of comparing the angle with the preset angle threshold;
if the switch state is off, executing the step of reversely rotating the standard geometric figure by the angle.
In some embodiments of the present application, the automatic angle adjustment switch can be implemented in the application through a specific interactive UI or a specific settings program. For example, a switch button indicating whether automatic angle adjustment is on or off can be displayed in the drawing interface or the settings interface, and the user can toggle its state by clicking, sliding, checking, and so on.
For example, an automatic angle adjustment switch for geometric figures can be added to the drawing interface: if the user turns it on, the figure's angle is adjusted automatically during recognition; if the user turns it off, no automatic angle adjustment is performed.
In some embodiments of the present application, to present a better demonstration effect, the geometric figure recognition method may further include the following steps:
acquiring the control instruction input by the user for forming the hand-drawn graphic trajectory;
in response to the control instruction, controlling the display to show the hand-drawn trajectory in real time;
after the step of rotating the standard geometric figure by the angle, controlling the display to cancel the display of the hand-drawn trajectory and to display the standard geometric figure.
In practice, the display device 200 can show the hand-drawn trajectory in real time according to the user's control instructions and, after the standard geometric figure is recognized, cancel the display of the trajectory and show the standard figure at the corresponding position, adapting to the user's hand-drawn input.
Based on the above geometric figure recognition method, FIGS. 11-12 are flowcharts of geometric figure recognition according to one or more embodiments of the present application. As shown in FIG. 11, some embodiments of the present application also provide a display device 200 including a display 275, an input/output interface 255, and a controller 250, where the display 275 is configured to display a user interface, the input/output interface 255 is configured to connect the input device 500, and the controller 250 is configured to execute the following program steps:
acquiring, through the input/output interface 255, the hand-drawn graphic trajectory input by the user;
generating a standard geometric figure from the hand-drawn graphic trajectory.
The standard geometric figure has the same tilt angle as the hand-drawn trajectory; it is drawn from the rotated hand-drawn trajectory and generated after reverse rotation.
As described above, the display device 200 provided in this embodiment can connect to the input device 500 through the input/output interface 255, so that the user can interact through the input device 500 and input the hand-drawn graphic trajectory, from which the controller 250 generates the standard geometric figure. Specifically, the controller 250 determines the first characteristic direction by traversing the coordinates of each hand-drawn point in the trajectory, rotates the hand-drawn figure according to the angle between the first characteristic direction and the preset judgment direction to determine the second characteristic direction, draws the standard geometric figure according to the two characteristic directions, and finally adapts it to the position of the hand-drawn figure through rotation. By rotating the hand-drawn trajectory, the display device eliminates the interference of the figure's tilted state on the figure parameters, improves recognition accuracy, and alleviates the low accuracy of traditional geometric figure recognition methods.
As shown in FIG. 12, in some embodiments a display device 200 is also provided, including a display 275, a touch component, and a controller, where the display 275 is configured to display a user interface, the touch component is configured to acquire the user's touch input, and the controller 250 is configured to execute the following program steps:
acquiring, through the touch component, the hand-drawn graphic trajectory input by the user;
generating a standard geometric figure from the hand-drawn graphic trajectory.
The standard geometric figure has the same tilt angle as the hand-drawn trajectory; it is drawn from the rotated hand-drawn trajectory and generated after reverse rotation.
As described above, the display device 200 provided in this embodiment can detect user input through the built-in touch component to acquire the hand-drawn graphic trajectory. The controller then generates a standard geometric figure from the trajectory: it determines the first characteristic direction from the input trajectory, determines the second characteristic direction after rotation, and draws the standard geometric figure according to the first and second characteristic directions. Through the built-in touch component, the display device 200 can cooperate with the display 275 to form a touch screen, which is convenient for user input; by rotating the hand-drawn figure, the influence of the tilted state on the recognition process is alleviated and the recognition accuracy is improved.
<Layer Overlay>
In some embodiments, since each layer displays different content, the patterns in the layers can have different picture resolutions. For example, if the resolution of the GOP layer is 2K while that of the OSD and video layers is 4K, the resolution difference makes it difficult to align the patterns in the layers during superposition, causing display deviations or errors. To allow superimposed display when layers differ in resolution, an interpolation operation can be performed on the pattern in the lower-resolution layer to raise that layer's picture resolution. For example, FIGS. 13-15 are schematic diagrams of multi-layer superposition according to one or more embodiments of the present application; as shown in FIG. 13, when the GOP2, OSD, and Video layers are superimposed, since the GOP2 layer is 2K while the OSD and Video layers are 4K, the GOP2 layer must first be raised to 4K through the interpolation algorithm and then superimposed with the other two layers.
The interpolation operation is an image interpolation algorithm that computes the pixel content to be inserted from multiple adjacent pixels in the image, raising the picture's resolution. However, since the superimposed layers include transparency information and different layers are often set with different transparency, during interpolation the content of adjacent pixels is affected by the transparency, and the edge positions of the drawn pattern show display errors after the interpolation algorithm.
Taking the electronic whiteboard application of the display device 200 as an example, the writing process is generally displayed on the GOP layer, the drawn lines after writing are displayed on the OSD layer, and the actual whiteboard interface displays the superposition of the GOP and OSD layers. During superposition, if the layers differ in resolution, the lower-resolution pattern is generally enlarged to the higher resolution by interpolation and then superimposed. When the GOP2 layer (2K) is to be superimposed with the OSD layer (4K), GOP2 must first be raised to 4K, which requires an interpolation algorithm on the pixels. If the GOP2 background is transparent (i.e., the background color is 0x00000000), the line color and the transparent color are interpolated at the borders of the drawn line; since the transparent color contributes nothing to the interpolation algorithm, aliasing appears after switching from 2K to 4K. If the color value 000000 of the transparent color is considered instead, the interpolation mixes the line color with transparent black, yielding semi-transparent black after interpolation, which appears as a black fringe at the borders of the drawn line.
To improve the above edge display errors of the touch track pattern, some embodiments of the present application provide a multi-layer overlay display method applicable to a display device 200 including a display 260, a touch component 276, and a controller 250, where the touch component 276 is configured to detect the touch track input by the user. As shown in FIGS. 14 and 15, the multi-layer overlay display method includes the following steps:
Acquire the touch track pattern in the first layer and the background pattern in the second layer.
The first layer is used to display the touch track pattern, and the second layer is used to display interface content such as the application interface, application menus, and toolbars; the second layer is therefore the layer one level below the first layer. For example, the first layer is the GOP2 layer and the second layer is the OSD layer.
The user can click an application icon in the application launching interface to start the corresponding application. If the started application can use the first layer, the application interface can be displayed in the second layer, while the touch component 276 detects the user's touch track in real time and the touch track pattern is presented in the first layer according to the input actions. In the embodiments of the present application, the content shown in the second layer includes not only application content such as the interface, menus, and toolbars, but also the painting content synchronized to the second layer after a touch action finishes; for ease of description, the application interface content displayed in the second layer is called the background pattern.
After acquiring the touch track pattern in the first layer and the background pattern in the second layer, the controller 250 can also perform the interpolation algorithm on the touch track pattern according to the background pattern, converting it into a converted pattern whose resolution equals that of the background pattern.
The interpolation algorithm is used to change the resolution of the touch track pattern and, depending on the required image quality, can take different forms such as nearest-neighbor, bilinear, bicubic, or directional interpolation. Taking neighbor interpolation as an example, when a 2K image is to be transformed into a 4K image, the pixel values in the 2K image can be traversed and the average of two adjacent pixels computed to obtain the value of the pixel to be inserted. That is, for two adjacent pixels (0, 255, 0) and (255, 255, 255), the value is computed per RGB channel: the inserted R value is (0+255)/2 = 128, the inserted G value is (255+255)/2 = 255, and the inserted B value is (0+255)/2 = 128.
When executing the interpolation algorithm, pixel image data can be extracted from the edge of the touch track pattern and from the nearby positions of the background pattern, so that the image data of the interpolated pixels is computed from the pixel data extracted from both patterns.
For example, after the touch track pattern is acquired, its color can be extracted to obtain the image data (192, 0, 255, 0), i.e., the user hand-painted a pure green figure with 75% opacity, while color extraction on the background pattern yields (255, 255, 255, 255), a pure white interface. The interpolated pixel data can therefore be computed from the extracted image data: the transparency channel stays at 192 (75% opacity), the inserted R value is (0+255)/2 = 128, the inserted G value is (255+255)/2 = 255, and the inserted B value is (0+255)/2 = 128, i.e., the interpolated pixel is (192, 128, 255, 128); when executing the interpolation algorithm to raise the resolution, pixels of (192, 128, 255, 128) are inserted at the edge of the touch track pattern.
Superimpose the converted pattern and the background pattern, so as to control the display to show the superimposed result in real time.
After the interpolation operation is performed on the touch track pattern, the controller 250 can superimpose its result with the background pattern and display the superposition result on the display 260 in real time. In the above embodiments, since the interpolated pixel data is determined by both the background pattern and the touch track pattern during interpolation, no black fringes or jagged edges appear at the edges of the touch track pattern when the two layer pictures are superimposed, improving the display effect of the layer overlay process.
In the above embodiments, performing the interpolation operation on the touch track pattern requires extracting pattern data from both the first and second layers; for the picture presented in the first layer, since its pattern is generated by the user's touch input, the extraction can come directly from the user's touch operation. That is, in some embodiments, the step of acquiring the touch track pattern in the first layer further includes: receiving the touch track input by the user in real time; extracting the foreground color in response to the touch track; and presenting the touch track in the first layer according to the foreground color to generate the touch track pattern.
After the display device 200 starts an application such as the demonstration whiteboard, the user can input a touch track through touch actions; after detecting the track, the touch component 276 can send it to the controller 250, so that the controller 250 extracts the foreground color in response to the touch track.
In the embodiments of the present application, the foreground color is the brush color selected by the user during the painting demonstration. For example, the user can select a brush shape in the toolbar window of the demonstration whiteboard interface and set the foreground color to green; a subsequent sliding touch operation then forms a green track in the whiteboard interface. Therefore, to obtain the pixel data corresponding to the touch track pattern, the controller 250 can directly extract the foreground color and present the touch track in the first layer according to it, generating the touch track pattern.
While generating the touch track pattern, the controller 250 can also retain the extracted foreground color as the pixel data corresponding to the pattern; when the interpolation algorithm executes, the interpolation can then be computed directly from the foreground color data and the background color data extracted from the second layer.
For example, a transparent color picker can be set in the demonstration whiteboard program of the display device 200, and the color of the GOP layer's background part set to the picker's color, so that no boundary line appears after the line color and the picker color are interpolated.
When the brush color or the interface color in the transparent layer is uniform, the transparent picker takes the brush or interface color and sets it to full transparency, so the color at the border between the line (or interface) and the transparent layer is the interpolated semi-transparent value of the brush color, and no obvious black or other border color appears.
In some demonstration or painting processes, however, the brush may not have a fixed color: it may be a multi-color pen whose stroke presents combinations of colors along the extension of the touch track. When the user paints with such a pen and the controller 250 uses the foreground color as the pattern's pixel data, the extracted pixel data will not match the actual touch track pattern, affecting the interpolation result. Therefore, in some embodiments, the step of performing the interpolation operation on the touch track pattern according to the background pattern further includes: first extracting the border color and border position of the touch track pattern; then extracting, from the second layer, the background color in the area associated with the border position; and solving the interpolation result from the border color and background color to perform the interpolation on the touch track pattern.
While the first layer presents the touch track pattern, the controller 250 can extract its border by running an image analysis program, obtaining the border color and the positions of the border pixels. The image border can be determined by traversing all pixels and evaluating the color-value difference between two adjacent pixels: where the difference is large, the two adjacent pixels lie on the border of the touch track pattern. Since the first layer is displayed topmost and, for superimposed display, its non-track pixels have an opacity of 0%, the border of the touch track pattern can also be determined from the opacity.
After obtaining the border color and position, the controller 250 can also extract the background color from the second layer: if the background pattern is a solid color, the background color can be extracted from any pixel of the background pattern; otherwise the second layer must be searched according to the border position, and the color of the background-pattern pixel corresponding to the border position taken as the background color.
Obviously, the border of the touch track pattern is a two-dimensional array of pixels, so the background color extracted from the background pattern is also a two-dimensional array of pixels. The controller 250 then solves the interpolation result from the border color and background color, converting the touch track pattern into a higher-resolution converted pattern for the superposition operation between layers.
For example, for colored lines, the colors in the line are not fixed, and if the transparent picker selects only one of them, color differences still appear at the border. In this case, the picker can select the color of the OSD layer: since the layers will eventually be superimposed after the resolution is raised, the color of the next layer to be superimposed is selected. If the background color is a single color, the picker takes the fully transparent value of that color; if the background is not a single color, the picker is a two-dimensional array: for the area of the GOP layer that displays content, the color array of the corresponding position in the OSD layer is obtained, and each color in the array is taken at its fully transparent value as the picker color.
With this transparent picker as the background of the content to be displayed, during superposition the picker holds the color values of the layer beneath the border area, so the border color after interpolation is the semi-transparent value of the color to be superimposed, and no abnormal border line or border color appears after superposition.
According to the above embodiments, from the border color and position of the touch track pattern, the border pixels can be determined directly and the background colors associated with them determined at the same time, adapting to the color changes of the touch track in the first layer and of the background pattern in the second layer; thus, when the border of the touch track pattern is interpolated, an interpolation result suited to the colors of both layers is obtained, improving the image quality of the border area.
Since in practice the interpolation algorithm is only executed when the layers have different resolutions, no interpolation of the touch track pattern is needed when the resolutions match. That is, in some embodiments, the step of performing the interpolation operation on the touch track pattern according to the background pattern further includes: detecting the resolutions of the touch track pattern and the background pattern, and then executing different program steps according to the result: if the track pattern's resolution is smaller than the background's, executing the step of extracting the border color and border position of the touch track pattern; if the resolutions are equal, superimposing the touch track pattern and the background pattern directly.
The resolutions of the touch track pattern and background pattern can be obtained from the screen resolution supported by the display 260 of the display device 200 or the resolution supported by the currently running application; after detection, the two layers' resolutions can be compared and the superposition method determined from the comparison result.
When the track pattern's resolution is smaller than the background's — i.e., the content displayed in the first layer has a lower resolution than that displayed in the second layer — the lower-resolution pattern must be enlarged, i.e., the interpolation algorithm is executed on the touch track pattern, starting with the step of extracting its border color and border position.
Obviously, when executing the interpolation algorithm, the number of inserted pixels must also be determined from the background pattern's resolution: for example, if the GOP layer is 2K and the OSD layer is 4K, double the number of pixels must be inserted into the GOP layer's touch track pattern so that it is also converted into a 4K pattern.
When the track pattern's resolution equals the background's — i.e., the patterns in the first and second layers have the same resolution — no interpolation of the touch track pattern is needed, and the touch track pattern and background pattern are superimposed directly.
For different types of application program interfaces, the display device 200 can also display by superimposing more layers: besides the first and second layers, a third layer such as the video layer must also be superimposed. While the display device 200 displays the content of the external signal through the video layer, it displays the program interface through the OSD layer and completes the demonstration function through the GOP layer. In this case, not only the first layer but also the second layer has a transparency setting, so when the controller 250 extracts the background pattern from the second layer, it may extract transparent areas, affecting the interpolation result and the superposition result.
Therefore, in some embodiments, when displaying the content of a specific external signal, the step of performing the interpolation operation on the touch track pattern according to the background pattern further includes: detecting the transparency of the background pattern; according to the detection result, if the transparency of the background pattern is fully transparent or semi-transparent, acquiring the bottom-layer pattern in the third layer and then superimposing the background pattern with the bottom-layer pattern; and presenting the superimposed background pattern in the second layer.
To reduce the influence of transparent areas in the second layer on the interpolation result, before the interpolation algorithm is executed, the transparency of the background pattern can first be detected to determine whether the background pattern in the second layer is a fully transparent or semi-transparent pattern. The specific detection can traverse the opacity value of each pixel in the background pattern: if there are pixels or areas with an opacity of 0, or the ratio of opacity-0 pixels to the total number of pixels exceeds a set value, the transparency of the background pattern is determined to be fully transparent or semi-transparent.
When the background pattern is fully or semi-transparent, the interpolation algorithm is determined to be affected by the transparent pattern in the second layer, with border defects appearing in part or in whole. In this case, the second and third layers can be superimposed first and the pattern in the first layer interpolated afterwards, the third layer being the layer one level below the second layer. For example, when the pattern displayed in the OSD layer is detected to be transparent or semi-transparent, the bottom-layer pattern shown in the video layer can be extracted and superimposed with the background pattern of the second layer, eliminating the influence of the transparent areas in the second layer's background pattern so that no transparent color is extracted when the background color is later taken from the second layer, preserving the display effect of the touch track border.
Similarly, for the multi-layer case, the resolutions of the second and third layers can also be detected before the touch track pattern is interpolated, so that the resolutions are made consistent before superposition. That is, in the step of superimposing the background pattern and the bottom-layer pattern, their resolutions may also be detected: if the background pattern's resolution is smaller than the bottom-layer pattern's, the bottom-layer color is extracted in the third layer, the interpolation algorithm executed on the background pattern according to the bottom-layer color, and the interpolated background pattern superimposed with the bottom-layer pattern.
It should be noted that, since the third layer, as the bottom layer, can display the picture content of an external signal, the display device 200 may be unable to obtain the bottom-layer pattern directly; therefore, when extracting colors from the bottom-layer pattern, the video layer can first be captured, and after the screenshot is obtained, the bottom-layer color extracted from the screenshot image.
Thus, in the above embodiments, the background pattern in the second layer can be processed before the image in the first layer is interpolated and displayed as an overlay, so that a valid background color can always be extracted from the background pattern displayed in the second layer; when the interpolation algorithm runs on the first layer, the border of the touch track pattern can be interpolated reasonably and border display defects alleviated.
Based on the above embodiments, the following effects are obtained in the actual layer superposition: for example, if the OSD layer is full-screen non-transparent, after stacking, the OSD layer completely covers the Video layer, and the user sees the content of the OSD and GOP2 layers; if the OSD layer has transparency, the superimposed effect combines the GOP2, OSD, and Video layers.
Based on the multi-layer overlay display method provided in the above embodiments, some embodiments of the present application also provide a display device 200 including: a display 260, a touch component 276, and a controller 250, where the display 260 is configured to display a user interface, the touch component 276 is configured to detect the touch track input by the user, and the controller 250 is configured to execute the following program steps:
acquiring the touch track pattern in the first layer and the background pattern in the second layer, the second layer being the layer one level below the first layer;
performing, according to the background pattern, an interpolation operation on the touch track pattern to generate a converted pattern whose resolution equals that of the background pattern;
superimposing the converted pattern and the background pattern, so as to control the display to show the superimposed result in real time.
As described above, after acquiring the touch track pattern, the display device 200 provided in the above embodiments can perform the interpolation operation on the touch track pattern in the first layer according to the background pattern in the second layer, raising the resolution of the touch track pattern, and finally superimpose the converted pattern with the background pattern and display the result on the display in real time. By performing the interpolation operation according to the background pattern, the display device can alleviate the influence of the first layer's transparency on the result of the edge interpolation of the touch track pattern, mitigate the jagged or black edges that appear when touch track patterns are superimposed, and improve the real-time display effect.
<Rotating TV>
In some embodiments, the electronic whiteboard application can be installed on a display device with touch functions, and such a display device can also be a rotating TV. The display device can be provided with a base and a rotating bracket; the base can be fixed to the wall, and the display can rotate around the base in the vertical plane through the rotating bracket.
In some embodiments, the display of the display device can have multiple fixed rotation states, such as a first rotation state, a second rotation state, and a third rotation state, where the first rotation state can be a landscape state, the second rotation state a portrait state, and the third rotation state a tilted state, such as a state at 45 degrees from the horizontal plane. The display device can rotate from one rotation state to another.
In some embodiments, the fixed rotation states of the display device may also include only the first and second rotation states.
In some embodiments, while the display rotates, the user can press a preset button to pause the rotation, so that the display device stays at the rotation angle the user requires; for example, the preset button can be the OK key on the remote control.
However, when the display device is a rotating TV, after the display rotates, the GOP2 layer re-establishes its coordinate system according to the current rotation state. In some embodiments, FIG. 16 is a schematic diagram of coordinate-system conversion according to one or more embodiments of the present application. As shown in FIG. 16, the default coordinate system is the coordinate system in the landscape state, with coordinate origin A(0,0); after the display rotates to the portrait state, the new coordinate system is that of the portrait state, with coordinate origin A1(0,0), and the original origin A(0,0) has coordinates A0(1080,0) in the new coordinate system. After the user performs a touch operation, the touch point coordinates obtained by the TV are coordinates in the new coordinate system; however, if the TV still updates the image according to the default coordinate system, the touch track generated after the display rotates will not match the user's touch operation. To solve this technical problem, an embodiment of the present application provides a multi-layer overlay display method that ensures, through coordinate conversion, that the image displayed by the display device corresponds to the user's touch operation.
在一些实施例中,在接收用户输入的触摸轨迹之前,显示设备的控制器可根据当前的旋转状态为非横屏状态,如竖屏状态或倾斜状态,将需要显示的界面的多个图层分别调整至显示方向与第一旋转状态相对应,然后再将调整后的多个图层进行合成,得到原始图像。
在一些实施例中,显示设备将一个图层调整至显示方向与第一旋转状态相对应的方法可为:首先,根据显示器当前的旋转状态,将该图层进行旋转,将该图层旋转为横屏图层,例如,根据显示器当前的旋转状态为竖屏状态,将该图层逆时针旋转90度;然后,将该图层内的图案瞬时针旋转90度,使图案的显示方向为横屏方向。
In some embodiments, before the user inputs a touch trajectory, the content of the GOP2 layer may be empty. The controller can store the original image in a graphics buffer, copy the original image to obtain a backup image, and move the backup image to the native (local service) layer of the system, so that after the user inputs a touch trajectory, a response trajectory can be drawn on the backup image according to the touch trajectory. Finally, the original image in the graphics buffer is updated according to the drawn image, so that the display device updates the image shown on the display according to the image in the graphics buffer.
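The backup-and-refresh flow described above can be sketched as follows (structure and naming are ours, not prescribed by the patent):

    import numpy as np

    class WhiteboardBuffers:
        def __init__(self, original: np.ndarray):
            self.graphics_buffer = original    # image the display is refreshed from
            self.backup = original.copy()      # drawing surface held in the native layer

        def on_touch(self, draw_response):
            # Draw the response trajectory onto the backup image ...
            draw_response(self.backup)
            # ... then update the original image in the graphics buffer from it.
            self.graphics_buffer = self.backup.copy()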
In some embodiments, to avoid inconsistency between the response trajectory and the user's touch action, after the user inputs a touch trajectory, the display device can obtain the coordinates of the touch points from the touch trajectory. When the display device is in the portrait state, the coordinates of the touch points are coordinates in the portrait coordinate system; for example, the coordinates of one touch point can be M(X1, Y1).
In some embodiments, when the display is in the portrait state, rotating the display 90 degrees counterclockwise converts it to the landscape state. According to this rotation relationship, the relationship between coordinates in the portrait coordinate system and coordinates in the landscape coordinate system is given by formulas (3) and (4):
X1 = h - y     (3)
Y1 = x     (4)
where M(X1, Y1) is a coordinate in the portrait coordinate system, m(x, y) is the coordinate in the landscape coordinate system corresponding to M(X1, Y1), and h is the length of the vertical coordinate axis in the landscape state; at a landscape resolution of 1920*1080, h = 1080. From formulas (3) and (4), the formulas for converting touch coordinates in the touch trajectory into touch coordinates in the landscape state are (5) and (6):
x = Y1     (5)
y = h - X1     (6)
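As a check, formulas (5) and (6) can be written as a small conversion routine (a sketch of ours; h defaults to 1080 for a 1920*1080 landscape resolution):

    def portrait_to_landscape(X1: float, Y1: float, h: float = 1080.0):
        """Map a touch point from the portrait coordinate system to landscape."""
        return Y1, h - X1   # formulas (5) and (6)

For example, the point that reads (1080, 0) in the portrait system maps back to the landscape origin (0, 0), matching the relationship between A(0,0) and A0(1080,0) described above.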
In some embodiments, the backup image in the graphics buffer is an image in the landscape coordinate system. After the touch coordinates in the portrait state are converted into touch coordinates in the landscape state, the response trajectory can be drawn on the backup image according to the landscape touch coordinates to obtain the response image of the GOP2 layer, and the response image is superimposed with the backup image to obtain the drawn image.
In some embodiments, the user operation region in the drawn image can also be obtained from the coordinate range of the response image, and the to-be-updated region of the original image is obtained from the user operation region, where the position of the to-be-updated region on the original image can be the same as the position of the user operation region on the drawn image. After the drawn image is obtained, the drawn image is rotated to the current rotation state to obtain the image to be displayed, and the currently displayed image can be refreshed to this image. After the drawn image is obtained, the drawn image can also be taken as the original image and copied to obtain a new backup image, and the new backup image is moved to the image buffer so that a new touch trajectory can be responded to after it is received.
In some embodiments, the image at the to-be-updated region of the original image can also be replaced with the image of the user operation region to obtain a new original image, so that the electronic whiteboard application can update the image currently shown on the display to the image to be displayed. FIG. 17 is a schematic diagram of the portrait state of a display device according to one or more embodiments of the present application. As shown in FIG. 17, with the image display method in the above embodiments, after the user draws in the portrait state, the display can show a response trajectory consistent with the user's touch trajectory; for example, when the user draws the letter "A", the display device can display the letter "A" at the position where the user drew it.
In some embodiments, the user can press a preset key on the remote control to send a rotation command to the display device, and the display device starts rotating the display according to the rotation command; the controller of the display device can be configured to rotate the display 90 degrees by default and then stop. When the display device is in the landscape state and receives a rotation command, it rotates clockwise to the portrait state; when it is in the portrait state and receives a rotation command, it rotates counterclockwise to the landscape state. In some embodiments, besides the landscape state and the portrait state, the display of the display device can also stay at other rotation angles such as 30 degrees, 60 degrees, and so on. The user can press the preset key on the remote control to send a rotation command to the display device, and the display device starts rotating the display accordingly; when the display has rotated to a certain angle, the user can press the same preset key to send a pause-rotation command, and the display device can stop rotating according to this command, keeping the display at that angle.
In some embodiments, FIG. 18 is a schematic diagram of display rotation according to one or more embodiments of the present application. As shown in FIG. 18, when rotating, the display can rotate around its center C(xcenter, ycenter); after rotating to an angle θ, the user can input a pause-rotation command to the display device so that it stays at the angle θ. In FIG. 18, the point A(0,0) of the display becomes A2 after rotation; at the angle θ, the top-left vertex of the rotated image is A4, the top-right vertex is A2, the bottom-left vertex is A6, and the bottom-right vertex is A5.
In some embodiments, after the display stops rotating, the controller of the display device can calculate the width and height of the rotated image from the current angle θ and the length of the display's diagonal. In FIG. 18, the maximum width of the rotated image is the horizontal distance between A4 and A5, and the maximum height of the rotated image is the vertical distance between A2 and A6. A new graphics buffer is generated according to the width and height of the rotated image, and a new coordinate system is established, in which the coordinate origin is A3(0,0). In this new coordinate system, assuming the top-left vertex of the rotated image is A4(left, top) and the bottom-right vertex is A5(right, bottom), then for any point n(x, y) in the landscape state, after the display rotates by the angle θ around its center C(xcenter, ycenter), the coordinate position N(x1, y1) of this point in the new coordinate system is calculated as follows:
xcenter = (width + 1)/2 + left     (7)
ycenter = (height + 1)/2 + top     (8)
x1 = (x - xcenter)cosθ - (y - ycenter)sinθ + xcenter     (9)
y1 = (x - xcenter)sinθ + (y - ycenter)cosθ + ycenter     (10)
From these, the formulas for recovering the original coordinates n(x, y) are derived as follows:
x = x1cosθ + y1sinθ + (1 - cosθ)xcenter - ycenter sinθ     (11)
y = y1cosθ - x1sinθ + (1 - cosθ)ycenter + xcenter sinθ     (12)
Using the above formulas, the touch coordinates in the touch trajectory can be converted into touch coordinates in the landscape state, the response image can then be drawn, and the image shown on the display is updated.
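The transform pair in formulas (7) through (12) can be sketched as follows (an illustration of ours; to_landscape inverts to_rotated, which can be verified by round-tripping any point):

    import math

    def rotation_center(width, height, left, top):
        xcenter = (width + 1) / 2 + left    # formula (7)
        ycenter = (height + 1) / 2 + top    # formula (8)
        return xcenter, ycenter

    def to_rotated(x, y, theta, xcenter, ycenter):
        """Formulas (9)-(10): landscape point n(x, y) -> rotated point N(x1, y1)."""
        x1 = (x - xcenter) * math.cos(theta) - (y - ycenter) * math.sin(theta) + xcenter
        y1 = (x - xcenter) * math.sin(theta) + (y - ycenter) * math.cos(theta) + ycenter
        return x1, y1

    def to_landscape(x1, y1, theta, xcenter, ycenter):
        """Formulas (11)-(12): rotated touch point back to landscape coordinates."""
        x = x1 * math.cos(theta) + y1 * math.sin(theta) \
            + (1 - math.cos(theta)) * xcenter - ycenter * math.sin(theta)
        y = y1 * math.cos(theta) - x1 * math.sin(theta) \
            + (1 - math.cos(theta)) * ycenter + xcenter * math.sin(theta)
        return x, y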
In some embodiments, the display device can be configured to respond to touch operations while rotating as well; using the above coordinate conversion for the tilted state, the response trajectory in the landscape state corresponding to the touch trajectory can be obtained, and the image to be displayed can then be generated from this response trajectory.
In some embodiments, the controller of the display device can also detect the rotation state of the display. During rotation, it can detect that the rotation state of the display changes from the first rotation state to the second rotation state, and when the display has rotated to an intermediate state between the first rotation state and the second rotation state, the image shown on the display can be rotated so that its display direction matches the second rotation state. For example, during rotation from the landscape state to the portrait state, the image shown on the display can be switched to the portrait image when the rotation angle reaches 45 degrees, making it easier for the user to view the display content during rotation.
In some embodiments, an application on the display may also be displayed in non-full-screen mode, such as half-screen mode. In this case, after the portrait coordinate system is converted into the landscape coordinate system, there is an offset between this landscape coordinate system and the default coordinate system of the display.
FIGS. 19-20 are schematic diagrams of the interface of an electronic whiteboard application according to one or more embodiments of the present application. As shown in FIG. 19, the electronic whiteboard application can be displayed in non-full-screen mode in the portrait state; of course, it can also be displayed in non-full-screen mode in the landscape state. In non-full-screen display, given that the offset of the electronic whiteboard application from the left boundary of the display in the landscape state is xoffset and its offset from the upper boundary of the display in the landscape state is yoffset, after the portrait coordinate system is converted into the landscape coordinate system, the offset between this landscape coordinate system and the default coordinate system of the display consists of xoffset and yoffset.
In some embodiments, if the electronic whiteboard application is displayed in non-full-screen mode, then after the touch coordinates in the touch trajectory are converted into landscape coordinates, the offsets need to be subtracted from the landscape coordinates, i.e., xoffset is subtracted from the horizontal coordinate and yoffset is subtracted from the vertical coordinate; the response image is then drawn according to the offset-corrected landscape coordinates, and the image shown on the display is updated.
In some embodiments, if the content displayed by the electronic whiteboard application lies at the boundary of the display, as in FIG. 20, boundary handling is required. Boundary handling can include left-boundary handling and upper-boundary handling. Left-boundary handling includes: after the touch coordinates in the touch trajectory are converted into landscape coordinates, if the starting coordinate of the landscape coordinates is startx and startx is less than or equal to xoffset, the image pixels to be copied span xoffset - startx, and the starting coordinate is taken as xoffset. Upper-boundary handling includes: after the touch coordinates in the touch trajectory are converted into landscape coordinates, if the starting coordinate of the landscape coordinates is starty and starty is less than or equal to yoffset, the image pixels to be copied span yoffset - starty, and the starting coordinate is taken as yoffset. The response image is drawn according to the image pixels to be copied as above, and the image shown on the display is updated.
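The offset correction and boundary handling above can be sketched as follows (our reading of the embodiment; names such as clamp_copy_start are illustrative only):

    def correct_for_window(x, y, xoffset, yoffset):
        """Shift landscape touch coordinates into a non-full-screen window."""
        return x - xoffset, y - yoffset

    def clamp_copy_start(startx, starty, xoffset, yoffset):
        """Left/upper boundary handling: report the pixel spans that fall outside
        the window (xoffset - startx, yoffset - starty) and clamp the copy start
        to the window edge."""
        skip_x = xoffset - startx if startx <= xoffset else 0
        skip_y = yoffset - starty if starty <= yoffset else 0
        return max(startx, xoffset), max(starty, yoffset), skip_x, skip_y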
As can be seen from the above embodiments, the display device synthesizes the multiple layers in advance; after obtaining a touch trajectory, it only needs to superimpose the touch trajectory onto the pre-synthesized image, without having to use the SurfaceFlinger service to synthesize the layers again after the touch trajectory is obtained, which improves image display efficiency.
For convenience of explanation, the above description has been made with reference to specific embodiments. However, the above discussion in some embodiments is not intended to be exhaustive or to limit the embodiments to the specific forms disclosed above. Many modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to better explain the principles and practical applications, thereby enabling those skilled in the art to make better use of the embodiments, as well as of various modified embodiments suited to the particular use contemplated.

Claims (11)

  1. A display apparatus, comprising:
    a display;
    an input/output interface configured to connect to an input device;
    a controller configured to:
    obtain, through the input/output interface, a hand-drawn figure trajectory input by a user;
    generate a standard geometric figure according to the hand-drawn figure trajectory, the standard geometric figure having the same tilt angle as the hand-drawn figure trajectory, wherein the standard geometric figure is drawn according to the rotated hand-drawn figure trajectory and then generated through reverse rotation.
  2. The display apparatus according to claim 1, wherein the controller is further configured to:
    in the step of generating the standard geometric figure according to the hand-drawn figure trajectory, traverse the coordinates of the hand-drawn points in the hand-drawn figure trajectory to obtain a first feature direction, the first feature direction being the direction of the line connecting at least two of the hand-drawn points in the hand-drawn figure trajectory when the positional relationship between them satisfies a preset positional relationship;
    detect the included angle between the first feature direction and a preset judgment direction;
    rotate the hand-drawn figure trajectory by the included angle so that the first feature direction is parallel to the preset judgment direction;
    traverse the coordinates of the hand-drawn points in the rotated hand-drawn figure trajectory to obtain a second feature direction, the second feature direction being a direction that satisfies a preset geometric relationship with the first feature direction;
    draw the standard geometric figure according to the first feature direction and the second feature direction;
    rotate the standard geometric figure by the included angle.
  3. The display apparatus according to claim 2, wherein the controller is further configured to:
    in the step of traversing the coordinates of the hand-drawn points in the hand-drawn figure trajectory to obtain the first feature direction, calculate the distance between any two hand-drawn points in the hand-drawn trajectory to generate first distances;
    compare the first distances between all hand-drawn points to obtain the two hand-drawn points with the greatest first distance;
    connect the two hand-drawn points with the greatest first distance, so as to generate the first feature direction from the direction of the connecting line.
  4. The display apparatus according to claim 2, wherein the controller is further configured to:
    in the step of traversing the coordinates of the hand-drawn points in the rotated hand-drawn figure trajectory to obtain the second feature direction, calculate the distance between two hand-drawn points in the hand-drawn trajectory in the direction perpendicular to the first feature direction to generate second distances;
    compare the second distances between all hand-drawn points to obtain the two hand-drawn points with the greatest second distance;
    connect the two hand-drawn points with the greatest second distance, so as to generate the second feature direction from the direction of the connecting line.
  5. The display apparatus according to claim 4, wherein the controller is further configured to:
    in the step of drawing the standard geometric figure according to the first feature direction and the second feature direction, locate long-axis endpoints in the first direction, the long-axis endpoints being the two hand-drawn points corresponding to the greatest first distance;
    generate a circumscribed rectangle according to the second distance and the long-axis endpoints;
    generate the standard geometric figure according to the circumscribed rectangle.
  6. The display apparatus according to claim 2, wherein the controller is further configured to:
    in the step of traversing the coordinates of the hand-drawn points in the hand-drawn figure trajectory to obtain the first feature direction, traverse the coordinate extrema of the hand-drawn points in the hand-drawn figure trajectory to locate extremum points;
    locate endpoints according to the coordinate extrema;
    calculate third distances between the extremum points and the endpoints;
    compare the third distances to obtain the two endpoints closest to the extremum points;
    connect the two endpoints closest to the extremum points, so as to generate the first feature direction from the direction of the connecting line.
  7. The display apparatus according to claim 2, wherein the controller is further configured to:
    in the step of rotating the standard geometric figure by the included angle, compare the included angle with a preset included-angle threshold;
    if the included angle is less than or equal to the preset included-angle threshold, control the display to display the generated standard geometric figure;
    if the included angle is greater than the preset included-angle threshold, reversely rotate the standard geometric figure by the included angle, the reverse rotation direction of the standard geometric figure being opposite to the rotation direction applied to the hand-drawn figure trajectory;
    control the display to display the reversely rotated standard geometric figure.
  8. The display apparatus according to claim 7, wherein the controller is further configured to:
    before the step of comparing the included angle with the preset included-angle threshold, detect the switch state of an automatic angle adjustment switch;
    if the switch state is on, execute the step of comparing the included angle with the preset included-angle threshold;
    if the switch state is off, execute the step of reversely rotating the standard geometric figure by the included angle.
  9. A display apparatus, comprising:
    a display;
    a touch component configured to obtain a user's touch input;
    a controller configured to:
    obtain, through the touch component, a hand-drawn figure trajectory input by a user;
    generate a standard geometric figure according to the hand-drawn figure trajectory, the standard geometric figure having the same tilt angle as the hand-drawn figure trajectory, wherein the standard geometric figure is drawn according to the rotated hand-drawn figure trajectory and then generated through reverse rotation.
  10. A geometric figure recognition method applied to a display apparatus, the display apparatus comprising a display and a controller, the display apparatus further having a built-in or external input device, the method comprising:
    obtaining a hand-drawn figure trajectory input by a user;
    generating a standard geometric figure according to the hand-drawn figure trajectory, the standard geometric figure having the same tilt angle as the hand-drawn figure trajectory, wherein the standard geometric figure is drawn according to the rotated hand-drawn figure trajectory and then generated through reverse rotation.
  11. A multi-layer superimposition display method applied to a display apparatus, the display apparatus comprising a display, a touch component and a controller, wherein the touch component is configured to detect a touch trajectory input by a user, the multi-layer superimposition display method comprising:
    obtaining a touch trajectory pattern in a first layer, and obtaining a background pattern in a second layer, the second layer being the layer immediately below the first layer;
    performing interpolation on the touch trajectory pattern according to the background pattern to generate a converted pattern, the resolution of the converted pattern being equal to the resolution of the background pattern;
    superimposing the converted pattern and the background pattern, so as to control the display to display the superimposition result in real time.
PCT/CN2021/117796 2020-10-30 2021-09-10 Display apparatus, geometric figure recognition method and multi-layer superimposition display method WO2022089043A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180066094.0A CN116324689A (zh) 2020-10-30 2021-09-10 Display apparatus, geometric figure recognition method and multi-layer superimposition display method
US18/157,324 US11984097B2 (en) 2020-10-30 2023-01-20 Display apparatus having a whiteboard application with multi-layer superimposition and display method thereof

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202011188310.2A CN112181207B (zh) 2020-10-30 2020-10-30 Display apparatus and geometric figure recognition method
CN202011188310.2 2020-10-30
CN202011528031.6 2020-12-22
CN202011528031.6A CN112672199B (zh) 2020-12-22 2020-12-22 Display apparatus and multi-layer superimposition method
CN202110171543.XA CN112799627B (zh) 2021-02-08 2021-02-08 Display apparatus and image display method
CN202110171543.X 2021-02-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/157,324 Continuation US11984097B2 (en) 2020-10-30 2023-01-20 Display apparatus having a whiteboard application with multi-layer superimposition and display method thereof

Publications (1)

Publication Number Publication Date
WO2022089043A1 true WO2022089043A1 (zh) 2022-05-05

Family

ID=81381866

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/117796 WO2022089043A1 (zh) 2020-10-30 2021-09-10 Display apparatus, geometric figure recognition method and multi-layer superimposition display method

Country Status (3)

Country Link
US (1) US11984097B2 (zh)
CN (1) CN116324689A (zh)
WO (1) WO2022089043A1 (zh)

Also Published As

Publication number Publication date
US11984097B2 (en) 2024-05-14
US20230162704A1 (en) 2023-05-25
CN116324689A (zh) 2023-06-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884773

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884773

Country of ref document: EP

Kind code of ref document: A1