CN108184054B - Preprocessing method and preprocessing device for images shot by intelligent terminal - Google Patents


Info

Publication number
CN108184054B
Authority
CN
China
Prior art keywords
renderer
intelligent terminal
module
view surface
rendering
Prior art date
Legal status
Active
Application number
CN201711458203.5A
Other languages
Chinese (zh)
Other versions
CN108184054A (en)
Inventor
肖云鹤
Current Assignee
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Chuanying Information Technology Co Ltd filed Critical Shanghai Chuanying Information Technology Co Ltd
Priority to CN201711458203.5A priority Critical patent/CN108184054B/en
Publication of CN108184054A publication Critical patent/CN108184054A/en
Application granted granted Critical
Publication of CN108184054B publication Critical patent/CN108184054B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a preprocessing method and a preprocessing device for images shot by an intelligent terminal. The preprocessing method comprises the following steps: creating a view surface instance in the intelligent terminal; creating a renderer in the intelligent terminal; associating the view surface instance with the renderer; the view surface instance acquiring a group of shot images from the driver layer of the intelligent terminal; and the renderer rendering the shot images to form a group of preview data frames. With this technical scheme, animation special effects are realized on the shooting preview interface of the intelligent terminal; the method does not occupy the main thread resources of the application program, does not affect the responsiveness of user operations, and improves the user experience.

Description

Preprocessing method and preprocessing device for images shot by intelligent terminal
Technical Field
The invention relates to the field of intelligent terminals, in particular to a preprocessing method and a preprocessing device for images shot by an intelligent terminal.
Background
The intelligent terminal may be any smart device with a photographing function, such as a smartphone, a tablet computer, or a digital camera. With the development of intelligent terminal technology, the functions integrated into intelligent terminals have become increasingly rich; in particular, the photographing function can support normal photographing, continuous (burst) photographing, self-photographing, and the like. In the prior art, the intelligent terminal also has a shooting preview function: the content currently captured by the camera of the intelligent terminal is displayed on the terminal's display interface, so that the user can see the current shooting content and adjust the framing to achieve the best shooting effect.
At present, preview display technology continuously acquires shot images from the camera at a certain frequency and displays them on the display interface. In the prior art, preprocessing of the shot images is implemented through a view-display class provided by the operating system of the intelligent terminal; for example, the SurfaceView class in the Android operating system is used to display the preview image. The SurfaceView class inherits from the View class in the Android operating system and embeds a view surface instance (a Surface) dedicated to drawing; the format and size of the view surface instance can be controlled, and the SurfaceView class controls its drawing position. This approach is simple and saves development effort, but producing a relatively complex animation special effect with the SurfaceView class is very difficult: the classes and interfaces it provides can only build animation frames in bitmap format (Bitmap), and the effects achievable that way are very simple and limited. Relatively advanced animation special effects such as Gaussian blur, color inversion, and fade-in/fade-out cannot be produced.
In addition, the common interface controls provided by the Android operating system, such as TextView, Button, and CheckBox, draw their display interfaces on the drawing surface of the host window, which means that the display interfaces of these controls are drawn in the main thread of the application. Besides drawing the display interface, the main thread must also respond to user input in time; otherwise, the system considers that the application is not responding and pops up a dialog box. In the Android operating system, if an application is unresponsive for a period of time, the system displays an Application Not Responding (ANR) dialog box to the user, who may choose to "wait" for the program to continue running or to "force close" it. Some interfaces, such as game screens, camera preview, and video playback, are complex and require efficient drawing. Their display is therefore not suitable for rendering in the main thread of the application; instead, a separate view surface instance must be generated for such complex, efficiency-demanding views, and a separate thread used to render their display interface.
In summary, although the prior art has implemented preview of the captured image, the following problems remain:
1. advanced animation special effects cannot be realized;
2. the main thread of the application program is occupied, so the user's operation experience is poor.
Therefore, how to preprocess the shot images so as to realize various animation special effects without affecting the user's operation experience is a technical problem to be solved.
Disclosure of Invention
In order to overcome the above technical defects, the invention aims to provide a preprocessing method that gives the shot images of an intelligent terminal animation effects without occupying the main thread resources of the application program.
In a first aspect of the present application, a method for preprocessing an image captured by an intelligent terminal is disclosed, which includes the following steps:
creating a view surface instance in the intelligent terminal;
creating a renderer in the intelligent terminal;
associating the view surface instance with the renderer;
the view surface instance acquires a group of shot images from a driving layer of the intelligent terminal;
and the renderer renders the shot image to form a group of preview data frames.
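The five claimed steps can be sketched as a minimal, framework-free Java model. This is an illustration only: `ViewSurface`, `Renderer`, and the integer "frames" are hypothetical stand-ins for the Android GLSurfaceView machinery described later in the specification, not the patented implementation itself.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the five claimed steps; all names are illustrative.
public class PreviewPipeline {
    interface Renderer { int render(int rawFrame); }

    static class ViewSurface {
        private Renderer renderer;
        // Step 3: associate the view surface instance with the renderer.
        void setRenderer(Renderer r) { this.renderer = r; }
        // Steps 4 and 5: take a group of captured frames and render each
        // one into a preview data frame.
        List<Integer> preprocess(List<Integer> captured) {
            List<Integer> preview = new ArrayList<>();
            for (int frame : captured) preview.add(renderer.render(frame));
            return preview;
        }
    }

    public static List<Integer> run(List<Integer> driverFrames) {
        ViewSurface surface = new ViewSurface();   // step 1: create view surface
        Renderer renderer = raw -> raw + 1000;     // step 2: create renderer (toy effect)
        surface.setRenderer(renderer);             // step 3: associate
        return surface.preprocess(driverFrames);   // steps 4 and 5
    }
}
```

The toy "effect" (adding 1000) stands in for the OpenGL rendering work; the data flow from driver frames through the associated renderer to preview frames is the point being shown.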
Preferably, before the step of creating a view surface instance in the intelligent terminal, the preprocessing method further includes:
and the intelligent terminal enters a shooting preview mode.
Preferably, after the step of rendering the captured image by the renderer to form a group of preview data frames, the preprocessing method further includes:
and sequentially displaying the preview data frames on a display interface of the intelligent terminal.
Preferably, when the renderer renders the shot image, the renderer operates in an on-demand rendering mode or a continuous rendering mode.
Preferably, the view surface instance and the renderer are run in a separate thread when the view surface instance and the renderer are created.
In a second aspect of the present application, a preprocessing apparatus for shooting an image by an intelligent terminal is disclosed, which includes:
the first creating module is used for creating a view surface example in the intelligent terminal;
the second establishing module is used for establishing a renderer in the intelligent terminal;
the association module is connected with the first creation module and the second creation module and associates the view surface instance with the renderer;
the acquisition module acquires a group of shot images from a driving layer of the intelligent terminal through the view surface example;
and the rendering module is connected with the acquisition module and renders the shot image through the renderer to form a group of preview data frames.
Preferably, the preprocessing device further comprises:
and the preview module controls the intelligent terminal to enter a shooting preview mode.
Preferably, the preprocessing device further comprises:
and the display module is connected with the rendering module and sequentially displays the preview data frames on a display interface of the intelligent terminal.
Preferably, when the rendering module renders the shot image through the renderer, the renderer operates in an on-demand rendering mode or a continuous rendering mode.
Preferably, when the first creation module and the second creation module create the view surface instance and the renderer, the view surface instance and the renderer are run in a separate thread.
Compared with the prior art, adopting the above technical scheme yields the following beneficial effects:
1. animation special effects are realized on the shooting preview interface of the intelligent terminal;
2. the main thread resources of the application program are not occupied, the responsiveness of user operations is not affected, and the user experience is improved.
Drawings
Fig. 1 is a schematic flow chart of a preprocessing method for images taken by an intelligent terminal according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart illustrating a preprocessing method for images taken by the intelligent terminal according to another preferred embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a preprocessing device for images taken by an intelligent terminal according to a preferred embodiment of the present invention;
fig. 4 is a schematic structural diagram of a preprocessing device for images taken by an intelligent terminal according to another preferred embodiment of the present invention.
Reference numerals:
10-a preprocessing device for shooting images by the intelligent terminal, 11-a first creation module, 12-a second creation module, 13-an association module, 14-an acquisition module, 15-a rendering module, 16-a preview module and 17-a display module.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
Referring to fig. 1, a schematic flow chart of a preprocessing method for an image captured by an intelligent terminal according to a preferred embodiment of the present invention is shown, where the preprocessing method includes the following steps:
s101: and creating a view surface example in the intelligent terminal.
In this embodiment, the smart terminal runs the Android operating system, an operating system issued by *** that is widely used on smart terminal devices such as smartphones and tablet computers; *** also provides rich application interfaces so that developers can perform secondary development. In this step, a view surface instance is created in the intelligent terminal. The view surface instance is an instance of the GLSurfaceView class, which is a view class that inherits from the SurfaceView class and has the following characteristics:
1> manages a surface in the CameraAp layer; this surface is a special block of memory that can be composited directly into the Android view hierarchy;
2> manages an EGL display, which enables OpenGL to render content onto said surface;
3> accepts a user-defined renderer (Renderer);
4> runs the renderer in an independent thread, separate from the UI thread;
5> supports on-demand rendering and continuous rendering.
The CameraAp layer is the layer corresponding to the application layer of the shooting function in the Android operating system. The surface embedded in GLSurfaceView is dedicated to OpenGL rendering, which is the key to realizing the technical effect of the invention. OpenGL (Open Graphics Library) defines a cross-language, cross-platform programming interface library for processing three-dimensional (and also two-dimensional) images; it is a professional graphics program interface and a powerful, convenient low-level graphics library. OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL three-dimensional graphics API designed for embedded devices such as mobile phones, PDAs, and game consoles. OpenGL ES is tailored from OpenGL, removing many features that are not absolutely necessary, such as the glBegin/glEnd immediate mode and the quadrilateral (GL_QUADS) and polygon (GL_POLYGON) primitives, and is designed specifically for various embedded systems, including game consoles, mobile phones, handheld devices, home appliances, and automobiles.
The intelligent terminal adopts OpenGL ES, and OpenGL ES provides the GLSurfaceView component to process animation effects on images. Whereas the prior art displays the preview image through the SurfaceView class, creating the instance with the GLSurfaceView class allows the powerful image processing functions of OpenGL ES to be called, realizing rich and varied animation effects.
Another advantage of the instance created by the GLSurfaceView class is that it runs in a separate thread and does not occupy the main thread of the application. A thread, sometimes called a lightweight process (LWP), is the smallest unit of program execution flow. A standard thread consists of a thread ID, a current instruction pointer (PC), a register set, and a stack. A thread is an entity within a process and is the basic unit that the system independently schedules and dispatches; a thread does not own system resources itself, only the few resources indispensable at run time, but it shares all the resources owned by the process with the other threads belonging to the same process. One thread can create and terminate another thread, and multiple threads in the same process can execute concurrently. Because threads constrain one another, their execution is discontinuous. A thread has three basic states: ready, blocked, and running. The ready state means the thread has all the conditions for running, can run logically, and is waiting for a processor; the running state means the thread holds a processor and is running; the blocked state means the thread is logically unable to run while waiting for an event (e.g., a semaphore). Every program has at least one thread; if a program has only one thread, it is the program itself. After the view surface instance is created, it runs on its own independent thread and does not occupy the resources of the application's main thread (the UI thread). The intelligent terminal can therefore respond to the application's tasks while processing the shot images, without delaying responses to user operations; compared with the prior art, in which processing the shot images on the UI thread caused response delays, the user experience is improved.
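The threading benefit described above can be demonstrated with a small, framework-free Java sketch: rendering work runs on a dedicated worker thread fed through a queue, so the submitting thread (standing in for the UI thread) is never blocked. The class and method names here are illustrative, not Android APIs.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model: a dedicated render thread consumes frames from a queue so the
// caller (the "UI thread") returns immediately from submit().
public class RenderThreadDemo {
    private final BlockingQueue<Integer> frames = new LinkedBlockingQueue<>();
    private final List<Integer> rendered = new CopyOnWriteArrayList<>();
    private final Thread worker;

    public RenderThreadDemo() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    int f = frames.take();
                    if (f < 0) return;       // negative frame = shutdown signal (toy convention)
                    rendered.add(f * 2);     // stand-in for the OpenGL rendering work
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
    }

    // Returns immediately; the UI thread is never blocked by rendering.
    public void submit(int frame) { frames.add(frame); }

    // Signal shutdown and wait for the worker to drain the queue.
    public List<Integer> finish() {
        frames.add(-1);
        try { worker.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return rendered;
    }
}
```

Because a single worker drains a FIFO queue, frames come out rendered in submission order, mirroring how the GLSurfaceView render thread decouples drawing from UI responsiveness.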
S102: and creating a renderer in the intelligent terminal.
The renderer, referred to as a Renderer in the Android operating system, is responsible for rendering images. Before the renderer is created, EGL needs to be configured. An Android device often supports multiple EGL configurations, which may use different numbers of channels, and each channel can be specified with a different bit depth; therefore the EGL configuration should be specified before the renderer starts working. The default EGL configuration in the GLSurfaceView class uses the RGB_565 pixel format with a 16-bit depth buffer, and the stencil buffer is not enabled by default. The user can adjust the EGL configuration according to the camera pixels of the intelligent terminal and the pixels of the display interface; if a different EGL configuration is to be selected, it is implemented using one of the setEGLConfigChooser methods.
When the renderer is created, it is implemented in the CameraAp layer, and a GLSurfaceView.Renderer is registered through setRenderer(GLSurfaceView.Renderer); this renderer is responsible for the actual rendering work on the image. The renderer still works within the framework of the GLSurfaceView class and exchanges data with the view surface instance. After the renderer is set, setRenderMode(int) may be used to specify whether the rendering mode is on-demand rendering or continuous rendering (the default). The renderer is invoked in a separate thread, so rendering performance is decoupled from the UI thread; that is, the renderer's work does not occupy UI thread resources and does not affect user operations.
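Android's GLSurfaceView.Renderer interface defines three callbacks: onSurfaceCreated, onSurfaceChanged, and onDrawFrame. The sketch below mirrors that contract in plain Java so it can run anywhere; `FakeSurface` is a hypothetical stand-in for the GLSurfaceView framework that would normally drive these callbacks, and string "frames" replace real GL drawing.

```java
// Plain-Java mirror of the GLSurfaceView.Renderer callback contract.
public class RendererContract {
    interface Renderer {
        void onSurfaceCreated();              // one-time GL/EGL state setup
        void onSurfaceChanged(int w, int h);  // viewport size (re)configuration
        String onDrawFrame(String input);     // per-frame rendering work
    }

    // Stand-in for the framework: calls the lifecycle methods in the
    // same order GLSurfaceView would, then draws each frame.
    static class FakeSurface {
        String drive(Renderer r, String[] frames, int w, int h) {
            r.onSurfaceCreated();
            r.onSurfaceChanged(w, h);
            StringBuilder out = new StringBuilder();
            for (String f : frames) out.append(r.onDrawFrame(f));
            return out.toString();
        }
    }
}
```

A concrete renderer only needs to implement the three callbacks; the framework owns the loop and the thread, which is exactly the division of labor described above.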
S103: associating the view surface instance with the renderer.
Only by associating the view surface instance with the renderer can processing of the image be achieved as a whole. The association between the two can be realized through an interface: for example, the view surface instance sends the image to be rendered to the renderer through the interface, and the renderer can likewise send the rendered image back to the view surface instance through the interface. The two can also exchange data by means of event listening, which can be implemented with standard Java inter-thread communication; when the renderer detects an input event or a shooting event, it starts rendering.
S104: the view surface instance obtains a set of captured images from a driver layer of the smart terminal.
When the camera of the intelligent terminal shoots an image, the optical image of the scene produced by the lens is projected onto the surface of an image sensor, converted into an electrical signal, and then converted into a digital image signal by A/D (analog-to-digital) conversion; this digital image signal can be acquired from the driver layer. The driver layer, also called the kernel, is an important layer in the Android operating system; it comprises the display driver, camera driver, flash memory driver, Binder (IPC) driver, keyboard driver, Wi-Fi driver, audio driver, and power management. Like any Linux kernel, the Android kernel is an abstraction layer between the hardware and the software stack. In this step, the view surface instance obtains a group of shot images through the camera driver, which provides an interface facing the upper layers for transmitting the shot image data. When the user takes a picture with the intelligent terminal, the camera is open and continuously shoots at a certain frequency, forming a group of shot images.
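The acquisition step can be sketched as follows. `CameraDriver` is a synthetic stand-in for the real kernel camera driver's upper-layer interface, and timestamps are simulated so the sketch runs instantly; the point is the shape of the data flow: frames delivered at a fixed interval are collected into one group.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of driver-layer acquisition: the driver delivers digitized frames
// at a fixed rate; the view surface collects them into a group.
public class FrameAcquisition {
    interface CameraDriver { int[] captureFrame(long timestampMs); }

    // Collect `count` frames spaced `intervalMs` apart (simulated timestamps).
    public static List<int[]> acquire(CameraDriver driver, int count, long intervalMs) {
        List<int[]> group = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            group.add(driver.captureFrame(i * intervalMs));
        }
        return group;
    }
}
```

At a preview rate of 24 frames per second the interval would be about 41 ms; the group produced here is what the renderer consumes in the next step.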
S105: and the renderer renders the shot image to form a group of preview data frames.
The view surface instance sends the acquired group of shot images to the renderer, and the renderer renders, i.e., redraws, the shot images. Rendering (Render) is the last step of the computer-graphics pipeline (apart from post-production) and the stage in which the image finally takes on the appearance of the 3D scene; in practical work, a model or scene must pass through a rendering program before it can be output as an image file, a video signal, or motion picture film. Rendering can give the shot image different effects, such as color gradients, background deepening, and blurring; the renderer uses the functions defined in OpenGL to achieve rich animation effects. The shot images form a group, i.e., they consist of several shot images, and each shot image corresponds to one preview data frame, so a group of preview data frames is rendered correspondingly. Each preview data frame is an image that can be directly displayed on the display interface of the intelligent terminal; if the preview data frames are played at a preset speed, the corresponding animation effect is realized.
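As a concrete (and deliberately simplified) stand-in for this rendering step, the sketch below applies a fade-in effect across a group of grayscale frames by scaling pixel values per frame. In the real pipeline this scaling would be done on the GPU via OpenGL ES; here it is plain Java so the per-frame-to-preview-frame mapping is easy to follow.

```java
// Fade-in across a group of frames: frame i is scaled by (i+1)/n, so the
// first preview frame is dim and the last is at full brightness.
public class FadeInEffect {
    // frames[i] is one grayscale image (pixel values 0..255).
    public static int[][] render(int[][] frames) {
        int n = frames.length;
        int[][] preview = new int[n][];
        for (int i = 0; i < n; i++) {
            double alpha = (i + 1) / (double) n;   // 1/n, 2/n, ..., 1.0
            preview[i] = new int[frames[i].length];
            for (int p = 0; p < frames[i].length; p++) {
                preview[i][p] = (int) Math.round(frames[i][p] * alpha);
            }
        }
        return preview;
    }
}
```

Each captured frame yields exactly one preview data frame, matching the one-to-one correspondence the text describes; playing the output at a preset rate produces the animation.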
Referring to fig. 2, which is a schematic flow chart of a preprocessing method for an image captured by an intelligent terminal according to another preferred embodiment of the present invention, before step S101', the preprocessing method further includes:
s100: and the intelligent terminal enters a shooting preview mode.
The intelligent terminal has a shooting preview mode: after the shooting function is turned on, the previewed content is displayed on the screen of the intelligent terminal, the displayed content changes as the intelligent terminal moves, and the user can determine the best shooting scene from the preview. To implement the shooting preview mode, the camera of the intelligent terminal is called at a certain frequency, and the image of the subject formed on the photosensitive element is converted into a digital image and displayed on the screen. In the invention, preprocessing is required before the shot image is displayed on the screen, namely the process from step S101' to step S105'. The shooting preview mode can be opened by the user selecting an operation button on the display interface; once opened, it triggers the shot-image preprocessing process of the subsequent steps.
After the step S105', the preprocessing method further includes:
s106: and sequentially displaying the preview data frames on a display interface of the intelligent terminal.
This step closes the loop of the shooting preview mode, i.e., the final display link. Steps S101' to S105' preprocess the original images captured by the camera and form rendered preview data frames with specific effects; this step displays those preview data frames at a specific frequency. The renderer sends the rendered preview data frames to the view surface instance, which can display them according to its built-in display method or function and can set the display rate. The frequency of displaying the preview data frames should be consistent with the shooting frequency of the camera so that the preview content is displayed in real time without stuttering; the frequency should be at least 24 frames per second.
As a further improvement of the preprocessing method, when the renderer renders the shot image, it works in an on-demand rendering mode or a continuous rendering mode. When the renderer is created, setRenderMode(int) can be used to specify whether the rendering mode is on-demand or continuous. Continuous rendering refreshes and redraws constantly; the default refresh rate is 60 FPS, i.e., one redraw every 16 ms, which consumes more of the intelligent terminal's performance and is suitable for continuous animation. On-demand rendering sets the rendering frequency or timing according to the target requirement and offers greater flexibility.
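The difference between the two modes can be modeled with a few lines of Java. The constant names below match Android's GLSurfaceView (RENDERMODE_CONTINUOUSLY and RENDERMODE_WHEN_DIRTY), but the tick loop is a simplified simulation, not the framework's actual render loop.

```java
// Toy model of the two render modes: continuous mode draws on every tick,
// on-demand ("when dirty") mode draws only after requestRender().
public class RenderModes {
    public static final int RENDERMODE_WHEN_DIRTY = 0;
    public static final int RENDERMODE_CONTINUOUSLY = 1;

    private final int mode;
    private boolean dirty = false;
    private int drawCount = 0;

    public RenderModes(int mode) { this.mode = mode; }

    // In on-demand mode, this marks the surface as needing a redraw.
    public void requestRender() { dirty = true; }

    // One iteration of the render loop; returns true if a frame was drawn.
    public boolean tick() {
        if (mode == RENDERMODE_CONTINUOUSLY || dirty) {
            dirty = false;
            drawCount++;
            return true;
        }
        return false;
    }

    public int getDrawCount() { return drawCount; }
}
```

Continuous mode pays for a redraw on every tick (every ~16 ms at 60 FPS); on-demand mode draws only when asked, which is why it is the more flexible, lower-consumption choice for infrequent updates.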
As a further improvement of the preprocessing method, when the view surface instance and the renderer are created, they run in a separate thread. The view surface instance and the renderer both belong to the framework of the GLSurfaceView class and can run in the same thread, independent of the UI thread; they do not occupy UI thread resources and do not affect the user interface's response to operations. UI refers to the user interface, which includes the various functions for displaying the interface to the user and receiving user operations.
Referring to fig. 3, a schematic structural diagram of a preprocessing device 10 for capturing images by an intelligent terminal according to a preferred embodiment of the present invention is shown, where the preprocessing device 10 includes:
a first creation module 11
The first creating module 11 creates a view surface instance in the intelligent terminal. The first creating module 11 generates an instance of the GLSurfaceView class, i.e., the view surface instance; the creation can be realized by executing an instance-creation instruction. When the first creating module 11 creates the view surface instance, the initialization settings may also be modified to realize different configurations for the performance parameters of the intelligent terminal.
A second creation module 12
The second creating module 12 creates a renderer in the smart terminal. Before the second creating module 12 creates the renderer, EGL needs to be configured: an Android device often supports multiple EGL configurations, which may use different numbers of channels, and each channel may be specified with a different bit depth. If a different EGL configuration is to be selected, it is implemented with one of the setEGLConfigChooser methods.
When the second creating module 12 creates the renderer, it is implemented in the CameraAp layer and registers a GLSurfaceView.Renderer, which is responsible for the actual rendering work on the image.
-an association module 13
The association module 13 is connected to the first creating module 11 and the second creating module 12 and associates the view surface instance with the renderer. The association module 13 realizes the association through an interface: for example, the view surface instance sends the image to be rendered to the renderer through the interface, and the renderer can likewise send the rendered image back to the view surface instance through the interface. The association module 13 also provides a listening function: the view surface instance exchanges data with the renderer by means of event listening, which can be implemented with standard Java inter-thread communication, and rendering starts when the renderer detects an input event or a shooting event.
An acquisition module 14
The acquisition module 14 acquires a group of shot images from the driving layer of the intelligent terminal through the view surface instance. The acquisition module 14 obtains the shot images through the interface provided by the camera driver, matching the shooting frequency of the camera so as to continuously acquire a group of shot images.
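The frequency-matched acquisition can be sketched as follows. This is a plain-Java model, not the camera-driver interface itself: frame content is faked as strings, and the point shown is only that the acquisition loop is paced at the camera's frame rate so a complete group of frames is collected.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of acquiring a group of frames matched to the camera's shooting
// frequency; the "frame@..." strings stand in for driver-layer images.
public class FrameAcquisition {
    static List<String> acquireGroup(int fps, int groupSize) {
        long intervalNanos = 1_000_000_000L / fps;   // spacing between frames
        List<String> frames = new ArrayList<>();
        long t = 0;
        for (int i = 0; i < groupSize; i++) {
            frames.add("frame@" + t + "ns");         // one driver-layer image
            t += intervalNanos;
        }
        return frames;
    }

    public static void main(String[] args) {
        // A 30 fps camera yields one frame every ~33.3 ms.
        System.out.println(acquireGroup(30, 3));
    }
}
```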
A rendering module 15
The rendering module 15 is connected to the acquisition module 14, and renders the shot images through the renderer to form a group of preview data frames. The rendering module 15 obtains the shot images from the acquisition module 14, and the renderer renders them to produce effects such as color gradation, background deepening, and blurring.
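The kind of per-pixel work such a renderer performs can be illustrated with a toy example. Here "background deepening" is modeled as simply darkening 8-bit luminance values; a real renderer would apply such effects on the GPU through OpenGL ES shaders, and the deepen() helper is purely illustrative.

```java
// Toy model of a rendering effect: darken ("deepen") luminance values.
public class EffectSketch {
    static int[] deepen(int[] pixels, double factor) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            // Scale each 8-bit value, clamping at zero.
            out[i] = (int) Math.max(0, Math.round(pixels[i] * factor));
        }
        return out;
    }

    public static void main(String[] args) {
        int[] frame = {200, 128, 64};            // fake luminance samples
        int[] rendered = deepen(frame, 0.5);     // halve the brightness
        System.out.println(java.util.Arrays.toString(rendered)); // [100, 64, 32]
    }
}
```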
Referring to fig. 4, a schematic structural diagram of a preprocessing device 10 for capturing images by an intelligent terminal according to another preferred embodiment of the present invention is shown, where the preprocessing device 10 further includes:
A preview module 16
The preview module 16 controls the intelligent terminal to enter a shooting preview mode. In this embodiment, the intelligent terminal has a shooting preview mode: after the shooting function is turned on, the preview module 16 displays the previewed shooting content on the screen of the intelligent terminal, and the displayed content changes as the intelligent terminal moves. The preview module 16 calls the camera of the intelligent terminal at a certain frequency; a photosensitive element converts the image of the object into a digital image, which is displayed on the screen. The preview module 16 also displays an operation button on the display interface that lets the user turn the shooting preview mode on or off; after the mode is turned on, the first creating module 11 and the second creating module 12 are called to start the preprocessing of shot images.
As a further improvement of the above preprocessing device 10, the preprocessing device 10 further includes:
A display module 17
The display module 17 is connected to the rendering module 15, and sequentially displays the preview data frames on a display interface of the intelligent terminal. The display module 17 obtains the preview data frames from the rendering module 15 and displays them at a specific frequency. The display frequency should match the shooting frequency of the camera, so that the preview content is displayed in real time without stuttering; this frequency should be at least 24 frames per second.
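The 24-frames-per-second floor mentioned above translates directly into a per-frame display budget, sketched below; anything slower than this interval appears to stutter.

```java
// Convert a frame rate into the per-frame display budget in milliseconds.
public class FrameBudget {
    static double frameIntervalMillis(int fps) {
        return 1000.0 / fps;
    }

    public static void main(String[] args) {
        // 24 fps leaves about 41.67 ms to produce and display each frame;
        // 30 fps tightens this to about 33.33 ms.
        System.out.printf("24 fps -> %.2f ms per frame%n", frameIntervalMillis(24));
        System.out.printf("30 fps -> %.2f ms per frame%n", frameIntervalMillis(30));
    }
}
```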
As a further improvement of the preprocessing device 10, when the rendering module 15 renders the shot image through the renderer, the renderer operates in an on-demand rendering mode or a continuous rendering mode. When the second creating module 12 creates the renderer, setRenderMode(int) may be used to specify whether the rendering mode is on-demand rendering (RENDERMODE_WHEN_DIRTY) or continuous rendering (RENDERMODE_CONTINUOUSLY).
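The difference between the two render modes can be modeled in plain Java with a dirty flag, as below. This is a behavioral sketch of the GLSurfaceView semantics, not the framework class itself: in continuous mode every pass of the render loop draws a frame, while in on-demand mode a frame is drawn only after requestRender() has marked the surface dirty.

```java
// Behavioral model of the two GLSurfaceView render modes.
public class RenderModeSketch {
    enum Mode { ON_DEMAND, CONTINUOUS }

    private final Mode mode;
    private boolean dirty = false;
    int framesDrawn = 0;

    RenderModeSketch(Mode mode) { this.mode = mode; }

    // Only meaningful in ON_DEMAND mode: mark the surface dirty.
    void requestRender() { dirty = true; }

    // One pass of the render loop.
    void tick() {
        if (mode == Mode.CONTINUOUS || dirty) {
            framesDrawn++;
            dirty = false;
        }
    }

    public static void main(String[] args) {
        RenderModeSketch continuous = new RenderModeSketch(Mode.CONTINUOUS);
        RenderModeSketch onDemand = new RenderModeSketch(Mode.ON_DEMAND);
        for (int i = 0; i < 3; i++) { continuous.tick(); onDemand.tick(); }
        onDemand.requestRender();
        onDemand.tick();
        // Continuous mode drew on every tick; on-demand drew once, after the request.
        System.out.println(continuous.framesDrawn + " vs " + onDemand.framesDrawn);
    }
}
```

On-demand rendering saves power for a mostly static scene; continuous rendering suits a live camera preview whose content changes every frame.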
As a further improvement of the preprocessing device 10, when the first creating module 11 and the second creating module 12 create the view surface instance and the renderer, the view surface instance and the renderer are run in an independent thread. Both belong to the framework of the GLSurfaceView class and run in the same dedicated thread, which is independent of the UI thread; they therefore do not occupy UI-thread resources and do not affect the operation response of the user interface. When the first creating module 11 and the second creating module 12 create the view surface instance and the renderer, the related system resources are allocated to them to support running in this independent thread.
It should be noted that the embodiments of the present invention have been described above by way of preferred examples and not by way of limitation, and that those skilled in the art may make modifications and variations to the embodiments described above without departing from the spirit of the invention.

Claims (6)

1. A preprocessing method for images shot by an intelligent terminal is characterized by comprising the following steps:
creating a view surface instance in the intelligent terminal;
creating a renderer in the intelligent terminal;
associating the view surface instance with the renderer;
the view surface instance acquires a group of shot images from a driving layer of the intelligent terminal;
the renderer renders the shot image to form a group of preview data frames;
when the view surface instance and the renderer are created, the view surface instance and the renderer are run in an independent thread;
and when the renderer renders the shot image, the renderer works in an on-demand rendering mode or a continuous rendering mode.
2. The preprocessing method of claim 1, wherein
before the step of creating a view surface instance in the intelligent terminal, the preprocessing method further comprises:
the intelligent terminal entering a shooting preview mode.
3. The preprocessing method of claim 2, wherein
after the step of the renderer rendering the shot image to form a group of preview data frames, the preprocessing method further comprises:
sequentially displaying the preview data frames on a display interface of the intelligent terminal.
4. A preprocessing device for images shot by an intelligent terminal, characterized by comprising:
a first creating module, which creates a view surface instance in the intelligent terminal;
a second creating module, which creates a renderer in the intelligent terminal;
an association module, which is connected to the first creating module and the second creating module and associates the view surface instance with the renderer;
an acquisition module, which acquires a group of shot images from a driving layer of the intelligent terminal through the view surface instance; and
a rendering module, which is connected to the acquisition module and renders the shot image through the renderer to form a group of preview data frames;
when the first creating module and the second creating module create the view surface instance and the renderer, the view surface instance and the renderer are operated in an independent thread;
when the rendering module renders the shot image through the renderer, the renderer works in an on-demand rendering mode or a continuous rendering mode.
5. The preprocessing device of claim 4, wherein
the preprocessing device further comprises:
a preview module, which controls the intelligent terminal to enter a shooting preview mode.
6. The preprocessing device of claim 5, wherein
the preprocessing device further comprises:
a display module, which is connected to the rendering module and sequentially displays the preview data frames on a display interface of the intelligent terminal.
CN201711458203.5A 2017-12-28 2017-12-28 Preprocessing method and preprocessing device for images shot by intelligent terminal Active CN108184054B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711458203.5A CN108184054B (en) 2017-12-28 2017-12-28 Preprocessing method and preprocessing device for images shot by intelligent terminal

Publications (2)

Publication Number Publication Date
CN108184054A CN108184054A (en) 2018-06-19
CN108184054B true CN108184054B (en) 2020-12-08

Family

ID=62548180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711458203.5A Active CN108184054B (en) 2017-12-28 2017-12-28 Preprocessing method and preprocessing device for images shot by intelligent terminal

Country Status (1)

Country Link
CN (1) CN108184054B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110312117B (en) * 2019-06-12 2021-06-18 北京达佳互联信息技术有限公司 Data refreshing method and device
CN112217990B (en) * 2020-09-27 2024-04-09 北京小米移动软件有限公司 Task scheduling method, task scheduling device and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104517309A (en) * 2013-10-08 2015-04-15 博雅网络游戏开发(深圳)有限公司 Method and device for processing animation in frame loop
CN105916052A (en) * 2015-12-15 2016-08-31 乐视致新电子科技(天津)有限公司 Video frame drawing method and device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN103617027B (en) * 2013-10-29 2015-07-29 合一网络技术(北京)有限公司 Based on image rendering engine construction method and the system of Android system
US20170255264A1 (en) * 2016-03-02 2017-09-07 International Business Machines Corporation Digital surface rendering
CN107203960B (en) * 2016-06-30 2021-03-09 北京新媒传信科技有限公司 Image rendering method and device
CN106230841B (en) * 2016-08-04 2020-04-07 深圳响巢看看信息技术有限公司 Terminal-based real-time video beautifying and streaming method in live webcasting
CN106791408A (en) * 2016-12-27 2017-05-31 努比亚技术有限公司 A kind of shooting preview device, terminal and method
CN106792034A (en) * 2017-02-10 2017-05-31 深圳创维-Rgb电子有限公司 Live method and mobile terminal is carried out based on mobile terminal
CN107463370B (en) * 2017-06-30 2021-08-27 百度在线网络技术(北京)有限公司 Cross-process rendering method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant