CN106156252B - information processing method and electronic equipment - Google Patents

information processing method and electronic equipment

Info

Publication number
CN106156252B
CN106156252B CN201510208597.3A
Authority
CN
China
Prior art keywords
image
acquired
information
preview
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510208597.3A
Other languages
Chinese (zh)
Other versions
CN106156252A (en)
Inventor
Feng Bin (冯斌)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510208597.3A priority Critical patent/CN106156252B/en
Publication of CN106156252A publication Critical patent/CN106156252A/en
Application granted granted Critical
Publication of CN106156252B publication Critical patent/CN106156252B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an information processing method and electronic equipment. The method is applied to electronic equipment and comprises: obtaining, through an image acquisition unit in the electronic equipment, a first preview image of a first region to be acquired; obtaining a first region in the first preview image; obtaining a target object in the first region; detecting and obtaining first category information to which the target object belongs; acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image; and saving the first category information in first image information of the first acquired image to obtain a first image file.

Description

information processing method and electronic equipment
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to an information processing method and an electronic device.
Background
At present, people increasingly take pictures with intelligent mobile devices, so photo-taking applications generate large numbers of pictures. If the generated pictures are not managed in a unified and timely manner, the resulting disorderly picture layout inevitably harms the user experience; managing the pictures is therefore very important.
In the prior art, four photo management methods are usually adopted. Method one: the generated photos are sorted and managed according to their shooting time. Method two: the geographical location information of the photos is used for classified management. Method three: the generated photos are classified and managed according to portrait information obtained when shooting, for example, portrait information in the photos obtained by a face recognition technology. Method four: the generated photos are classified and managed by combining the captured image content with image tags that the user adds manually.
In the process of developing the technical solutions of the embodiments of the present application, the inventor found that the above prior art has at least the following technical problems:
No matter which of these ways is used to manage the captured images, the prior art cannot determine a region of interest of the user from the preview image at the time of photographing and then manage the photos according to the content within that region; therefore, the prior art has the technical problem that photos cannot be managed according to the content the user is interested in.
Likewise, no matter which of these ways is adopted, the photos to be managed cannot be tagged automatically, so the prior art also has the technical problem that image tags cannot be edited automatically.
Disclosure of Invention
The embodiments of the invention provide an information processing method and electronic equipment, which solve the prior-art technical problem that photos cannot be managed according to the content users are interested in, and achieve the technical effect of managing photos according to that content.
In one aspect, an embodiment of the application provides an information processing method applied to electronic equipment, the method comprising:
obtaining, by an image acquisition unit in the electronic device, a first preview image of a first region to be acquired;
obtaining a first region in the first preview image;
obtaining a target object in the first region;
detecting and obtaining first category information to which the target object belongs;
and acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file.
Optionally, the obtaining a first region in the first preview image specifically includes:
detecting a touch operation performed on the first preview image;
and responding to the touch operation to obtain the first region.
Optionally, after obtaining the target object in the first region, the method further includes:
detecting and obtaining a first target box that characterizes the first region;
displaying the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
Optionally, the acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file is specifically:
acquiring, through the image acquisition unit, the first acquired image of the first region to be acquired corresponding to the first preview image;
and saving the first category information in first file attribute information of the first acquired image to obtain a first image file containing the first file attribute information.
Optionally, the acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file is specifically:
acquiring, through the image acquisition unit, the first acquired image of the first region to be acquired corresponding to the first preview image;
and saving the first category information in first image data of the first acquired image to obtain a first image file containing the first image data.
In another aspect, an embodiment of the present application further provides an electronic device, including:
an image acquisition device;
a processor connected with the image acquisition device;
wherein, when the image acquisition device obtains a first preview image of a first region to be acquired, the processor is configured to obtain a first region in the first preview image, obtain a target object in the first region, detect and obtain first category information to which the target object belongs, and, after the image acquisition device acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image information of the first acquired image to obtain a first image file.
Optionally, the processor is specifically configured to:
detect a touch operation performed on the first preview image;
and respond to the touch operation to obtain the first region.
Optionally, after obtaining the target object in the first region, the processor is further configured to:
detect and obtain a first target box that characterizes the first region;
display the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
Optionally, the processor is specifically configured to:
after the image acquisition device acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first file attribute information of the first acquired image to obtain a first image file containing the first file attribute information.
Optionally, the processor is specifically configured to:
after the image acquisition device acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image data of the first acquired image to obtain a first image file containing the first image data.
In another aspect, an embodiment of the present application further provides an electronic device, including:
an image acquisition unit, configured to obtain a first preview image of a first region to be acquired;
a first obtaining unit, configured to obtain a first region in the first preview image;
a second obtaining unit, configured to obtain a target object in the first region;
a third obtaining unit, configured to detect and obtain first category information to which the target object belongs;
and a processing unit, configured to, after the image acquisition unit acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image information of the first acquired image to obtain a first image file.
The one or more technical solutions in the embodiments of the present application have at least the following one or more technical effects:
In the technical solutions of the embodiments of the present application, when the user uses the image acquisition unit to obtain the first preview image of the first region to be acquired, the target object can be obtained from the first region in the first preview image, and the first category information to which the target object belongs can then be detected; the first category information is saved in the first image information of the first acquired image of the first region to be acquired, and a first image file is obtained. That is, the image can be managed according to the content of the user's region of interest carried in the first image information, so that the technical effect of managing photos according to the content the user is interested in is achieved.
In the technical solutions of the embodiments of the present application, the first category information can also be stored in the first image data of the first acquired image to obtain a first image file containing the first image data. That is, the image tag can be edited without manual editing, so the technical effect of automatically editing the image tag according to the result of recognizing the content the user is interested in is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments are briefly introduced below; obviously, the drawings in the following description show only some embodiments of the present invention.
FIG. 1 is a flowchart of an information processing method according to Embodiment One of the present application;
FIG. 2 is a flowchart of step S102 of the information processing method according to Embodiment One of the present application;
FIG. 3 is a flowchart of the steps following step S202 of the information processing method according to Embodiment One of the present application;
FIG. 4 is a flowchart of the steps following step S105 of the information processing method according to Embodiment One of the present application;
fig. 5 is a block diagram of an electronic device according to Embodiment Two of the present application;
fig. 6 is a block diagram of an electronic device according to Embodiment Three of the present application.
Detailed Description
The embodiments of the application provide an information processing method and electronic equipment, so as to solve the prior-art technical problem that photos cannot be managed according to the content users are interested in, thereby achieving the technical effect of managing photos according to that content.
In order to solve the above technical problem, the general idea of the embodiments of the present application is as follows:
obtaining, through an image acquisition unit in the electronic equipment, a first preview image of a first region to be acquired, and obtaining a first region in the first preview image;
obtaining a target object in the first region;
detecting and obtaining first category information to which the target object belongs;
and acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file.
In the above technical solution, when the user uses the image acquisition unit to obtain the first preview image of the first region to be acquired, the target object can be obtained from the first region in the first preview image, and the first category information to which the target object belongs can then be detected; the first category information is saved in the first image information of the first acquired image of the first region to be acquired, and a first image file is obtained. That is, the image can be managed according to the content of the user's region of interest carried in the first image information, so that the technical effect of managing photos according to the content the user is interested in is achieved.
In order to better understand the above technical solutions, the technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present invention are detailed descriptions of the technical solutions rather than limitations of them, and the technical features in the embodiments and examples may be combined with each other as long as they do not conflict.
Embodiment One
Referring to fig. 1, an information processing method provided in Embodiment One of the present application is applied to an electronic device, and the method includes:
S101, obtaining, through an image acquisition unit in the electronic device, a first preview image of a first region to be acquired;
S102, obtaining a first region in the first preview image;
S103, obtaining a target object in the first region;
S104, detecting and obtaining first category information to which the target object belongs;
S105, acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file.
In the embodiment of the present application, the electronic device having the image acquisition unit may specifically be a smart phone, a tablet computer, a notebook computer, a smart camera, or the like; these are not enumerated one by one here.
The specific implementation of steps S101 to S105 is as follows:
First, the user obtains a first preview image of a first region to be acquired through the image acquisition unit in the electronic device; for example, when the camera of the user's smartphone is in the pre-shooting mode of smart photography, a first preview image is obtained of the first region to be acquired that the camera can capture. After the first preview image is obtained, a first region is obtained in it; for example, after the user obtains a first preview image of "a bee collecting honey on a gardenia flower" and clicks the bee in the preview image on the touch display screen of the smartphone, the first region in which the bee is located in the first preview image is obtained, and this first region is the target region. In a specific implementation, the target region can, for example, be cut out quickly through the texture coordinates of the GPU. Next, the target object in the target region is obtained, namely the bee; the first category information to which the target object belongs is then detected and obtained, for example by running an image recognition algorithm on the target region, which identifies the target object as a "bee". Finally, in step S105, the image acquisition unit acquires a first acquired image of the first region to be acquired corresponding to the first preview image, and the first category information "bee" is saved in the first image information of the first acquired image to obtain a first image file.
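As one way to realize the recognition step S104, the sketch below crops the target region out of the preview frame and asks a pretrained ImageNet classifier for a label such as "bee". The choice of torchvision's ResNet-18 and the example box coordinates are assumptions for illustration only; the patent does not prescribe any particular recognition algorithm.

```python
import torch
from PIL import Image
from torchvision import models

# Load a pretrained classifier once (assumption: torchvision >= 0.13 weights API).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def classify_region(preview_path: str, box: tuple[int, int, int, int]) -> str:
    """Crop the first region (left, top, right, bottom) and return a category label."""
    region = Image.open(preview_path).convert("RGB").crop(box)
    with torch.no_grad():
        logits = model(preprocess(region).unsqueeze(0))
    class_id = int(logits.argmax(dim=1))
    return weights.meta["categories"][class_id]  # e.g. "bee"

# Hypothetical usage: print(classify_region("preview.jpg", (410, 290, 610, 490)))
```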
In this embodiment, step S105 may be: acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first file attribute information of the first acquired image, and obtaining a first image file containing the first file attribute information. Specifically, continuing with the example in which the first acquired image is the image of "a bee collecting honey on a gardenia flower" and the first category information is "bee", the category information "bee" is saved in the attribute information corresponding to that image, and an image file whose attribute information contains "bee" is obtained.
In the embodiment of the application, step S101 specifically comprises: determining that the image preview mode in which the image acquisition unit is started is an intelligent tag preview mode, and displaying, in the intelligent tag preview mode, the first preview image of the first region to be acquired obtained by the image acquisition unit. In a specific implementation, the image preview mode of the image acquisition unit may be a normal preview mode, the intelligent tag preview mode, or another preview mode, and these preview modes can be switched among one another.
In this embodiment, referring to fig. 2, the obtaining of the first region in the first preview image in step S102 specifically includes:
S201, detecting and obtaining a touch operation performed on the first preview image;
S202, responding to the touch operation to obtain the first region.
In a specific implementation, steps S201 to S202 proceed as follows, again taking "a bee collecting honey on a gardenia flower" as the example. When the user clicks the operation object "bee" on the first preview image to select the target object, the touch operation performed on the first preview image is obtained, and in response to that touch operation the target region corresponding to the target object "bee" is determined. Similarly, when the user clicks the operation object "gardenia" on the first preview image, the touch operation performed on the first preview image is obtained, and the target region corresponding to the target object "gardenia" is determined based on that touch operation.
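A minimal way to turn the reported touch coordinates into the first region is to clamp a fixed-size box around the touch point, as sketched below; the box size and the clamping policy are assumptions, since the embodiment only requires that the first region be obtained in response to the touch operation.

```python
def region_from_touch(touch_x: int, touch_y: int,
                      img_w: int, img_h: int,
                      box_w: int = 200, box_h: int = 200) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) of the first region, centered on the touch
    point and shifted as needed so it stays inside the preview image."""
    left = min(max(touch_x - box_w // 2, 0), img_w - box_w)
    top = min(max(touch_y - box_h // 2, 0), img_h - box_h)
    return left, top, left + box_w, top + box_h

# e.g. the user taps the bee at (510, 390) on a 1280x720 preview frame:
# region_from_touch(510, 390, 1280, 720) -> (410, 290, 610, 490)
```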
In the embodiment of the present application, referring to fig. 3, after step S202 the method performs the steps of:
S301, detecting and obtaining a first target box representing the first region;
S302, displaying the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
In a specific implementation, steps S301 to S302 proceed as follows, taking "a bee collecting honey on a gardenia flower" as the example. When the user clicks the "bee" in the preview image of the current region to be acquired, the first region in which the "bee" is located in the preview image is obtained, and the processor in the smartphone quickly cuts out that region to obtain the target region. To present the obtained target region to the user more clearly and to avoid misoperation, the region in which the target object "bee" is located may be displayed on the display screen of the smartphone as, for example, a rectangular red target box or a square green target box; the target box may also be obtained without being displayed. If the target box is to be displayed, a person skilled in the art may design its shape and color as required, which is not repeated here. The processor in the smartphone may further run an intelligent image recognition algorithm on the target region and label the recognition result of the target object, "bee", at a second position on the preview image, for example beside the target box, so that the first target box and the first category information are displayed at different positions of the first preview image.
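The display of step S302 can be reproduced with any 2D drawing API; the sketch below uses Pillow and hard-codes a red box with the label just above it, which is only one of the shapes, colors and second positions the embodiment allows.

```python
from PIL import Image, ImageDraw

def annotate_preview(preview: Image.Image,
                     box: tuple[int, int, int, int],
                     category: str) -> Image.Image:
    """Draw the first target box at the first position and the first category
    information at a second, different position (here: just above the box)."""
    frame = preview.copy()
    draw = ImageDraw.Draw(frame)
    draw.rectangle(box, outline="red", width=3)   # first target box
    text_xy = (box[0], max(box[1] - 18, 0))       # second position, above the box
    draw.text(text_xy, category, fill="red")      # first category information
    return frame

# annotate_preview(Image.open("preview.jpg"), (410, 290, 610, 490), "bee").show()
```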
In the embodiment of the present application, after step S302 a further step is performed: when the target object is located at a third position of the first preview image, it is detected that the first target box is located at a fourth position of the first preview image corresponding to the third position; and when the target object is located at a fifth position of the first preview image different from the third position, it is detected that the first target box is located at a sixth position of the first preview image corresponding to the fifth position. In other words, when the position at which the target object appears in the preview image of the region to be acquired changes (for example, the smartphone camera moves slightly, or the camera does not move but the target object changes position, or both the target object and the camera move), the target box and the recognition result corresponding to the target object change accordingly.
In the embodiment of the application, when the image acquisition unit responds to a photographing instruction, it acquires the preview image of the currently corresponding region to be acquired and obtains an acquired image corresponding to the current preview image, wherein the acquired image contains the target box and the recognition result corresponding to the target object.
In the embodiment of the present application, in order to facilitate management of the acquired images, step S105 is specifically: acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, and saving the first category information in first file attribute information of the first acquired image to obtain a first image file containing the first file attribute information.
In a specific implementation, continuing with the above example, when the first acquired image corresponding to "a bee collecting honey on a gardenia flower" is obtained through the camera of the smartphone, the recognition result of the target object, "bee", is stored in the attribute information of the image, for example in the detailed information describing the image, and an image file containing that attribute information is generated. Image files whose attribute information contains "bee" can then be placed in the same folder, thereby realizing management of the photos.
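One common way to put the first category information into the file attribute information of a captured JPEG is to write it into an EXIF field. The sketch below uses the third-party piexif library and the ImageDescription tag as an illustrative choice; the embodiment does not name a specific attribute field or library.

```python
import piexif

def save_category_in_exif(jpeg_path: str, category: str) -> None:
    """Store the first category information (e.g. "bee") in the file attribute info."""
    try:
        exif_dict = piexif.load(jpeg_path)
    except Exception:
        # Image without readable EXIF: start from an empty structure.
        exif_dict = {"0th": {}, "Exif": {}, "GPS": {}, "1st": {}, "thumbnail": None}
    exif_dict["0th"][piexif.ImageIFD.ImageDescription] = category.encode("utf-8")
    piexif.insert(piexif.dump(exif_dict), jpeg_path)  # rewrite the file in place

# save_category_in_exif("captured.jpg", "bee")
```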
In the embodiment of the application, the category information obtained for the target object can also be saved into the obtained first acquired image in the form of image content, so as to bring a better user experience. In this case step S105 is specifically: acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image data of the first acquired image, and obtaining a first image file containing the first image data. Continuing with the above example in which the currently acquired image is "a bee collecting honey on a gardenia flower", during acquisition of the preview image the processor in the smartphone can also detect and obtain the geographic position information and the time information of the photo currently being taken, and write the obtained category information together with the geographic position information, such as "a citizen park", and the photographing time information, such as "May 1, 2015", into the image data of the image. The resulting image file is convenient for the user, for example when the photo is printed and the paper photos need to be classified.
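Saving the category information in the image data itself can be as simple as rendering a small text stamp into the pixels, so that the label survives printing. The sketch below uses Pillow; the stamp format, its position and the example place name are assumptions made for illustration.

```python
from datetime import datetime
from PIL import Image, ImageDraw

def stamp_category_into_pixels(src_path: str, dst_path: str,
                               category: str, place: str) -> None:
    """Write the first category information, plus place and time, directly into the
    first image data (the pixels), rather than into metadata."""
    image = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    stamp = f"{category} | {place} | {datetime.now():%Y-%m-%d %H:%M}"
    draw.text((10, image.height - 24), stamp, fill="yellow")  # bottom-left corner
    image.save(dst_path, quality=95)

# stamp_category_into_pixels("captured.jpg", "captured_tagged.jpg", "bee", "citizen park")
```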
In addition, the technical solution provided by the present application can further implement retrieval of images containing the same category information, so that the user can quickly find the images that meet his or her needs. In the embodiment of the present application, referring to fig. 4, the following steps are performed after step S105:
S401, obtaining first retrieval information corresponding to the first category information entered in a search box;
S402, executing a retrieval operation, and obtaining, from M image files that at least contain the first image file, N image files containing the first category information, wherein the N image files at least contain the first image file, M is an integer greater than or equal to 1, and N is a positive integer less than or equal to M.
Continuing with the above example, when the user enters the attribute information "bee" in the search box of the gallery, all images whose attribute information contains "bee" are obtained and displayed on the display of the smartphone.
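Steps S401 to S402 amount to filtering the M stored image files down to the N files whose saved category information matches the query. The sketch below rereads the EXIF ImageDescription field written in the earlier example, which is an assumption kept for consistency rather than a requirement of the embodiment.

```python
from pathlib import Path
import piexif

def search_by_category(gallery_dir: str, query: str) -> list[Path]:
    """Return the N image files (out of the M files in the gallery) whose first
    category information contains the search text, e.g. "bee"."""
    matches = []
    for path in Path(gallery_dir).glob("*.jpg"):
        try:
            exif_dict = piexif.load(str(path))
            raw = exif_dict["0th"].get(piexif.ImageIFD.ImageDescription, b"")
        except Exception:
            continue  # skip unreadable files
        if query.lower() in raw.decode("utf-8", errors="ignore").lower():
            matches.append(path)
    return matches

# search_by_category("DCIM/Camera", "bee")
```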
Embodiment Two
Based on the same inventive concept as Embodiment One of the present application, Embodiment Two of the present application provides an electronic device; referring to fig. 5, the electronic device includes:
an image acquisition device 10;
a processor 20 connected with the image acquisition device 10;
wherein, when the image acquisition device 10 obtains a first preview image of a first region to be acquired, the processor 20 is configured to obtain a first region in the first preview image, obtain a target object in the first region, detect and obtain first category information to which the target object belongs, and, after the image acquisition device 10 acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image information of the first acquired image to obtain a first image file.
In the embodiment of the present application, the processor 20 is specifically configured to:
detect a touch operation performed on the first preview image;
and respond to the touch operation to obtain the first region.
In this embodiment of the application, after obtaining the target object in the first region, the processor 20 is further configured to:
detect and obtain a first target box that characterizes the first region;
display the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
In other words, in the embodiment of the application, after the target region is obtained, the processor 20 detects and obtains the first target box representing the first region and displays the first target box and the first category information at the first position and the second position of the first preview image, respectively.
In this embodiment, in order to implement real-time tracking detection of the target object, the processor 20 is further configured to detect, when the target object is located at a third position of the first preview image, that the first target box is located at a fourth position of the first preview image corresponding to the third position, and to detect, when the target object is located at a fifth position of the first preview image different from the third position, that the first target box is located at a sixth position of the first preview image corresponding to the fifth position.
In the embodiment of the present application, in order to facilitate management of the photos, in one aspect the processor 20 is specifically configured to, after the image acquisition device 10 acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first file attribute information of the first acquired image to obtain a first image file containing the first file attribute information; in another aspect the processor 20 is specifically configured to, after the image acquisition device 10 acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image data of the first acquired image to obtain a first image file containing the first image data.
In the embodiment of the application, in order to facilitate the user's search for images containing the same category information, the processor 20 is further configured to obtain first retrieval information corresponding to the first category information entered in a search box, and to perform a retrieval operation to obtain, from M image files that at least contain the first image file, N image files containing the first category information, wherein the N image files at least contain the first image file, M is an integer greater than or equal to 1, and N is a positive integer less than or equal to M.
Embodiment Three
Based on the same inventive concept as Embodiment One of the present application, referring to fig. 6, Embodiment Three of the present application provides an electronic device, including:
an image acquisition unit 30, configured to obtain a first preview image of a first region to be acquired;
a first obtaining unit 40, configured to obtain a first region in the first preview image;
a second obtaining unit 50, configured to obtain a target object in the first region;
a third obtaining unit 60, configured to detect and obtain first category information to which the target object belongs;
and a processing unit 70, configured to, after the image acquisition unit 30 acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image information of the first acquired image to obtain a first image file.
In this embodiment of the present application, the first obtaining unit 40 specifically includes:
a first obtaining module, configured to detect and obtain a touch operation performed on the first preview image;
and a second obtaining module, configured to respond to the touch operation to obtain the first region.
In an embodiment of the present application, the electronic device further includes:
a fourth obtaining unit, configured to detect and obtain a first target box that characterizes the first region;
a display unit, configured to display the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
In the embodiment of the present application, the processing unit 70 is specifically configured to acquire, through the image acquisition unit 30, a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first file attribute information of the first acquired image, and obtain a first image file containing the first file attribute information.
Alternatively, in the embodiment of the present application, the processing unit 70 is specifically configured to acquire, through the image acquisition unit 30, a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image data of the first acquired image, and obtain a first image file containing the first image data.
Through the one or more technical solutions in the embodiments of the present application, one or more of the following technical effects can be achieved:
In the technical solutions of the embodiments of the present application, when the user uses the image acquisition unit to obtain the first preview image of the first region to be acquired, the target object can be obtained from the first region in the first preview image, and the first category information to which the target object belongs can then be detected; the first category information is saved in the first image information of the first acquired image of the first region to be acquired, and a first image file is obtained. That is, the image can be managed according to the content of the user's region of interest carried in the first image information, so that the technical effect of managing photos according to the content the user is interested in is achieved.
In the technical solutions of the embodiments of the present application, the first category information can also be stored in the first image data of the first acquired image to obtain a first image file containing the first image data. That is, the image tag can be edited without manual editing, so the technical effect of automatically editing the image tag according to the result of recognizing the content the user is interested in is achieved.
Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It is to be understood that each flow and/or block in the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions which can be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flow diagram flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of the present application may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive. When the computer program instructions corresponding to the information processing method stored in the storage medium are read and executed by an electronic device, the following steps are included:
obtaining, by an image acquisition unit in the electronic device, a first preview image of a first region to be acquired;
obtaining a first region in the first preview image;
obtaining a target object in the first region;
detecting and obtaining first category information to which the target object belongs;
and acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file.
Optionally, the computer program instructions stored in the storage medium corresponding to the step of obtaining a first region in the first preview image, when executed, specifically include:
detecting a touch operation performed on the first preview image;
and responding to the touch operation to obtain the first region.
Optionally, when the computer program instructions stored in the storage medium corresponding to the step of obtaining the target object in the first region are executed, the method further includes:
detecting and obtaining a first target box that characterizes the first region;
displaying the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
Optionally, the computer program instructions stored in the storage medium corresponding to the step of acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file, when executed, are specifically:
acquiring, through the image acquisition unit, the first acquired image of the first region to be acquired corresponding to the first preview image;
and saving the first category information in first file attribute information of the first acquired image to obtain a first image file containing the first file attribute information.
Optionally, the computer program instructions stored in the storage medium corresponding to the step of acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file, when executed, are alternatively specifically:
acquiring, through the image acquisition unit, the first acquired image of the first region to be acquired corresponding to the first preview image;
and saving the first category information in first image data of the first acquired image to obtain a first image file containing the first image data.
Having described preferred embodiments of the invention, those skilled in the art, once they learn of the basic inventive concepts, may make further alterations and modifications to these embodiments.
It will be apparent to those skilled in the art that various changes and modifications may be made to the present invention without departing from its spirit and scope. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include them.

Claims (11)

1. An information processing method, applied to an electronic device, the method comprising:
obtaining, by an image acquisition unit in the electronic device, a first preview image of a first region to be acquired;
obtaining a first region in the first preview image;
obtaining a target object in the first region;
detecting and obtaining first category information to which the target object belongs;
and acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file.
2. The method of claim 1, wherein the obtaining a first region in the first preview image comprises:
detecting a touch operation performed on the first preview image;
and responding to the touch operation to obtain the first region.
3. The method of claim 2, wherein after obtaining the target object in the first region, the method further comprises:
detecting and obtaining a first target box that characterizes the first region;
displaying the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
4. The method according to any one of claims 1 to 3, wherein the acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file is specifically:
acquiring, through the image acquisition unit, the first acquired image of the first region to be acquired corresponding to the first preview image;
and saving the first category information in first file attribute information of the first acquired image to obtain a first image file containing the first file attribute information.
5. The method according to any one of claims 1 to 3, wherein the acquiring, through the image acquisition unit, a first acquired image of the first region to be acquired corresponding to the first preview image, saving the first category information in first image information of the first acquired image, and obtaining a first image file is specifically:
acquiring, through the image acquisition unit, the first acquired image of the first region to be acquired corresponding to the first preview image;
and saving the first category information in first image data of the first acquired image to obtain a first image file containing the first image data.
6. An electronic device, comprising:
an image acquisition device;
a processor connected with the image acquisition device;
wherein, when the image acquisition device obtains a first preview image of a first region to be acquired, the processor is configured to obtain a first region in the first preview image, obtain a target object in the first region, detect and obtain first category information to which the target object belongs, and, after the image acquisition device acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image information of the first acquired image to obtain a first image file.
7. The electronic device of claim 6, wherein the processor is specifically configured to:
detect a touch operation performed on the first preview image;
and respond to the touch operation to obtain the first region.
8. The electronic device of claim 6, wherein after obtaining the target object in the first region, the processor is further configured to:
detect and obtain a first target box that characterizes the first region;
display the first target box and the first category information on the first preview image;
wherein the first target box is displayed at a first position of the first preview image, and the first category information is displayed at a second position of the first preview image different from the first position.
9. The electronic device according to any one of claims 6 to 8, wherein the processor is specifically configured to:
after the image acquisition device acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first file attribute information of the first acquired image to obtain a first image file containing the first file attribute information.
10. The electronic device according to any one of claims 6 to 8, wherein the processor is specifically configured to:
after the image acquisition device acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image data of the first acquired image to obtain a first image file containing the first image data.
11. An electronic device, comprising:
an image acquisition unit, configured to obtain a first preview image of a first region to be acquired;
a first obtaining unit, configured to obtain a first region in the first preview image;
a second obtaining unit, configured to obtain a target object in the first region;
a third obtaining unit, configured to detect and obtain first category information to which the target object belongs;
and a processing unit, configured to, after the image acquisition unit acquires a first acquired image of the first region to be acquired corresponding to the first preview image, save the first category information in first image information of the first acquired image to obtain a first image file.
CN201510208597.3A 2015-04-28 2015-04-28 information processing method and electronic equipment Active CN106156252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510208597.3A CN106156252B (en) 2015-04-28 2015-04-28 information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510208597.3A CN106156252B (en) 2015-04-28 2015-04-28 information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN106156252A CN106156252A (en) 2016-11-23
CN106156252B true CN106156252B (en) 2020-01-31

Family

ID=57346751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510208597.3A Active CN106156252B (en) 2015-04-28 2015-04-28 information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN106156252B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530114A (en) * 2013-09-27 2014-01-22 贝壳网际(北京)安全技术有限公司 Picture managing method and device
CN104077312A (en) * 2013-03-28 2014-10-01 腾讯科技(深圳)有限公司 Picture classification method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200508858A (en) * 2003-08-20 2005-03-01 Primax Electronics Ltd File management method for a digital device
KR101058014B1 (en) * 2004-10-15 2011-08-19 삼성전자주식회사 Method of controlling digital photographing apparatus for assortment replay, and digital photographing apparatus adopting the method
US8687924B2 (en) * 2006-09-22 2014-04-01 Apple Inc. Managing digital images
US8254684B2 (en) * 2008-01-02 2012-08-28 Yahoo! Inc. Method and system for managing digital photos
CN103365869A (en) * 2012-03-29 2013-10-23 宏碁股份有限公司 Photo management method and electronic device
CN102789489B (en) * 2012-07-04 2014-08-27 杨震群 Image retrieval method and system based on hand-held terminal
CN103823858A (en) * 2014-02-21 2014-05-28 联想(北京)有限公司 Information processing method and information processing device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077312A (en) * 2013-03-28 2014-10-01 腾讯科技(深圳)有限公司 Picture classification method and device
CN103530114A (en) * 2013-09-27 2014-01-22 贝壳网际(北京)安全技术有限公司 Picture managing method and device

Also Published As

Publication number Publication date
CN106156252A (en) 2016-11-23

Similar Documents

Publication Publication Date Title
EP3125135B1 (en) Picture processing method and device
WO2017129018A1 (en) Picture processing method and apparatus, and smart terminal
US10514818B2 (en) System and method for grouping related photographs
CN105320695B (en) Picture processing method and device
WO2016101757A1 (en) Image processing method and device based on mobile device
CN103810471B (en) Identify the method and apparatus and its image pickup method of file and picture
CN105894016B (en) Image processing method and electronic device
WO2016145844A1 (en) Picture sorting method and corresponding picture storage and display device
US10331953B2 (en) Image processing apparatus
US20140055479A1 (en) Content display processing device, content display processing method, program and integrated circuit
CN104063444A (en) Method and device for generating thumbnail
CN105005599A (en) Photograph sharing method and mobile terminal
JP6230386B2 (en) Image processing apparatus, image processing method, and image processing program
WO2015196681A1 (en) Picture processing method and electronic device
CN112822394B (en) Display control method, display control device, electronic equipment and readable storage medium
CN112148192A (en) Image display method and device and electronic equipment
CN110313001A (en) Photo processing method, device and computer equipment
CN101465936A (en) Photographic arrangement and method for extracting and processing image thereof
CN106156252B (en) information processing method and electronic equipment
US9141850B2 (en) Electronic device and photo management method thereof
CN103517020A (en) Data processing method and device and electronic equipment
US20140108405A1 (en) User-specified image grouping systems and methods
JP2017184021A (en) Content providing device and content providing program
CN104954688A (en) Image processing method and image processing device
TWI621954B (en) Method and system of classifying image files

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant