CN116721377A - Scene acquisition method and device and electronic equipment - Google Patents

Scene acquisition method and device and electronic equipment

Info

Publication number
CN116721377A
CN116721377A · Application CN202210191293.0A
Authority
CN
China
Prior art keywords
scene
target
image
card
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210191293.0A
Other languages
Chinese (zh)
Inventor
李世焘
刘昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210191293.0A
Publication of CN116721377A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses a scene acquisition method and device and an electronic device, belonging to the technical field of electronic devices. The scene acquisition method comprises the following steps: acquiring scene features corresponding to a target scene, wherein the scene features comprise a subject feature corresponding to a target object in the target scene and a background feature corresponding to a background portion; acquiring a first graphic card associated with the subject feature and a second graphic card associated with the background feature; and combining the first graphic card and the second graphic card based on the positional relationship between the target object and the background portion to obtain a graphic-card combination picture corresponding to the target scene.

Description

Scene acquisition method and device and electronic equipment
Technical Field
The present application belongs to the technical field of electronic devices, and in particular relates to a scene acquisition method and device and an electronic device.
Background
At present, in the development and debugging stage of an electronic device, a large number of different real scenes need to be shot with the device, and the many captured images are used to verify whether the device's shooting effect is problematic.
In the related art, the different real scenes are often located in different places, so a tester consumes a great deal of manpower and time when shooting the many selected real scenes, and the device testing cost is high.
Disclosure of Invention
Embodiments of the present application aim to provide a scene acquisition method, a scene acquisition device and an electronic device, which can solve the problem in the related art that the development and debugging stage of an electronic device consumes a great deal of manpower and time, making device testing costly.
In a first aspect, an embodiment of the present application provides a scene acquisition method, the method comprising: acquiring scene features corresponding to a target scene, wherein the scene features comprise a subject feature corresponding to a target object in the target scene and a background feature corresponding to a background portion; acquiring a first graphic card associated with the subject feature and a second graphic card associated with the background feature; and combining the first graphic card and the second graphic card based on the positional relationship between the target object and the background portion to obtain a graphic-card combination picture corresponding to the target scene.
In a second aspect, an embodiment of the present application provides a scene acquisition apparatus, comprising: an acquisition module for acquiring scene features corresponding to a target scene, wherein the scene features comprise a subject feature corresponding to a target object in the target scene and a background feature corresponding to a background portion, the acquisition module being further configured to acquire a first graphic card associated with the subject feature and a second graphic card associated with the background feature; and a scene module for combining the first graphic card and the second graphic card based on the positional relationship between the target object and the background portion to obtain a graphic-card combination picture corresponding to the target scene.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the scene acquisition method as in the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the scene acquisition method as in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement the steps of the scene acquisition method as in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executed by at least one processor to implement the steps of the scene acquisition method as described in the first aspect.
In the embodiments of the present application, the subject feature corresponding to the target object in the target scene and the background feature corresponding to the background portion can be acquired, the first graphic card is obtained from the subject feature, and the second graphic card is obtained from the background feature. Since the first and second graphic cards correspond respectively to the target object and the background portion of the target scene, combining them based on the positional relationship between the target object and the background portion simulates the target scene, yielding a simulated scene of the target scene, namely a graphic-card combination picture. On this basis, a tester does not need to go to the real target scene to shoot; instead, the device test can be completed by shooting the combination picture of the first and second graphic cards, without live-scene shooting, which effectively saves time and manpower and reduces the device testing cost.
Drawings
Fig. 1 is a schematic flow chart of a scene acquisition method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an example of a card combination interface provided by an embodiment of the present application;
FIG. 3 is a second flowchart of a scene acquisition method according to an embodiment of the present application;
FIG. 4 is a third flow chart of a scene acquisition method according to an embodiment of the present application;
FIG. 5 is a fourth flowchart of a scene acquisition method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a scene acquisition device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. Evidently, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
As stated in the background, in the development and debugging stage of an electronic device, a large number of different real scenes need to be shot with the device, and the captured images are used to verify whether the device's shooting effect is problematic. The different real scenes are often located in different places, so a tester consumes a great deal of manpower and time when shooting the many selected real scenes, and the device testing cost is high.
To address these problems in the related art, embodiments of the present application provide a scene acquisition method that can acquire the subject feature corresponding to a target object in a target scene and the background feature corresponding to a background portion, obtain a first graphic card from the subject feature, and obtain a second graphic card from the background feature. Since the first and second graphic cards correspond respectively to the target object and the background portion of the target scene, combining them based on the positional relationship between the target object and the background portion simulates the target scene, yielding a simulated scene of the target scene, namely a graphic-card combination picture. On this basis, a tester does not need to go to the real target scene; the device test can be completed by shooting the combination picture of the first and second graphic cards, without live-scene shooting, effectively saving time and manpower, reducing the device testing cost, and solving the related-art problems of heavy manpower and time consumption and high testing cost in the development and debugging stage of an electronic device.
The scene acquisition method provided by the embodiment of the application is described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a scene acquisition method according to an embodiment of the present application. The execution subject of the scene acquisition method may be an electronic device; the present application does not limit the execution subject.
As shown in fig. 1, the scene acquisition method provided by the embodiment of the present application may include steps 110 to 130.
Step 110, obtaining scene characteristics corresponding to the target scene.
The scene features comprise main body features corresponding to target objects in the target scene and background features corresponding to background parts.
The target scene may be a scene that the user wants to simulate or construct, i.e. a scene to be simulated.
For example, when a tester wants to test the shooting effect of the electronic device under strong outdoor light, the strong-light outdoor scene may be taken as the target scene, and that scene can be simulated by acquiring its scene features.
The target object may be a photographing subject in a target scene, the number of target objects in the target scene may be at least one, and the background portion is a portion other than the photographing subject.
Illustratively, as shown in fig. 2, in the target scene 201, the target object is the subject 202, and the background portion is the portion 203 of 201 other than 202.
Step 120, a first graphic card associated with the subject feature and a second graphic card associated with the background feature are obtained.
Specifically, the electronic device may select, from a plurality of graphic cards, a card matching the subject feature to obtain the first graphic card associated with the target object, and select a card matching the background feature to obtain the second graphic card associated with the background portion. Where there are multiple target objects, the electronic device may acquire at least one first graphic card associated with each target object.
For example, if the plurality of graphic cards covers 18 gray levels, the subject feature is a gray value of 50%, and the background feature is a gray value of 10%, then the card with a 50% gray value may be selected from the 18 gray-level cards as the first graphic card, and the card with a 10% gray value as the second graphic card.
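The matching step in this example can be sketched as a nearest-value lookup. The following is an illustrative sketch only (the card IDs and evenly spaced gray values are assumptions, not from the patent):

```python
# Hypothetical card-matching step: given a set of gray-level cards and a
# measured feature value, pick the card whose gray value is closest.

def nearest_card(cards, feature_value):
    """Return the card whose gray value is closest to the feature value."""
    return min(cards, key=lambda card: abs(card["gray"] - feature_value))

# 18 gray-level cards with hypothetical, evenly spaced gray values (percent).
gray_cards = [{"id": i, "gray": i * 100 / 17} for i in range(18)]

first_card = nearest_card(gray_cards, 50)   # subject feature: ~50% gray
second_card = nearest_card(gray_cards, 10)  # background feature: ~10% gray
```

With 18 evenly spaced levels, each selected card is within half a step (about 3%) of the requested gray value.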
It should be noted that there may be at least one first graphic card and at least one second graphic card; the present application does not specifically limit their numbers.
In one example, as shown in fig. 2, there are four first graphic cards associated with the target object 202, in three colors; the four cards are combined into the graphic-card combination 205 for the target object 202.
In another example, there are two second graphic cards: a pure-white card and an 18% neutral-gray card. By varying the proportions of the two second graphic cards in the picture, they can correspond to different backgrounds.
And 130, combining the first graphic card and the second graphic card based on the position relation between the target object and the background part to obtain a graphic card combined picture corresponding to the target scene.
As shown in fig. 2, the four first graphic cards are combined to obtain the card combination corresponding to the target object 202, and the second graphic card 206 corresponds to the background portion 203. The combination picture 204 of the first and second graphic cards can serve as a simulated scene of the target scene 201, so that a scene image of the target scene 201 can be obtained by shooting the combination picture 204, without going to the actual scene 201 to shoot.
The scene acquisition method provided by the embodiments of the present application can acquire the subject feature corresponding to the target object in the target scene and the background feature corresponding to the background portion, obtain the first graphic card from the subject feature, and obtain the second graphic card from the background feature. Since the first and second graphic cards correspond respectively to the target object and the background portion, combining them based on the positional relationship between the target object and the background portion simulates the target scene, yielding a graphic-card combination picture. On this basis, a tester does not need to go to the real target scene; the device test can be completed by shooting the combination picture of the first and second graphic cards, without live-scene shooting, effectively saving time and manpower and reducing the device testing cost.
The specific implementation of the steps 110-130 is described in detail below.
Regarding step 110, the scene features corresponding to the target scene are acquired.
In some embodiments of the present application, before step 110 the method may further comprise: receiving preset dimension information input by a user; and determining the content included in the scene features according to the preset dimension information.
The target shooting strategy may include at least one of an automatic exposure strategy and an automatic white balance strategy corresponding to the electronic device.
Optionally, if the preset dimension information is automatic exposure (AE), the content included in the subject feature may be the average gray value of the target object, and the content included in the background feature may be the proportions of the bright and dark areas in the background portion; if the preset dimension information is automatic white balance (AWB), the subject feature may include saturation, and the background feature may include the proportion of the light source in the background portion.
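The dimension-to-feature correspondence above can be sketched as a simple lookup. This is a hypothetical illustration; the keys and description strings are paraphrased from the text and do not represent an actual device API:

```python
# Hypothetical mapping from preset dimension information to the content
# of the subject and background features, as described in the text.

DIMENSION_FEATURES = {
    "AE": {   # automatic exposure
        "subject": "average gray value of the target object",
        "background": "proportions of bright and dark areas in the background",
    },
    "AWB": {  # automatic white balance
        "subject": "saturation of the target object",
        "background": "proportion of the light source in the background",
    },
}

def feature_content(preset_dimension):
    """Return the subject/background feature content for a dimension."""
    return DIMENSION_FEATURES[preset_dimension]
```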
In the embodiments of the present application, the content included in the subject feature and the background feature is determined from the preset dimension information, so the user can customize the dimension of interest. After the preset dimension information is input, the electronic device can acquire a subject feature and a background feature matching it. The resulting graphic-card combination picture is therefore adapted to the preset dimension information, simulating as closely as possible a target scene conforming to the dimension the user cares about, which facilitates testing the shooting effect in that dimension.
In some embodiments of the present application, the scene features may include content including at least one of: gray value, brightness, dynamic range, reflectivity, contrast, color saturation, and type.
It should be noted that the more parameters the scene features include, the better the acquired first graphic card matches the target object and the better the second graphic card matches the background portion, and thus the closer the graphic-card combination picture of the two cards is to the target scene.
In one example, the plurality of graphic cards includes 24 colors. If the subject feature indicates a combination of highly saturated red and yellow, a red card and a yellow card may be selected from the 24 color cards as first graphic cards; if the background feature indicates a combination of low-saturation white and gray, a white card and a gray card may be selected from the 24 color cards as second graphic cards.
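The saturation-based split in this example can be sketched with standard HSV conversion. The RGB values assigned to the cards and the 0.5 threshold are illustrative assumptions, not values from the patent:

```python
import colorsys

# Hypothetical saturation-based card selection: compute each card's HSV
# saturation from an assumed RGB value and split the set into high- and
# low-saturation candidates.

def saturation(rgb):
    """HSV saturation of an 8-bit RGB triple, in [0, 1]."""
    r, g, b = (c / 255 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[1]

color_cards = {
    "red": (200, 30, 30),
    "yellow": (220, 200, 40),
    "white": (250, 250, 250),
    "gray": (128, 128, 128),
    # ...the remaining cards of a 24-color set would be listed here
}

high_sat = [n for n, rgb in color_cards.items() if saturation(rgb) >= 0.5]
low_sat = [n for n, rgb in color_cards.items() if saturation(rgb) < 0.5]
# high_sat -> candidate first graphic cards (subject)
# low_sat  -> candidate second graphic cards (background)
```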
In the embodiments of the present application, the content of the scene features may be any type of parameter, so that the scene features can meet the requirements of various target scenes as far as possible, improving how well the combined graphic-card picture fits the scene and enlarging the range of scenes the combination picture can simulate.
In some embodiments of the present application, fig. 3 is a flowchart of another scenario acquisition method provided in the embodiment of the present application, and step 110 may include step 310 and step 320 shown in fig. 3.
In step 310, a first image is acquired.
The shooting scene of the first image is the target scene, and the first image may include at least one piece.
In one embodiment, when testing the camera effect of an electronic device there are certain fixed test scenes, so the target scene may be any fixed test scene; once the target scene has been captured, the corresponding image can be saved. If the electronic device needs to be tested again later, the saved image corresponding to the target scene, namely the first image, can be acquired directly and the target scene restored from it.
For example, the first image may be submitted manually by the user.
In another embodiment, the first image may be a problem image captured by the electronic device, and the target scene is the shooting scene corresponding to the problem image; by inputting the problem image into the device, the problem that occurred when the image was shot can be reproduced. Optionally, the problem image may be input manually by a tester, or the electronic device may be connected to a customer-complaint system: when the system receives a complaint request, it sends the problem image in the request to the electronic device, which receives the image and simulates the scene in which it was shot.
Step 320, identifying the first image based on a preset identification algorithm, determining a target object and a background portion in the first image, and generating a main feature corresponding to the target object and a background feature corresponding to the background portion.
The preset recognition algorithm may be chosen according to the required recognition accuracy, which the present application does not limit.
Optionally, if the required accuracy is low, the preset recognition algorithm may be an edge segmentation algorithm; if the required accuracy is high, the preset recognition algorithm may be an AI recognition algorithm.
For example, the electronic device may identify an object near the center of the first image as the target object through an edge segmentation algorithm; alternatively, to simplify computation, it may divide the first image into 9 regions (similar to the composition grid lines a camera preview interface can display) and identify only the middle region, treating that region as the target object.
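The simplified 9-region approach above amounts to taking the centre cell of a 3x3 grid. A minimal sketch under that assumption (pixel dimensions are illustrative; a real implementation would then extract features from the pixels inside the box):

```python
# Divide the image into a 3x3 grid (like a camera's composition lines)
# and treat the centre cell as the target-object region; the remaining
# eight cells form the background portion.

def center_region(width, height):
    """Return (left, top, right, bottom) of the centre cell of a 3x3 grid."""
    return (width // 3, height // 3, 2 * width // 3, 2 * height // 3)

box = center_region(1920, 1080)
```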
In the embodiments of the present application, after the first image is acquired, it can be recognized based on a preset recognition algorithm to determine the target object and the background portion in the first image, and to generate the subject feature corresponding to the target object and the background feature corresponding to the background portion. On this basis, a first graphic card matching the target object and a second graphic card matching the background portion can be acquired from the subject and background features, and the picture in the first image, namely the target scene at the time the first image was shot, can be simulated with the two cards. Therefore, by shooting the graphic-card combination picture, a scene image corresponding to the target scene can be obtained, and a tester does not need to go to the real target scene, reducing the testing cost and shortening the testing time. When the first image is a problem image, the scene in which it was shot can be simulated flexibly through the combination picture, and the problem that occurred during shooting can be quickly reproduced by analyzing the scene image obtained by shooting the combination picture, improving problem-handling efficiency.
In other embodiments of the present application, the obtaining the scene feature corresponding to the target scene may specifically include: and receiving scene characteristics corresponding to the target scene input by the user.
Specifically, the user may directly input the scene features of the target scene to be simulated to the electronic device, so that the electronic device simulates the target scene based on the main features and the background features in the scene features, thereby obtaining the scene image corresponding to the target scene.
Illustratively, if the target scene is a "white plus" scene, the scene features may include a relatively high proportion of light colors in the picture; if the target scene is a "black minus" scene, the scene features may include a relatively high proportion of dark colors in the picture; or, if the target scene is a high-dynamic-range (HDR) scene, the scene features may include a wide dynamic range in the picture.
In the embodiments of the present application, by receiving the subject feature and background feature of the target scene to be simulated as input by the user, the first graphic card matching the subject feature and the second graphic card matching the background feature can be acquired, that is, cards matching the target object and the background portion. On this basis, by combining the first and second graphic cards, a simulated scene of the target scene, namely a graphic-card combination picture, can be constructed, and a scene image corresponding to the target scene obtained by shooting the combination picture. Where there are multiple target scenes, inputting the scene features of each target scene into the electronic device quickly yields the combination picture corresponding to each scene, so scene images for the multiple target scenes can be obtained quickly, shortening the debugging cycle of the electronic device and improving testing efficiency.
Regarding step 130, the first and second graphic cards are combined based on the positional relationship between the target object and the background portion to obtain the graphic-card combination picture corresponding to the target scene.
In some embodiments of the present application, the electronic device may be connected to a grabbing component, the background feature includes at least two feature parameter values, and the second graphic card associated with the background feature comprises at least two cards corresponding to different feature parameter values. Fig. 4 is a schematic flow chart of another scene acquisition method provided by an embodiment of the present application, and step 130 may include steps 410 and 420 shown in fig. 4.
In step 410, a proportion of each sub-region in the background portion to the background portion is determined.
The sub-regions of the background portion are divided based on the feature parameter values, and the proportion of a sub-region to the background portion may be the ratio of the sub-region's area to the area of the background portion.
Illustratively, the background portion P of the target scene consists of a bright region and a dark region, so the background feature may include two feature parameter values: average luminance values A1 and A2. The second graphic card associated with the background feature comprises a card a1 and a card a2, where card a1 corresponds to average luminance value A1 and card a2 corresponds to average luminance value A2. The electronic device may divide the background portion P into a sub-region P1 corresponding to A1 and a sub-region P2 corresponding to A2, and determine that the ratio of the area of P1 to P is x1 and the ratio of the area of P2 to P is x2.
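The proportions x1 and x2 in this example can be computed by classifying background pixels as bright or dark. A sketch under assumed values (the luminance threshold and the toy pixel data are illustrative, not from the patent):

```python
# Classify background pixels as bright (sub-region P1, card a1) or dark
# (sub-region P2, card a2) with a luminance threshold, then take each
# class's share of the background area.

def region_proportions(background_pixels, threshold=128):
    """Return (x1, x2): area shares of the bright and dark sub-regions."""
    bright = sum(1 for p in background_pixels if p >= threshold)
    x1 = bright / len(background_pixels)
    return x1, 1 - x1

pixels = [200] * 60 + [50] * 40   # toy background: 60 bright, 40 dark pixels
x1, x2 = region_proportions(pixels)
```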
And step 420, controlling the grabbing component to grab the first image card and the second image card, placing at least one of the first image card and the second image card according to the position relation, and placing at least two second image cards according to the proportion, so as to obtain the image card combined picture.
Specifically, the electronic device may place at least one of the first graphic card and the second graphic card: when the second graphic card is fixed, the first graphic card may be placed on the plane where the second graphic card is located; alternatively, when the first graphic card is fixed, the second graphic card may be placed on the plane where the first graphic card is located; or the first and second graphic cards may be placed on the same plane.
Continuing the above example, in which the background portion P consists of a bright region and a dark region with area ratios x1 and x2, the cards a1 and a2 can be placed with reference to x1 and x2 to obtain a background card-combination picture, so that the proportion of the placed card a1 in the whole background picture is close to x1 and the proportion of the placed card a2 is close to x2. Meanwhile, since the target object Q is located at the center of the target scene, the electronic device may control the grabbing component, for example a mechanical arm, to grab the physical first graphic card and place it at the center of the background card-combination picture, obtaining the final graphic-card combination picture.
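The placement step in this example can be sketched as a simple layout computation: split a background canvas between the two second cards according to x1 and x2, and centre the first card over it. All dimensions and the left/right split are illustrative assumptions:

```python
# Toy layout for the card-combination picture: card a1 takes share x1 of
# the canvas width, card a2 takes the rest, and the first graphic card is
# centred, mirroring the example where target object Q sits at the centre.

def layout(canvas_w, canvas_h, x1, card_w, card_h):
    """Return (left, top, right, bottom) boxes for a1, a2, and the first card."""
    split = round(canvas_w * x1)                 # boundary between a1 and a2
    a1_box = (0, 0, split, canvas_h)
    a2_box = (split, 0, canvas_w, canvas_h)
    first_box = ((canvas_w - card_w) // 2, (canvas_h - card_h) // 2,
                 (canvas_w + card_w) // 2, (canvas_h + card_h) // 2)
    return a1_box, a2_box, first_box

a1_box, a2_box, first_box = layout(1000, 600, 0.6, 200, 200)
```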
In the embodiments of the present application, the electronic device can determine the positional relationship between the target object and the background portion from the scene features corresponding to the target scene. Therefore, when the electronic device controls the grabbing component to grab the first and second graphic cards, it can place them according to that positional relationship, so that the resulting combination picture restores the target scene to the greatest extent, improving the similarity between the target scene and the combination picture. Meanwhile, when the background feature includes at least two feature parameter values and there are at least two second graphic cards, determining the proportions of the sub-regions corresponding to the different feature parameter values allows the second graphic cards to be placed according to those proportions, so that each second graphic card's share of the combination picture matches the corresponding sub-region's share of the background portion, improving the similarity between the background portion and the background in the combination picture.
In one embodiment, two sets of standby graphic cards may be placed separately: one set corresponding to the photographing subject and one set corresponding to the background portion, each set including a plurality of standby cards. The electronic device may control the grabbing component to select the first graphic card corresponding to the photographing subject from one set, and select the second graphic card corresponding to the background portion from the other.
In some embodiments of the present application, in order to test the shooting strategy of the device, the method may further include the following steps: receiving preset dimension information input by a user; determining a target shooting strategy according to the preset dimension information; and shooting the graphic card combined picture of the first graphic card and the second graphic card based on the target shooting strategy to obtain a scene image corresponding to the target scene.
Alternatively, the electronic device may photograph the graphic card combined picture directly, or photograph it by controlling another electronic device with a photographing function.
Optionally, if the preset dimension information indicates automatic exposure (Automatic Exposure, AE), the target shooting strategy may be determined to be an automatic exposure strategy; if the preset dimension information indicates automatic white balance (Automatic White Balance, AWB), the target shooting strategy may be determined to be an automatic white balance strategy.
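As a sketch, the mapping from preset dimension information to a target shooting strategy might look like the following; the strategy identifiers are placeholders, not names from the application.

```python
# Hypothetical lookup from preset dimension information to a shooting
# strategy; "AE" / "AWB" follow the abbreviations used above.
STRATEGIES = {
    "AE": "automatic exposure strategy",
    "AWB": "automatic white balance strategy",
}

def target_strategy(dimension_info: str) -> str:
    """Resolve user-entered dimension information to a strategy name."""
    key = dimension_info.strip().upper()
    if key not in STRATEGIES:
        raise ValueError(f"unsupported dimension information: {dimension_info}")
    return STRATEGIES[key]
```

A table lookup keeps the dimension-to-strategy mapping extensible if further test dimensions are added later.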
In the embodiment of the application, a user can specify the device strategy to be tested by inputting preset dimension information into the electronic device, and the electronic device can determine the target shooting strategy based on that information and shoot the graphic card combined picture using it. By shooting with the same target shooting strategy in different scenes, scene images with poor performance can be identified among the resulting images, and the problems can be traced back to the corresponding shooting scenes. Developers can then adjust the target shooting strategy according to these problems, improving the strategy and helping ensure that the electronic device achieves a good shooting effect in any shooting scene.
In some embodiments of the present application, in order to further improve the similarity between the graphic card combined picture and the target scene, fig. 5 is a schematic flow diagram of another scene acquisition method provided in the embodiment of the present application. Shooting the graphic card combined picture of the first graphic card and the second graphic card based on the target shooting strategy to obtain the scene image corresponding to the target scene may specifically include steps 510 to 530 shown in fig. 5.
Step 510, determining light source information corresponding to the target scene according to the scene characteristics.
Wherein the light source information may include at least one of: luminance value, color temperature.
Step 520, determining a target light supplementing parameter based on the light source information.
Optionally, the electronic device may directly determine the parameter corresponding to the light source information as the target light supplementing parameter, or may adjust the parameter in the light source information, for example by increasing it by a preset value, to obtain the target light supplementing parameter.
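The two options above (use the light source parameters directly, or offset them by a preset value) can be sketched as follows; the field names, units, and values are assumptions for illustration.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class LightSource:
    luminance: float     # relative brightness of the scene light (assumed unit)
    color_temp_k: float  # color temperature in Kelvin

def target_fill_light(info: LightSource, preset_offset: float = 0.0) -> LightSource:
    """Derive the target light supplementing parameter; an offset of 0
    corresponds to using the light source information directly."""
    return replace(info, luminance=info.luminance + preset_offset)

scene = LightSource(luminance=120.0, color_temp_k=5000.0)
direct = target_fill_light(scene)                       # use as-is
boosted = target_fill_light(scene, preset_offset=10.0)  # increased by a preset value
```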
Step 530, supplementing light to the graphic card combined picture based on the target light supplementing parameter, and shooting the graphic card combined picture after light supplementing based on the target shooting strategy, to obtain a scene image corresponding to the target scene.
Specifically, the electronic device may be connected to a light source system, so that the electronic device can control the light source system to supplement light to the graphic card combined picture based on the target light supplementing parameter. In this way, the lighting of the graphic card combined picture approximates the light source corresponding to the target scene, ensuring that the brightness and color temperature of the graphic card combined picture are similar to those of the target scene.
For example, the light source system may provide a plurality of light sources with different color temperatures and colors, such as D75, D65, D50, CWF, TL84, A and H light, and the brightness value of each light source may be adjusted to meet the light supplementing requirement of the graphic card combined picture.
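As an illustration of selecting among such light sources, the sketch below picks the illuminant whose color temperature is closest to a target value. The listed correlated color temperatures are approximate nominal reference figures for these standard illuminants, not values from the application.

```python
# Approximate nominal correlated color temperatures (Kelvin) of the
# illuminants named above, used here only for illustration.
ILLUMINANTS_K = {
    "D75": 7500, "D65": 6500, "D50": 5000,
    "CWF": 4150, "TL84": 4000, "A": 2856, "H": 2300,
}

def nearest_illuminant(target_k: float) -> str:
    """Pick the light source whose color temperature is closest to target_k."""
    return min(ILLUMINANTS_K, key=lambda name: abs(ILLUMINANTS_K[name] - target_k))
```

The selected source's brightness can then be adjusted to complete the light supplementing.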
In the embodiment of the application, since the brightness and color temperature of the ambient light differ between photographing scenes and affect the photographing effect, the electronic device can determine the target light supplementing parameter based on the light source information of the target scene and supplement light to the graphic card combined picture accordingly, so that the finally captured scene image is consistent with an image photographed directly in the target scene. This brings the lighting of the graphic card combined picture close to the light source corresponding to the target scene, ensures that its brightness and color temperature are similar to those of the target scene, improves the similarity between the graphic card combined picture and the target scene, and thereby improves the similarity between the captured scene image and an image photographed directly in the target scene.
In other embodiments of the present application, after shooting the graphic card combined picture of the first graphic card and the second graphic card based on the target shooting strategy to obtain the scene image corresponding to the target scene, the method may further include: acquiring target shooting parameters corresponding to the scene image, where the target shooting parameters include at least one of an automatic exposure parameter and an automatic white balance parameter; and adjusting the target shooting strategy based on the target shooting parameters and preset shooting parameters.
The preset shooting parameters may be shooting parameters under an ideal effect, and the target shooting parameters correspond to the preset shooting parameters.
In an exemplary embodiment, when the target shooting parameter is the automatic exposure parameter corresponding to the scene image, the preset shooting parameter may be the automatic exposure parameter under an ideal effect, and the target shooting strategy may be the automatic exposure strategy of the electronic device. By comparing the two automatic exposure parameters, the automatic exposure strategy of the electronic device can be optimized and adjusted to improve its shooting effect.
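A minimal sketch of this comparison-and-adjustment step, assuming the automatic exposure strategy exposes an exposure bias that can be nudged toward the ideal parameter (the function name, EV units, and step size are illustrative assumptions):

```python
def adjust_exposure_bias(measured_ev: float, ideal_ev: float,
                         current_bias: float, step: float = 0.1) -> float:
    """Move the AE strategy's exposure bias one step toward the ideal
    parameter; return the bias unchanged when the two already match."""
    error = ideal_ev - measured_ev
    if error > 0:
        return current_bias + step   # scene image underexposed: raise bias
    if error < 0:
        return current_bias - step   # scene image overexposed: lower bias
    return current_bias
```

Repeating this shoot-compare-adjust loop across test scenes converges the strategy toward the preset (ideal) shooting parameters.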
In the embodiment of the application, after the scene image is shot based on the target shooting strategy, the problems present in the scene image can be identified by comparing the target shooting parameters corresponding to the scene image with the preset shooting parameters. Solving these problems optimizes the target shooting strategy and improves the shooting effect of the electronic device.
It should be noted that the execution subject of the scene acquisition method provided in the embodiment of the present application may be a scene acquisition device, or a control module of the scene acquisition device for executing the scene acquisition method. In the embodiment of the application, a scene acquisition device executing the scene acquisition method is taken as an example. The scene acquisition device is described in detail below.
Fig. 6 is a schematic structural diagram of a scene acquisition device provided by the application.
As shown in fig. 6, an embodiment of the present application provides a scene acquisition apparatus 600, the scene acquisition apparatus 600 including: an acquisition module 610, a scene module 620.
The obtaining module 610 is configured to obtain a scene feature corresponding to a target scene, where the scene feature includes a main feature corresponding to a target object in the target scene and a background feature corresponding to a background portion; the obtaining module 610 is further configured to obtain a first graphic card associated with the main feature and a second graphic card associated with the background feature; the scene module 620 is configured to combine the first graphic card and the second graphic card based on the positional relationship between the target object and the background portion, so as to obtain a graphic card combined picture corresponding to the target scene.
The scene acquisition device provided by the embodiment of the application can acquire the main body feature corresponding to the target object and the background feature corresponding to the background portion in the target scene, acquire the first graphic card through the main body feature, and acquire the second graphic card through the background feature. Because the first graphic card and the second graphic card correspond to the target object and the background portion of the target scene respectively, combining them based on the positional relationship between the target object and the background portion simulates the target scene, yielding a simulated scene of the target scene, namely the graphic card combined picture. On this basis, a tester does not need to travel to the real target scene to shoot: the device test can be completed by shooting the graphic card combined picture of the first graphic card and the second graphic card, without any live-action shooting, which effectively saves time and labor and reduces the device test cost.
In some embodiments of the application, the apparatus further comprises: the receiving module is used for receiving preset dimension information input by a user before the scene characteristics corresponding to the target scene are acquired; and the determining module is used for determining the content included by the scene characteristics according to the preset dimension information.
In some embodiments of the application, the scene features include content comprising at least one of: gray value, brightness, dynamic range, reflectivity, contrast, color saturation.
In some embodiments of the present application, the acquisition module 610 includes: an acquisition unit configured to acquire a first image; the identification unit is used for identifying the first image based on a preset identification algorithm, determining a target object and a background part in the first image, and acquiring main body characteristics corresponding to the target object and background characteristics corresponding to the background part.
In some embodiments of the present application, the obtaining module 610 is specifically configured to: and receiving scene characteristics corresponding to the target scene input by the user.
In some embodiments of the present application, the device is connected to the grabbing component, the background feature includes at least two feature parameter values, the second graphic card associated with the background feature includes at least two graphic cards, and the feature parameter values corresponding to the at least two graphic cards are different; the scene module is specifically configured to: determine the proportion of each sub-region in the background portion to the background portion, where the sub-regions in the background portion are divided based on the feature parameter values; and control the grabbing component to grab the first graphic card and the second graphic card, place at least one of the first graphic card and the second graphic card according to the positional relationship, and place the at least two second graphic cards according to the proportions, to obtain the graphic card combined picture.
In some embodiments of the application, the apparatus further comprises: the receiving module is used for receiving preset dimension information input by a user; the determining module is used for determining a target shooting strategy according to preset dimension information; and the shooting module is used for shooting the picture card combined picture of the first picture card and the second picture card based on the target shooting strategy to obtain a scene image corresponding to the target scene.
In some embodiments of the present application, the shooting module includes: the determining unit, configured to determine light source information corresponding to the target scene according to the scene features, and to determine a target light supplementing parameter based on the light source information; and the shooting unit, configured to supplement light to the graphic card combined picture based on the target light supplementing parameter, and shoot the graphic card combined picture after light supplementing based on the target shooting strategy, to obtain a scene image corresponding to the target scene.
In some embodiments of the application, the apparatus further comprises: the obtaining module 610 is configured to obtain, after capturing a combined image of the first image card and the second image card based on a target capturing policy, a scene image corresponding to a target scene, where the target capturing parameter corresponds to the scene image, and the target capturing parameter includes at least one of an automatic exposure parameter and an automatic white balance parameter; the adjusting module is used for adjusting the target shooting strategy based on the target shooting parameters and the preset shooting parameters.
The scene acquisition device provided by the embodiment of the present application can implement each process implemented by the electronic device in the method embodiment of fig. 1 to 5, and in order to avoid repetition, a description is omitted here.
The scene acquisition device in the embodiment of the application can be an electronic device, or a component, an integrated circuit or a chip in an electronic device. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., but may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), automated teller machine or self-service machine, etc.; the embodiments of the present application are not specifically limited.
The scene acquisition device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 700, including a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and capable of running on the processor 701, where the program or the instruction implements each process of the above embodiment of the scene acquisition method when executed by the processor 701, and the process can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 810 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 810 is configured to obtain a scene feature corresponding to a target scene, where the scene feature includes a main feature corresponding to a target object in the target scene and a background feature corresponding to a background portion; the processor 810 is further configured to obtain a first graphic card associated with the subject feature and a second graphic card associated with the background feature; the processor 810 is configured to combine the first graphic card and the second graphic card based on a positional relationship between the target object and the background portion, so as to obtain a graphic card combined picture corresponding to the target scene.
In the embodiment of the application, the main body feature corresponding to the target object and the background feature corresponding to the background portion in the target scene can be obtained, the first graphic card is obtained through the main body feature, and the second graphic card is obtained through the background feature. Because the first graphic card and the second graphic card correspond to the target object and the background portion of the target scene respectively, combining them based on the positional relationship between the target object and the background portion simulates the target scene, yielding a simulated scene of the target scene, namely the graphic card combined picture. On this basis, a tester does not need to travel to the real target scene to shoot: the device test can be completed by shooting the graphic card combined picture of the first graphic card and the second graphic card, without any live-action shooting, which effectively saves time and labor and reduces the device test cost.
In some embodiments of the present application, the user input unit 807 is configured to receive preset dimension information input by a user before acquiring a scene feature corresponding to a target scene; the processor 810 is configured to determine content included in the scene feature according to the preset dimension information.
In some embodiments of the application, the scene features include content comprising at least one of: gray value, brightness, dynamic range, reflectivity, contrast, color saturation.
In some embodiments of the application, the processor 810 is specifically configured to: acquiring a first image; and identifying the first image based on a preset identification algorithm, determining a target object and a background part in the first image, and acquiring main body features corresponding to the target object and background features corresponding to the background part.
In some embodiments of the application, the processor 810 is specifically configured to: and receiving scene characteristics corresponding to the target scene input by the user.
In some embodiments of the present application, the electronic device is connected to the grabbing component, the background feature includes at least two feature parameter values, the second graphic card associated with the background feature includes at least two graphic cards, and the feature parameter values corresponding to the at least two graphic cards are different; the processor 810 is specifically configured to: determine the proportion of each sub-region in the background portion to the background portion, where the sub-regions in the background portion are divided based on the feature parameter values; and control the grabbing component to grab the first graphic card and the second graphic card, place at least one of the first graphic card and the second graphic card according to the positional relationship, and place the at least two second graphic cards according to the proportions, to obtain the graphic card combined picture.
In some embodiments of the present application, a user input unit 807 for receiving preset dimension information input by a user; a processor 810 for determining a target shooting strategy according to preset dimension information; and the processor 810 is configured to shoot a combined picture of the first graphic card and the second graphic card based on the target shooting strategy, so as to obtain a scene image corresponding to the target scene.
In some embodiments of the application, the processor 810 is specifically configured to: determining light source information corresponding to a target scene according to scene characteristics; determining a target light supplementing parameter based on the light source information; and carrying out light filling on the image card combined picture based on the target light filling parameter, and shooting the light filled image card combined picture based on the target shooting strategy to obtain a scene image corresponding to the target scene.
In some embodiments of the present application, the processor 810 is configured to obtain, after the graphic card combined picture of the first graphic card and the second graphic card is shot based on the target shooting strategy to obtain the scene image corresponding to the target scene, target shooting parameters corresponding to the scene image, where the target shooting parameters include at least one of an automatic exposure parameter and an automatic white balance parameter; and the processor 810 is configured to adjust the target shooting strategy based on the target shooting parameters and preset shooting parameters.
It should be appreciated that in embodiments of the present application, the input unit 804 may include a graphics processor (Graphics Processing Unit, GPU) 8041 and a microphone 8042, the graphics processor 8041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes at least one of a touch panel 8071 and other input devices 8072. Touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two parts, a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 809 can be used to store software programs as well as various data. The memory 809 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 809 may include volatile memory or nonvolatile memory, or the memory 809 may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable EPROM (EEPROM), or a flash Memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (ddr SDRAM), enhanced SDRAM (Enhanced SDRAM), synchronous DRAM (SLDRAM), and Direct RAM (DRRAM). Memory 809 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 810 may include one or more processing units; optionally, the processor 810 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 810.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements each process of the above embodiment of the scene acquisition method, and can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The processor is a processor in the electronic device in the above embodiment. Readable storage media, including computer readable storage media, examples of which include non-transitory computer readable storage media such as computer Read Only Memory (ROM), random Access Memory (RAM), magnetic or optical disks, and the like.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the embodiment of the scene acquisition method can be realized, the same technical effects can be achieved, and the repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described scene acquisition method embodiment, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (11)

1. A scene acquisition method, comprising:
acquiring scene characteristics corresponding to a target scene, wherein the scene characteristics comprise main body characteristics corresponding to a target object in the target scene and background characteristics corresponding to a background part;
acquiring a first image card associated with the main body feature and a second image card associated with the background feature;
and combining the first image card and the second image card based on the position relation between the target object and the background part to obtain an image card combined picture corresponding to the target scene.
2. The method of claim 1, wherein prior to the acquiring the scene feature corresponding to the target scene, the method further comprises:
receiving preset dimension information input by a user;
and determining the content included in the scene characteristics according to the preset dimension information.
3. The method of claim 1, wherein the scene feature comprises content comprising at least one of: gray value, brightness, dynamic range, reflectivity, contrast, color saturation.
4. The method according to claim 1, wherein the obtaining the scene feature corresponding to the target scene includes:
acquiring a first image;
and identifying the first image based on a preset identification algorithm, determining a target object and a background part in the first image, and acquiring main body characteristics corresponding to the target object and background characteristics corresponding to the background part.
5. The method according to claim 1, wherein the obtaining the scene feature corresponding to the target scene includes:
and receiving scene characteristics corresponding to the target scene input by the user.
6. The method according to claim 1, wherein the method is applied to an electronic device, the electronic device is connected with a grabbing component, the background feature includes at least two feature parameter values, the second graphic card associated with the background feature includes at least two graphic cards, feature parameter values corresponding to the at least two graphic cards are different, and the combining the first graphic card and the second graphic card based on the positional relationship between the target object and the background portion to obtain the graphic card combined picture corresponding to the target scene includes:
determining the proportion of each sub-region in the background part to the background part, wherein the sub-regions in the background part are divided based on the feature parameter values;
and controlling the grabbing component to grab the first image card and the second image cards, placing at least one of the first image card and the second image cards according to the position relation, and placing the at least two second image cards according to the proportions, to obtain the image card combined picture.
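The proportion step of claim 6 can be sketched by treating the background part as a label map, one feature-parameter index per pixel, and measuring each sub-region's share of the whole. The label-map representation and function name are assumptions for illustration.

```python
import numpy as np

def subregion_proportions(background_labels):
    """For a background part divided into sub-regions by feature
    parameter value (claim 6), return each sub-region's share of the
    background. Input is a label map: one integer feature-parameter
    index per pixel."""
    labels = np.asarray(background_labels)
    total = labels.size
    values, counts = np.unique(labels, return_counts=True)
    return {int(v): count / total for v, count in zip(values, counts)}

# A background where feature-parameter value 0 covers three quarters
# of the area and value 1 covers the remaining quarter.
bg_labels = np.zeros((4, 4), dtype=int)
bg_labels[:, 3] = 1
props = subregion_proportions(bg_labels)
```

These proportions would then drive how much area each of the second image cards occupies when the grabbing component places them.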
7. The method according to claim 1, wherein the method further comprises:
receiving preset dimension information input by a user;
determining a target shooting strategy according to the preset dimension information;
and shooting the image card combined picture of the first image card and the second image card based on the target shooting strategy, to obtain a scene image corresponding to the target scene.
8. The method according to claim 7, wherein the shooting the image card combined picture of the first image card and the second image card based on the target shooting strategy to obtain the scene image corresponding to the target scene comprises:
determining light source information corresponding to the target scene according to the scene characteristics;
determining a target light supplementing parameter based on the light source information;
and supplementing light on the image card combined picture based on the target light supplementing parameter, and shooting the light-supplemented image card combined picture based on the target shooting strategy, to obtain the scene image corresponding to the target scene.
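Claim 8 derives a light supplementing parameter from the scene's light source information. One way this could work, sketched under stated assumptions (a linear mapping from brightness deficit to a 0-100 fill-light level, neither of which is specified in the patent):

```python
def fill_light_parameter(scene_brightness, target_brightness=0.5, max_level=100):
    """Illustrative mapping from light source information (here reduced
    to a normalized scene brightness in [0, 1]) to a fill-light level:
    the dimmer the scene relative to the target, the stronger the fill."""
    deficit = max(0.0, target_brightness - scene_brightness)
    # Scale the brightness deficit linearly into the fill-light range.
    return min(max_level, round(deficit / target_brightness * max_level))

level_dark = fill_light_parameter(0.1)  # dim scene -> strong fill light
level_ok = fill_light_parameter(0.5)    # already at target -> no fill needed
```

Real light source information would also carry color temperature and direction, which a fuller model would feed into the supplementing parameter.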
9. The method according to claim 7, wherein after the shooting the image card combined picture of the first image card and the second image card based on the target shooting strategy to obtain the scene image corresponding to the target scene, the method further comprises:
acquiring target shooting parameters corresponding to the scene image, wherein the target shooting parameters comprise at least one of an automatic exposure parameter and an automatic white balance parameter;
and adjusting the target shooting strategy based on the target shooting parameters and preset shooting parameters.
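Claim 9 closes a feedback loop: parameters measured from the captured scene image are compared against presets, and the shooting strategy is adjusted. A minimal sketch of that comparison step, where the dictionary keys, the tolerance, and the additive correction are all illustrative assumptions:

```python
def adjust_strategy(strategy, measured, preset, tolerance=0.05):
    """Compare measured auto-exposure / auto-white-balance parameters
    against preset values (claim 9) and nudge the shooting strategy
    whenever a parameter drifts outside the tolerance."""
    adjusted = dict(strategy)
    for key in ("auto_exposure", "auto_white_balance"):
        if key in measured and key in preset:
            error = preset[key] - measured[key]
            if abs(error) > tolerance:
                # Shift the strategy's setting toward the preset value.
                adjusted[key] = adjusted.get(key, 0.0) + error
    return adjusted

strategy = {"auto_exposure": 0.50, "auto_white_balance": 5000}
measured = {"auto_exposure": 0.40, "auto_white_balance": 5000}
preset   = {"auto_exposure": 0.50, "auto_white_balance": 5000}
new_strategy = adjust_strategy(strategy, measured, preset)
```

Here only the exposure setting is corrected, since the measured white balance already matches its preset within tolerance.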
10. A scene acquisition device, comprising:
an acquisition module, configured to acquire a scene feature corresponding to a target scene, wherein the scene feature comprises a main body feature corresponding to a target object in the target scene and a background feature corresponding to a background part;
the acquisition module being further configured to acquire a first image card associated with the main body feature and a second image card associated with the background feature;
and a scene module, configured to combine the first image card and the second image card based on the position relation between the target object and the background part, to obtain an image card combined picture corresponding to the target scene.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the scene acquisition method according to any one of claims 1 to 9.
CN202210191293.0A 2022-02-28 2022-02-28 Scene acquisition method and device and electronic equipment Pending CN116721377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210191293.0A CN116721377A (en) 2022-02-28 2022-02-28 Scene acquisition method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116721377A true CN116721377A (en) 2023-09-08

Family

ID=87873917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210191293.0A Pending CN116721377A (en) 2022-02-28 2022-02-28 Scene acquisition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116721377A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination