CN112087577A - Teaching case generation method and device, computer equipment and storage medium - Google Patents

Teaching case generation method and device, computer equipment and storage medium

Info

Publication number
CN112087577A
CN112087577A (application CN202010963137.2A)
Authority
CN
China
Prior art keywords
scene
case
panoramic
data
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010963137.2A
Other languages
Chinese (zh)
Inventor
尹元昌
黄凯文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mobile Internet Research Institute Co ltd
Original Assignee
Shenzhen Mobile Internet Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mobile Internet Research Institute Co ltd filed Critical Shenzhen Mobile Internet Research Institute Co ltd
Priority to CN202010963137.2A priority Critical patent/CN112087577A/en
Publication of CN112087577A publication Critical patent/CN112087577A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects, for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of this application belong to the technical field of image processing and relate to a teaching case generation method applied to a crime scene, along with a corresponding device, computer equipment, and storage medium. The method comprises the following steps: performing a panoramic capture operation on a crime scene using spherical camera equipment to obtain multiple groups of scene image data; constructing scene panoramic image information corresponding to the crime scene based on the multiple groups of scene image data; receiving target object data sent by a user terminal, the target object data carrying at least spatial position data; embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image; and outputting the case panoramic image to the user terminal. By capturing image data of the actual crime scene and constructing a panoramic image consistent with that scene, the method avoids the unrealistic simulation produced by virtual reconstruction and preserves the objective authenticity of the crime-scene panoramic image.

Description

Teaching case generation method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technology, and in particular to a teaching case generation method applied to a crime scene, a corresponding device, computer equipment, and a storage medium.
Background
Criminal case investigation is the core teaching content of the public security major. Its vast, comprehensive body of knowledge is split across more than ten specialized courses, such as criminal investigation, crime scene examination, forensic medicine, and forensic science and technology. The content is therefore fragmented and scattered, the curriculum lacks a stage for integrating knowledge and training comprehensive ability, and students' overall case-investigation skills cannot be developed. In terms of its talent-cultivation goals and professional character, the public security technology major is an applied discipline, strongly practical and oriented toward public security practice. Practice or case teaching close to a real scene is therefore needed to break through the bottleneck in professional experimental teaching and practical public security training.
At present, most public security colleges and universities conduct simulated scene exercises through virtual simulation experiments, using informatized or simulated teaching tools and methods that create, reshape (optimize), or restore experimental teaching scenes with virtual reality or physical simulation technology.
However, a scene built with the traditional method is virtual: it is not something the user directly sees, but an approximation of the real world simulated with computer three-dimensional technology. Such a scene is therefore only virtual reality, not a true panorama of the real scene.
Disclosure of Invention
The embodiments of this application aim to provide a teaching case generation method applied to a crime scene, together with a device, computer equipment, and a storage medium, to solve the problem that scenes simulated through virtual simulation experiments are not directly seen by the user and are not true panoramas.
To solve the above technical problem, an embodiment of the present application provides a teaching case generation method applied to a crime scene, adopting the following technical scheme:
performing a panoramic capture operation on a crime scene using spherical camera equipment to obtain multiple groups of scene image data;
constructing scene panoramic image information corresponding to the crime scene based on the multiple groups of scene image data;
receiving target object data sent by a user terminal, the target object data carrying at least spatial position data;
embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image;
and outputting the case panoramic image to the user terminal.
To solve the above technical problem, an embodiment of the present application further provides a teaching case generation device applied to a crime scene, adopting the following technical scheme:
a panoramic capture module, configured to perform a panoramic capture operation on a crime scene using spherical camera equipment to obtain multiple groups of scene image data;
a panorama construction module, configured to construct scene panoramic image information corresponding to the crime scene based on the multiple groups of scene image data;
a target object receiving module, configured to receive target object data sent by a user terminal, the target object data carrying at least spatial position data;
a case image acquisition module, configured to embed the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image;
and a case image output module, configured to output the case panoramic image to the user terminal.
To solve the above technical problem, an embodiment of the present application further provides computer equipment, adopting the following technical scheme:
comprising a memory and a processor;
wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the teaching case generation method applied to a crime scene described above.
To solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, adopting the following technical scheme:
the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the teaching case generation method applied to a crime scene described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application provides a teaching case generation method applied to a case issuing field, which comprises the following steps: panoramic acquisition operation is carried out on a case scene based on spherical camera equipment, and a plurality of groups of scene image data are obtained; constructing site panoramic image information corresponding to the case site based on the plurality of groups of site image data; receiving target object data sent by a user terminal, wherein the target object data at least carries spatial position data; embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image; outputting the panoramic image data to the user terminal. By collecting image data of a case scene and constructing a panoramic image consistent with the scene, the problem of unreal simulation of a real scene is effectively avoided, meanwhile, a teacher can design a corresponding panoramic case script according to the content of a professional course lecture, and by accurately measuring the length and the area of a key position of the real panoramic case scene, a target trace and a certificate three-dimensional space point in a panorama are accurately positioned and used for marking depth information of a suspicious point or a target object, such as a footprint, fingerprint, DNA data and the like, so that the real objectivity of the panoramic image of the case scene is ensured.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of an implementation of the teaching case generation method applied to a crime scene according to the first embodiment of the present application;
FIG. 2 is a flowchart of an implementation of step S102 in FIG. 1;
Fig. 3 is a schematic structural diagram of the teaching case generation device applied to a crime scene according to the second embodiment of the present application;
FIG. 4 is a schematic block diagram of panorama construction module 120 of FIG. 3;
FIG. 5 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
Example one
Referring to Fig. 1, a flowchart of the teaching case generation method applied to a crime scene according to the first embodiment of the present application is shown; for convenience of description, only the parts relevant to the present application are shown.
In step S110, a panoramic capture operation is performed on a crime scene using spherical camera equipment, and multiple groups of scene image data are acquired.
In the embodiments of the present application, the spherical camera equipment is an imaging device capable of capturing images omnidirectionally, over a full 360 degrees.
In the embodiments of the present application, the panoramic capture operation is implemented by spherical camera devices placed in different spaces each capturing the scene image data of its space at the same moment.
In the embodiments of the present application, the multiple groups of scene image data are the scene image data collected by the spherical camera devices in the different spaces and scenes, grouped and distinguished by scene.
In the embodiments of the present application, all the optical lens groups of the spherical camera devices in the different spaces share one shutter-control sensor, so they shoot and collect light signals at the same moment. The collected image data therefore share a consistent spatial and temporal reference, avoiding the displaced shooting, delayed shooting and rendering, and limited-angle imaging that occur in existing approaches.
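The shared-shutter idea can be sketched in software. This is a hypothetical sketch, not the patent's hardware design: the real device synchronizes all lens groups with one shutter-control sensor, whereas here a `threading.Barrier` releases all capture threads at the same moment; the `cameras` objects and their `.capture()` method are illustrative assumptions.

```python
import threading

def synchronized_capture(cameras):
    """Trigger every lens group at (approximately) the same instant.

    `cameras` is any list of objects with a `.capture()` method; the
    barrier is a software stand-in for the shared shutter-control sensor.
    """
    barrier = threading.Barrier(len(cameras))
    frames = [None] * len(cameras)

    def worker(i, cam):
        barrier.wait()            # all threads are released together
        frames[i] = cam.capture()

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(cameras)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return frames
```

In a real device the synchronization happens at the sensor level; a software barrier only approximates simultaneity to within thread-scheduling jitter.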
In step S120, scene panoramic image information corresponding to the crime scene is constructed based on the multiple groups of scene image data.
In the embodiments of the present application, a crime scene may contain several associated scenes, for example an indoor scene and an outdoor scene.
In the embodiments of the present application, the scene image data collected by the spherical camera devices are separated by scene, making it convenient to stitch the image data of one scene into panoramic image information consistent with that scene.
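The per-scene assembly can be illustrated with a simplified sketch. This is not the patent's stitching algorithm: it assumes each lens group contributes a pre-rectified strip with a known starting yaw, and resolves overlaps by last-write-wins rather than seam blending.

```python
import numpy as np

def assemble_equirectangular(strips, pano_width):
    """Place per-lens image strips onto one equirectangular canvas by yaw.

    `strips` is a list of (yaw_start_deg, image) pairs; each image is an
    H x W x 3 uint8 array whose W columns span 360 * W / pano_width
    degrees of yaw.  Columns wrap around at 360 degrees.
    """
    height = strips[0][1].shape[0]
    pano = np.zeros((height, pano_width, 3), dtype=np.uint8)
    for yaw_start_deg, img in strips:
        col0 = int(round(yaw_start_deg / 360.0 * pano_width)) % pano_width
        cols = np.arange(col0, col0 + img.shape[1]) % pano_width
        pano[:, cols] = img                 # last-write-wins; no blending
    return pano
```

A production system would blend seams and correct lens distortion before placement; this sketch only shows the yaw-to-column bookkeeping.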
In step S130, target object data sent by the user terminal is received, where the target object data at least carries spatial position data.
In the embodiment of the present application, the user terminal may be a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, etc., and a fixed terminal such as a digital TV, a desktop computer, etc., it should be understood that the examples of the user terminal herein are only for convenience of understanding and are not intended to limit the present invention.
In the embodiments of the present application, the target object data is mainly used to mark suspicious points or the depth information of target objects, such as footprints, fingerprints, and DNA data, enabling the knowledge of related professional courses to be taught on a panoramic teaching carrier based on the real crime scene.
In practical application, an instructor can design a panoramic case script to match the content of a professional course and, by accurately measuring the lengths and areas of key positions in the real panoramic crime scene, precisely locate the three-dimensional spatial points of target traces and evidence in the panorama.
In step S140, the target object data is embedded into the panoramic image information based on the spatial position data, obtaining a case panoramic image.
In the embodiments of the present application, the target object data carries spatial position data and is embedded into the corresponding scene image information according to that data. The knowledge content the instructor uses for teaching is thus attached to the panoramic image, so students can study against a realistic panorama, effectively improving their practical and analytical abilities.
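One simple way to anchor target object data in the panorama is to map its spatial direction (yaw, pitch) to an equirectangular pixel and store the annotation there. The coordinate convention and the annotation fields (`kind`, `depth_mm`) are illustrative assumptions; the patent does not fix a representation.

```python
def spherical_to_pixel(yaw_deg, pitch_deg, pano_width, pano_height):
    """Map a view direction to equirectangular pixel coordinates.

    yaw in [0, 360) maps to x; pitch in [-90, +90] maps to y, with
    +90 degrees (straight up) at row 0.
    """
    x = int((yaw_deg % 360.0) / 360.0 * pano_width) % pano_width
    y = int((90.0 - pitch_deg) / 180.0 * pano_height)
    return x, min(max(y, 0), pano_height - 1)

def embed_target(annotations, target):
    """Attach one target object (e.g. a footprint with depth info) to the
    panorama's annotation layer, keyed by its pixel position."""
    x, y = spherical_to_pixel(target["yaw_deg"], target["pitch_deg"],
                              target["pano_width"], target["pano_height"])
    annotations[(x, y)] = {"kind": target["kind"],
                           "depth_mm": target.get("depth_mm")}
    return annotations
```

Keeping annotations in a separate layer, rather than burning them into pixels, lets the terminal toggle teaching marks on and off over the unmodified panorama.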
In step S150, the case panoramic image is output to the user terminal.
In an embodiment of the present application, a teaching case generation method applied to a crime scene is provided, comprising: performing a panoramic capture operation on a crime scene using spherical camera equipment to obtain multiple groups of scene image data; constructing scene panoramic image information corresponding to the crime scene based on the multiple groups of scene image data; receiving target object data sent by a user terminal, the target object data carrying at least spatial position data; embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image; and outputting the case panoramic image to the user terminal. By capturing image data of the actual crime scene and constructing a panoramic image consistent with that scene, the method avoids unrealistic simulation of the real scene. At the same time, an instructor can design a panoramic case script to match the content of a professional course and, by accurately measuring the lengths and areas of key positions in the real panoramic crime scene, precisely locate the three-dimensional spatial points of target traces and evidence in the panorama, which are used to mark suspicious points or the depth information of target objects such as footprints, fingerprints, and DNA data, thereby ensuring the objective authenticity of the crime-scene panoramic image.
With continuing reference to fig. 2, a flowchart for implementing step S120 in fig. 1 is shown, and for convenience of illustration, only the portions relevant to the present application are shown.
In some optional implementations of the first embodiment of the present application, step S120 specifically includes steps S210, S220, S230, S240, S250, S260, S270, and S280.
In step S210, first scene image data corresponding to a first scene is extracted from the multiple groups of scene image data, and first scene image information is created.
In the embodiments of the present application, since the scene image data are already grouped by the spherical camera devices of the different scenes, when the first scene image needs to be created, the first scene image data corresponding to the first scene can be obtained from the spherical camera device of that scene.
In the embodiments of the present application, after the first scene image data is acquired, a scene image consistent with the scene can be formed by stitching according to the length, width, and height coordinates of the scene light signals.
In step S220, first channel data sent by the user terminal is received.
In this embodiment, the first channel data describes how the first scene connects to the second scene, including a default initial station and an initial view direction.
In step S230, a first channel connection point is created in the first live image information based on the first channel data.
In this embodiment of the application, since the first channel data includes spatial position information, the first channel connection point can be embedded in the first scene image based on that spatial position information, completing its creation.
In step S240, second scene image data corresponding to a second scene is extracted from the multiple groups of scene image data, and second scene image information is created.
In step S250, second channel data sent by the user terminal is received.
In step S260, a second channel connection point is created in the second scene image information based on the second channel data.
In step S270, an association relationship between the first channel connection point and the second channel connection point is established to complete the channel association operation.
In the embodiments of the present application, by establishing the association between the first channel connection point and the second channel connection point, a user can switch freely from one visual scene to the other through the connection points, entering and leaving the two different visual scenes at will.
In step S280, after the creation operations for all groups of scene image data are completed, the scene panoramic image information is obtained.
In practical application, a channel is first created in the first panoramic scene; its target scene is set to the second panoramic scene, and a default initial station and initial view direction are selected at the same time. This channel is named "channel 1". The same operation is then performed in the second panoramic scene: a corresponding channel is created, its target scene is set to the first panoramic scene, and a default initial station and initial view direction are again selected. This channel is named "channel 2". Finally, in either panoramic scene, the newly made channel ("channel 1" or "channel 2") is selected, the "associate channel" operation is clicked, and the corresponding channel made in the other panoramic scene is chosen. After confirming, the association of "channel 1" and "channel 2" is complete.
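The channel workflow described above can be captured in a small data model. This sketch is one possible representation, not the patent's implementation; names such as `Channel`, `Scene`, and `associate` are assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Channel:
    name: str
    target_scene: str        # scene the channel leads into
    start_station: str       # default initial station on arrival
    start_view_deg: float    # default initial view direction
    linked: Optional["Channel"] = None

@dataclass
class Scene:
    name: str
    channels: dict = field(default_factory=dict)

    def create_channel(self, name, target_scene, start_station, start_view_deg):
        ch = Channel(name, target_scene, start_station, start_view_deg)
        self.channels[name] = ch
        return ch

def associate(ch_a: Channel, ch_b: Channel) -> None:
    """The 'associate channel' click: link the two channels symmetrically
    so the user can switch scenes in either direction."""
    ch_a.linked, ch_b.linked = ch_b, ch_a
```

Because the link is symmetric, following a channel in either scene lands the viewer at the partner channel's default station and view direction.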
In some optional implementations of the first embodiment of the present application, the spherical camera equipment consists of at least three optical lens groups, uniformly distributed over the spherical camera equipment, with the focusing direction of each lens group pointing at the center of the sphere.
In some optional implementations of the first embodiment of the present application, the lines connecting the center points of the lenses of the optical lens groups form the largest equilateral figure inscribed in the sphere.
In the embodiments of the present application, the spherical camera equipment integrates n+1 (n ≥ 2) optical lens groups uniformly embedded on the surface of the spherical device. All optical lens groups share one shutter-control sensor, ensuring that they all shoot and collect light signals at the same moment. The lens groups are also manufactured by a uniform process so that parameters such as refractive index are consistent across all lenses and their optical deviation is minimized; the focal point of every lens is thus concentrated at the center of the sphere, i.e., the focal distance of each lens approximately equals the radius of the spherical device.
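The geometric constraint, that every lens focuses on the sphere's center, implies a focal distance equal to the sphere's radius, and the uniform surface layout makes the lens centers equidistant from each other. A quick numerical check for the minimum layout of three lens groups (the radius value is a made-up example; the patent gives no dimensions):

```python
import numpy as np

RADIUS_MM = 60.0  # hypothetical sphere radius

# Three lens groups spaced 120 degrees apart on a great circle; their
# centers form the largest equilateral triangle inscribed in the sphere.
angles = np.radians([0.0, 120.0, 240.0])
lens_centers = RADIUS_MM * np.column_stack(
    [np.cos(angles), np.sin(angles), np.zeros(3)])

# Each lens aims at the center, so its focal distance is its distance
# from the center, which equals the radius.
focal_distances = np.linalg.norm(lens_centers, axis=1)

# Pairwise distances between lens centers are equal (equilateral layout).
side = np.linalg.norm(lens_centers[0] - lens_centers[1])
```

For this layout the side length works out to R times the square root of 3, confirming the equilateral spacing.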
In some optional implementations of the first embodiment of the present application, step S110 further includes: adjusting the white balance parameters of the spherical camera equipment.
In the embodiments of the present application, as research has progressed, standardizing the panoramic image has become a necessary step for ensuring image sharpness. The inventors realized, however, that panoramic scanning imaging depends on many factors, such as the model of the slide scanner, which can cause color differences in the panoramic image.
In the embodiments of the present application, to unify the standard of panoramic scanning imaging, the white balance parameters of the spherical camera equipment can be adjusted. White balance here means that a slide not bearing a sample is placed under the spherical camera equipment in advance, and the color mixing ratio of the panoramic image, i.e., its RGB values, is adjusted against that blank slide until it reads as white. In other words, the standard of panoramic scanning imaging is unified so that the background of the panoramic image is white.
Then, when the spherical camera equipment subsequently performs panoramic scanning imaging, a slide bearing a sample will show the effective image against the retained white background, while a slide not bearing a sample will remain white because no effective image is shown. On this basis, the sharpness of the panoramic image obtained during panoramic scanning imaging can be adjusted against the white reference.
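The blank-reference white balance described above amounts to computing per-channel gains that map the blank capture's average color to white. A minimal sketch, assuming 8-bit RGB (the function names are illustrative, not from the patent):

```python
import numpy as np

def white_balance_gains(blank_ref):
    """Per-channel gains computed from a capture with no sample present.

    Scaling each RGB channel by these gains makes the blank background
    read as neutral white (255, 255, 255).
    """
    channel_means = blank_ref.reshape(-1, 3).mean(axis=0)
    return 255.0 / channel_means

def apply_white_balance(image, gains):
    """Apply the gains to a subsequent capture, round, and clip to 8 bits."""
    balanced = image.astype(np.float64) * gains
    return np.clip(np.rint(balanced), 0, 255).astype(np.uint8)
```

The same gains are then reused for every subsequent capture, so all devices calibrated against their own blank reference produce a common white background.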
In some optional implementations of the first embodiment of the present application, step S110 further includes: performing gamma curve correction on the spherical camera equipment.
The inventors realized that when a light source illuminates a slide and the spherical camera equipment performs scanning imaging, the middle area of the image tends to be brighter than the edge area. To reduce this brightness contrast, the gamma parameter of the spherical camera equipment can be adjusted; that is, gamma curve correction is applied so that the brightness contrast between the edge area and the middle area of the image is relatively reduced.
Further, to unify the standard of panoramic scanning imaging, gamma curve correction of the spherical camera equipment is also required. The correction compensates for differences in color rendering among the spherical camera devices in slide scanners of different models, so that images scanned by different models show the same color rendering.
With this arrangement, after the gamma curve is corrected, the following is achieved: the color of dark-field gray levels is clearly improved, the color error of each gray level is clearly reduced, dark-field color detail is sharp, image color is displayed consistently with good transparency, and brightness contrast is relatively reduced.
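Gamma curve correction is commonly applied through a lookup table. A minimal sketch; the gamma value in the test is an illustrative assumption, since the patent does not specify one:

```python
import numpy as np

def gamma_lut(gamma):
    """8-bit lookup table for the curve out = 255 * (in / 255) ** (1 / gamma).

    gamma > 1 lifts dark tones, which reduces the brightness contrast
    between the darker edge area and the brighter middle of the image
    and improves dark-field gray-level detail.
    """
    x = np.arange(256) / 255.0
    return np.rint(255.0 * np.power(x, 1.0 / gamma)).astype(np.uint8)

def apply_gamma(image, gamma):
    """Apply the correction to a uint8 image via table lookup."""
    return gamma_lut(gamma)[image]
```

Per-device tables built this way can also equalize color rendering across scanner models: each device gets the gamma that maps its measured response onto a shared target curve.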
In summary, the present application provides a teaching case generation method applied to a crime scene, comprising: performing a panoramic capture operation on a crime scene using spherical camera equipment to obtain multiple groups of scene image data; constructing scene panoramic image information corresponding to the crime scene based on the multiple groups of scene image data; receiving target object data sent by a user terminal, the target object data carrying at least spatial position data; embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image; and outputting the case panoramic image to the user terminal. By capturing image data of the actual crime scene and constructing a panoramic image consistent with that scene, the method avoids unrealistic simulation of the real scene. At the same time, an instructor can design a panoramic case script to match the content of a professional course and, by accurately measuring the lengths and areas of key positions in the real panoramic crime scene, precisely locate the three-dimensional spatial points of target traces and evidence in the panorama, which are used to mark suspicious points or the depth information of target objects such as footprints, fingerprints, and DNA data, thereby ensuring the objective authenticity of the crime-scene panoramic image.
Meanwhile, a user can quickly and conveniently record the general appearance and structure of a multi-angle, multi-scene site with the invention. Using a lightweight handheld device, the acquisition time of the multi-scene panoramic equipment is no more than 1 s, and 36 groups of photographs covering the scene from all angles are acquired in a single pass; panoramic images can be acquired at any position and angle within the same scene, and the captured images are processed in real time into stitched panoramas that can be displayed and visualized on a variety of terminals. Using the captured and fixed single-scene data, the user can link independent scenes across different visual ranges through the "channel" function of the terminal system, connecting single scenes in a way that matches the real-world layout and achieving free switching among multiple scenes in different visual ranges; this enables truly immersive real-scene browsing of any scene, which differs from virtual reality and augmented reality technologies.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments may be implemented by a computer program; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Example two
With further reference to fig. 3, as an implementation of the method shown in fig. 1, the present application provides an embodiment of a teaching case generation apparatus applied to a case-issuing site, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus can be applied to various electronic devices.
As shown in fig. 3, the teaching case generating apparatus 100 applied to the case-issuing site according to the second embodiment includes: a panorama acquisition module 110, a panorama construction module 120, a target object receiving module 130, a case image acquisition module 140, and a case image output module 150. Wherein:
the panoramic acquisition module 110 is used for performing panoramic acquisition operation on a case scene based on spherical camera equipment to acquire a plurality of groups of scene image data;
a panorama constructing module 120, configured to construct, based on the multiple sets of live image data, live panoramic image information corresponding to the case scene;
a target object receiving module 130, configured to receive target object data sent by a user terminal, where the target object data at least carries spatial position data;
a case image obtaining module 140, configured to embed the target object data into the panoramic image information based on the spatial position data, so as to obtain a case panoramic image;
a case image output module 150, configured to output the case panoramic image to the user terminal.
In the embodiment of the present application, the spherical imaging apparatus refers to an imaging device that can capture an image in 360 degrees in all directions.
In this embodiment of the application, the panoramic capturing operation refers to spherical imaging devices disposed in different spaces capturing the scene image data of their respective spaces at the same moment.
In the embodiment of the present application, the multiple sets of field image data refer to scene image data respectively collected by the sphere camera devices in the different spaces and scenes, and the scene image data are grouped and distinguished based on the different scenes.
In the embodiment of the application, all the optical lens groups arranged in the spherical imaging devices in the different spaces share a shooting-shutter control sensor, so that shooting and optical-signal collection occur at the same instant. The collected image data therefore have linear consistency in space and time, avoiding the displaced shooting, delayed shooting-and-rendering, and limited-angle imaging of existing approaches.
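The shared-shutter idea above can be sketched as follows. This is a hypothetical illustration, assuming the patent's single shutter control sensor behaves like one trigger event shared by all lens groups; the `SharedShutter` class and its fields are not part of the patent.

```python
import threading
import time

class SharedShutter:
    """One shutter trigger shared by all optical lens groups, so every
    lens records the same capture instant (illustrative sketch only)."""
    def __init__(self, lens_count):
        self.trigger = threading.Event()
        self.timestamps = [None] * lens_count

    def lens_worker(self, lens_id):
        # Each lens group blocks until the shared shutter fires, then
        # records the shared trigger time, not its own wall-clock time.
        self.trigger.wait()
        self.timestamps[lens_id] = self.fire_time

    def fire(self):
        # The trigger time is fixed once, before any worker is released.
        self.fire_time = time.monotonic()
        self.trigger.set()

shutter = SharedShutter(lens_count=4)
workers = [threading.Thread(target=shutter.lens_worker, args=(i,))
           for i in range(4)]
for w in workers:
    w.start()
shutter.fire()
for w in workers:
    w.join()

# All four lens groups share exactly one capture instant.
assert len(set(shutter.timestamps)) == 1
```

Because every worker reads the same `fire_time`, the captured frames carry identical timestamps, which is what gives the image set the spatial-temporal consistency described above.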
In the embodiment of the present application, a case scene may contain a plurality of associated sub-scenes, for example indoor and outdoor scenes.
In the embodiment of the application, the field image data collected by the spherical camera device is distinguished according to different scenes, so that the field image data of the same scene can be spliced into panoramic image information consistent with the scene conveniently.
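The grouping of scene image data by scene can be sketched as a simple dictionary keyed on a scene identifier. The record field names here (`scene_id`, `frame`) are illustrative assumptions, not terms from the patent.

```python
from collections import defaultdict

def group_by_scene(field_images):
    """Group raw capture records by scene identifier so each group
    can later be stitched into one panoramic image of that scene."""
    groups = defaultdict(list)
    for record in field_images:
        groups[record["scene_id"]].append(record)
    return dict(groups)

captures = [
    {"scene_id": "indoor",  "frame": "img_001"},
    {"scene_id": "outdoor", "frame": "img_002"},
    {"scene_id": "indoor",  "frame": "img_003"},
]
grouped = group_by_scene(captures)
# grouped["indoor"] now holds the two indoor frames ready for stitching.
```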
In the embodiment of the present application, the user terminal may be a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), or a navigation device, or a fixed terminal such as a digital TV or a desktop computer. It should be understood that these examples of the user terminal are given only for ease of understanding and are not intended to limit the present invention.
In the embodiment of the application, the target object data is mainly used for marking depth information of suspicious points or target objects, such as footprints, fingerprints, and DNA data, enabling knowledge teaching of related professional courses on a panoramic teaching carrier based on the real case scene.
In practical application, an instructor can design a corresponding panoramic case script according to the content of a professional course lecture, and target traces and three-dimensional evidence points in the panorama are accurately positioned by accurately measuring the lengths and areas of key positions of the real panoramic case-issuing scene.
In the embodiment of the application, the target object data carries the spatial position data, and the target object data is embedded into the corresponding scene image information according to that spatial position data, so that the knowledge content the instructor uses for teaching is attached to the panoramic image. Students can thus study against a realistic scene, effectively improving their practical and analytical abilities.
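Embedding an annotation at a spatial position can be sketched as mapping a viewing direction onto a panorama pixel. The equirectangular mapping and the function/field names below are assumptions for illustration; the patent only states that the target object is embedded "based on the spatial position data".

```python
def embed_annotation(pano_width, pano_height, yaw_deg, pitch_deg, payload):
    """Place a teaching annotation (e.g. a footprint or fingerprint
    marker) at the panorama pixel implied by its viewing direction,
    assuming an equirectangular panorama layout."""
    # Yaw 0..360 degrees maps across the image width;
    # pitch +90 (up) .. -90 (down) maps down the image height.
    x = int((yaw_deg % 360.0) / 360.0 * pano_width)
    y = int((90.0 - pitch_deg) / 180.0 * pano_height)
    return {"x": x, "y": y, "payload": payload}

marker = embed_annotation(3600, 1800, yaw_deg=90.0, pitch_deg=0.0,
                          payload={"type": "footprint", "depth_mm": 4})
# marker → {"x": 900, "y": 900, "payload": {...}}
```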
In an embodiment of the present application, a teaching case generation apparatus applied to a case-issuing scene is provided, including: a panoramic acquisition module for performing a panoramic acquisition operation on a case scene based on a spherical imaging device to obtain a plurality of groups of scene image data; a panorama construction module for constructing scene panoramic image information corresponding to the case scene based on the plurality of groups of scene image data; a target object receiving module for receiving target object data sent by a user terminal, the target object data carrying at least spatial position data; a case image acquisition module for embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image; and a case image output module for outputting the case panoramic image to the user terminal. By collecting image data of the case scene and constructing a panoramic image consistent with that scene, the problem of unrealistic simulation of a real scene is effectively avoided. Meanwhile, an instructor can design a corresponding panoramic case script according to the content of a professional course lecture and, by accurately measuring the lengths and areas of key positions in the real panoramic case scene, accurately locate target traces and three-dimensional evidence points in the panorama, which are used to mark depth information of suspicious points or target objects such as footprints, fingerprints, and DNA data, thereby ensuring the objective authenticity of the case-scene panoramic image.
Continuing to refer to fig. 4, a schematic diagram of the panorama constructing module 120 of fig. 3 is shown, and for ease of illustration, only the portions relevant to the present application are shown.
In some optional implementations of the second embodiment of the present application, the panorama constructing module 120 includes: a first field building submodule 121, a first data receiving submodule 122, a first connection point creating submodule 123, a second field building submodule 124, a second data receiving submodule 125, a second connection point creating submodule 126, a channel associating submodule 127 and a panoramic image obtaining submodule 128. Wherein:
a first scene establishing submodule 121, configured to extract first scene image data corresponding to a first scene from the multiple sets of scene image data, and establish first scene image information;
a first data receiving submodule 122, configured to receive first channel data sent by the user terminal;
a first connection point creation sub-module 123 for creating a first channel connection point in the first live image information based on the first channel data;
a second scene establishing sub-module 124, configured to extract second scene image data corresponding to a second scene from the plurality of groups of scene image data, and establish second scene image information;
a second data receiving submodule 125, configured to receive second channel data sent by the user terminal;
a second connection point creation sub-module 126 for creating a second channel connection point in the second live image information based on the second channel data;
the channel association submodule 127 is configured to establish an association relationship between the first channel connection point and the second channel connection point, so as to complete a channel association operation;
and a panoramic image obtaining sub-module 128, configured to obtain the on-site panoramic image information after completing the creation operation on the multiple sets of on-site image data.
In the embodiment of the present application, since the scene image data are already grouped by the spherical imaging devices of the different scenes, when the first scene image needs to be created, the first scene image data corresponding to the first scene can be obtained from the spherical imaging device of that scene.
In the embodiment of the present application, after the first scene image data is acquired, a scene image consistent with the real scene can be formed by stitching according to the length, width, and height coordinates of the scene optical signals.
In this embodiment, the first channel data refers to a default initial station and an initial view direction in which the first scene is communicated with the second scene.
In this embodiment of the application, since the first channel data includes spatial position information, in the first scene image, the first channel connection point may be embedded in the first scene image based on the spatial position information, so as to complete the creation of the first channel connection point.
In the embodiment of the application, by establishing the association relationship between the first channel connection point and the second channel connection point, a user can freely switch from one visual scene to another visual scene through the channel connection point, and the user can freely enter and exit two different visual scenes.
In practical application, a channel is first created in the first panoramic scene: the target scene of the channel is selected as the second panoramic scene, a default initial station and initial view direction are chosen, and the channel is named "channel 1". The same operation is then performed in the second panoramic scene: a corresponding channel is created with the first panoramic scene as its target, again with a default initial station and initial view direction, and this channel is named "channel 2". Finally, in either of the above panoramic scenes, the created channel ("channel 1" or "channel 2") is selected, the "associate channel" operation is clicked, and the corresponding channel created in the other panoramic scene is chosen; after confirming, the association of "channel 1" and "channel 2" is complete.
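The channel creation and association workflow can be sketched as a small data structure. The class and field names below are illustrative assumptions, not part of the patent:

```python
class PanoScene:
    """A panoramic scene holding named channel connection points."""
    def __init__(self, name):
        self.name = name
        self.channels = {}

def create_channel(scene, channel_name, target_scene, start_view):
    # A channel records its target scene plus the default initial
    # station / view direction chosen by the user.
    scene.channels[channel_name] = {
        "target": target_scene.name,
        "start_view": start_view,   # (yaw_deg, pitch_deg), illustrative
        "linked_to": None,
    }

def associate(scene_a, name_a, scene_b, name_b):
    # Link the two channel connection points in both directions so the
    # viewer can switch freely between the two scenes.
    scene_a.channels[name_a]["linked_to"] = (scene_b.name, name_b)
    scene_b.channels[name_b]["linked_to"] = (scene_a.name, name_a)

first = PanoScene("first")
second = PanoScene("second")
create_channel(first, "channel 1", second, start_view=(0.0, 0.0))
create_channel(second, "channel 2", first, start_view=(180.0, 0.0))
associate(first, "channel 1", second, "channel 2")
```

After `associate`, following `linked_to` from either scene lands in the other, mirroring the free two-way switching the patent describes.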
In some optional implementation manners of the second embodiment of the present application, the spherical image capturing apparatus is composed of at least 3 optical lens groups, the optical lens groups are uniformly distributed inside the spherical image capturing apparatus, and the focusing directions of the optical lens groups all point to the center of the sphere of the spherical image capturing apparatus.
In some optional implementations of the second embodiment of the present application, the central points of the lenses of the plurality of optical lens groups are connected to form the largest equilateral polygon or equilateral polyhedron that can be inscribed in the sphere.
In the embodiment of the application, the spherical imaging device integrates n + 1 (n ≥ 2) optical lens groups uniformly embedded in the surface of the spherical device. All optical lens groups of the spherical imaging device share a single shooting-shutter control sensor, ensuring that every lens group shoots and collects optical signals at the same instant. Meanwhile, the manufacturing process of each lens group is uniform, so that parameters such as refractive index are consistent across all lenses and the optical deviation of all lenses is minimized; this ensures that the focal point of every optical lens is concentrated at the center of the spherical device, i.e. the focal distance of each optical lens is approximately equal to the radius of the sphere.
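The geometric constraint above can be checked numerically. The sketch below assumes n + 1 = 4 lens groups mounted at the vertices of a regular tetrahedron inscribed in the sphere (one possible "largest equilateral solid"); the radius value is illustrative.

```python
import math

RADIUS = 0.05  # sphere radius in metres (illustrative value)

# Four mount positions at the vertices of a regular tetrahedron
# inscribed in the sphere, scaled so each vertex lies on the surface.
raw = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
norm = math.sqrt(3)
lenses = [tuple(RADIUS * c / norm for c in v) for v in raw]

# Each lens points from its mount position toward the sphere centre,
# so its focal distance should approximately equal the sphere radius.
for pos in lenses:
    dist_to_centre = math.sqrt(sum(c * c for c in pos))
    assert abs(dist_to_centre - RADIUS) < 1e-12
```

The same check generalizes to any uniform arrangement: as long as the mounts sit on the sphere surface and aim at the centre, the focal distance equals the radius.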
In some optional implementations of the second embodiment of the present application, the panorama acquiring module 110 includes: and a parameter adjusting submodule. Wherein:
and the parameter adjusting submodule is used for adjusting the white balance parameter of the spherical camera equipment.
In the embodiments of the present application, as research has progressed, standardization of panoramic images has become necessary for ensuring image clarity. However, the inventors realized that panoramic scan imaging is affected by many factors, such as the model of the slide scanner, which can cause color differences in the panoramic image.
In the embodiment of the present application, in order to unify the standard of panoramic scan imaging, the white balance parameter of the spherical imaging device may be adjusted. White balance here means that a slide not carrying a sample is first placed under the spherical imaging device, and the color mixing ratio of the panoramic image — its RGB values — is then adjusted against this blank slide so that the imaged background becomes white. That is, the standard of panoramic scan imaging is unified such that the background of the panoramic image is white.
Then, when the spherical imaging device subsequently performs panoramic scan imaging, a slide bearing a sample will show the effective image against a white background, while a slide not bearing a sample will simply remain white, since no effective image is present. On this basis, the sharpness of the panoramic image obtained by the spherical imaging device can be adjusted against the white reference.
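The white-balance adjustment against a blank slide can be sketched as deriving per-channel gains. All values and function names below are illustrative; the patent only states that RGB mixing ratios are adjusted using a sample-free slide.

```python
def white_balance_gains(blank_slide_rgb, target=255.0):
    """Derive per-channel gains from a blank (sample-free) slide so
    that its average colour becomes neutral white."""
    return tuple(target / c for c in blank_slide_rgb)

def apply_gains(pixel, gains):
    # Scale each channel, clamping to the 8-bit maximum.
    return tuple(min(255.0, p * g) for p, g in zip(pixel, gains))

# A blank slide that scans slightly warm (too much red, too little blue).
gains = white_balance_gains((250.0, 240.0, 230.0))
corrected = apply_gains((250.0, 240.0, 230.0), gains)
# The blank-slide colour is pulled to pure white (255, 255, 255).
```

Once calibrated, the same gains are applied to every subsequent scan, so sample-free regions render as a white background across scanner models.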
In some optional implementations of the second embodiment of the present application, the panorama acquiring module 110 includes: and a correction processing submodule. Wherein:
and the correction processing submodule is used for carrying out gamma curve correction processing on the spherical camera equipment.
The inventors realized that when a light source illuminates a slide, the middle area of the image tends to be brighter than its edge area during scan imaging by the spherical imaging device. To reduce this brightness contrast, the gamma parameter of the spherical imaging device may be adjusted; that is, the device is subjected to gamma curve correction so that the brightness contrast between the edge area and the middle area of the image is relatively reduced.
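A minimal gamma-curve sketch, assuming intensities normalised to [0, 1] and a gamma exponent below 1 to lift dark regions (the specific exponent 0.6 is an illustrative assumption, not a value from the patent):

```python
def gamma_correct(value, gamma):
    """Map a normalised intensity in [0, 1] through a gamma curve.
    gamma < 1 lifts dark-field greys, shrinking the brightness gap
    between the bright centre and the darker edges of a scan."""
    return value ** gamma

dark_edge = gamma_correct(0.2, 0.6)      # lifted well above 0.2
bright_centre = gamma_correct(0.9, 0.6)  # barely changed
# The dark edge rises proportionally more than the bright centre,
# so the contrast between them is reduced.
```

This matches the stated effects of the correction: dark-field gray levels are raised while overall brightness contrast is relatively reduced.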
Further, in order to be able to unify the standards of panoramic scan imaging, gamma curve correction processing is also required for the spherical imaging apparatus. The gamma curve correction can be used for compensating the color display difference of images when the spherical camera devices in the slide scanners of different models scan and image, so that the images can show the same color display effect when the spherical camera devices in the slide scanners of different models scan and image.
By such an arrangement, after the gamma curve is corrected, the following purposes can be achieved: the color of the gray scale of the dark field is obviously improved, the color error of each gray scale is obviously reduced, the color detail of the dark field is clear, the image color is displayed consistently, the transparency is good, and the brightness contrast is relatively reduced.
To sum up, the present application provides a teaching case generation apparatus applied to a case-issuing scene, including: a panoramic acquisition module for performing a panoramic acquisition operation on a case scene based on a spherical imaging device to obtain a plurality of groups of scene image data; a panorama construction module for constructing scene panoramic image information corresponding to the case scene based on the plurality of groups of scene image data; a target object receiving module for receiving target object data sent by a user terminal, the target object data carrying at least spatial position data; a case image acquisition module for embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image; and a case image output module for outputting the case panoramic image to the user terminal. By collecting image data of the case scene and constructing a panoramic image consistent with that scene, the problem of unrealistic simulation of a real scene is effectively avoided. Meanwhile, an instructor can design a corresponding panoramic case script according to the content of a professional course lecture and, by accurately measuring the lengths and areas of key positions in the real panoramic case scene, accurately locate target traces and three-dimensional evidence points in the panorama, which are used to mark depth information of suspicious points or target objects such as footprints, fingerprints, and DNA data, thereby ensuring the objective authenticity of the case-scene panoramic image.
Meanwhile, a user can quickly and conveniently record the general appearance and structure of a multi-angle, multi-scene site with the invention. Using a lightweight handheld device, the acquisition time of the multi-scene panoramic equipment is no more than 1 s, and 36 groups of photographs covering the scene from all angles are acquired in a single pass; panoramic images can be acquired at any position and angle within the same scene, and the captured images are processed in real time into stitched panoramas that can be displayed and visualized on a variety of terminals. Using the captured and fixed single-scene data, the user can link independent scenes across different visual ranges through the "channel" function of the terminal system, connecting single scenes in a way that matches the real-world layout and achieving free switching among multiple scenes in different visual ranges; this enables truly immersive real-scene browsing of any scene, which differs from virtual reality and augmented reality technologies.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 5, fig. 5 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 200 includes a memory 210, a processor 220, and a network interface 230 communicatively coupled to each other via a system bus. It is noted that only a computer device 200 having components 210-230 is shown, but it should be understood that not all of the illustrated components are required, and that more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), embedded devices, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 210 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the storage 210 may be an internal storage unit of the computer device 200, such as a hard disk or a memory of the computer device 200. In other embodiments, the memory 210 may also be an external storage device of the computer device 200, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 200. Of course, the memory 210 may also include both internal and external storage devices of the computer device 200. In this embodiment, the memory 210 is generally used for storing an operating system installed in the computer device 200 and various application software, such as program codes of a teaching case generation method applied to a case-taking site. In addition, the memory 210 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 220 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 220 is generally operative to control overall operation of the computer device 200. In this embodiment, the processor 220 is configured to run the program code stored in the memory 210 or process data, for example, run the program code of the teaching case generation method applied to the case scene.
The network interface 230 may include a wireless network interface or a wired network interface, and the network interface 230 is generally used to establish a communication connection between the computer device 200 and other electronic devices.
The present application further provides another embodiment, which is to provide a computer-readable storage medium storing a teaching case generation program applied to a case scene, where the teaching case generation program applied to the case scene is executable by at least one processor to cause the at least one processor to perform the steps of the teaching case generation method applied to the case scene.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, not all, of the possible embodiments of this application, and that the appended drawings illustrate preferred embodiments without limiting the scope of the invention. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of their features. All equivalent structures made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. A teaching case generation method applied to a case issuing field is characterized by comprising the following steps:
panoramic acquisition operation is carried out on a case scene based on spherical camera equipment, and a plurality of groups of scene image data are obtained;
constructing site panoramic image information corresponding to the case site based on the plurality of groups of site image data;
receiving target object data sent by a user terminal, wherein the target object data at least carries spatial position data;
embedding the target object data into the panoramic image information based on the spatial position data to obtain a case panoramic image;
outputting the case panoramic image to the user terminal.
2. The method for generating the teaching case applied to the case scene as claimed in claim 1, wherein the step of constructing the scene panoramic image information corresponding to the case scene based on the plurality of groups of scene image data specifically comprises the following steps:
extracting first field image data corresponding to a first field from the plurality of groups of field image data, and establishing first field image information;
receiving first channel data sent by the user terminal;
creating a first channel connection point in the first live image information based on the first channel data;
extracting second field image data corresponding to a second field from the plurality of groups of field image data, and establishing second field image information;
receiving second channel data sent by the user terminal;
creating a second channel connection point in the second live image information based on the second channel data;
establishing an association relation between the first channel connection point and the second channel connection point to complete channel association operation;
and when the creation operation of the plurality of groups of field image data is completed, obtaining the field panoramic image information.
3. The method as claimed in claim 1, wherein the spherical camera device comprises at least 3 optical lens groups, the optical lens groups are uniformly distributed inside the spherical camera device, and the focusing directions of the optical lens groups are all directed to the center of the spherical camera device.
4. The method as claimed in claim 3, wherein the central points of the lenses of the plurality of optical lens groups are connected to form the largest equilateral polygon or equilateral polyhedron inscribed in the sphere.
5. The method for generating the teaching case applied to the case-taking scene as claimed in claim 1, wherein the step of performing the panoramic acquisition operation on the case-taking scene based on the spherical camera device to obtain the plurality of groups of scene image data comprises the following steps:
adjusting a white balance parameter of the sphere photography apparatus.
6. The method for generating the teaching case applied to the case-taking scene as claimed in claim 1, wherein the step of performing the panoramic acquisition operation on the case-taking scene based on the spherical camera device to obtain the plurality of groups of scene image data comprises the following steps:
and performing gamma curve correction processing on the sphere imaging equipment.
7. An apparatus for generating teaching cases for use in a case-issuing field, the apparatus comprising:
the panoramic acquisition module is used for carrying out panoramic acquisition operation on a case scene based on the spherical camera equipment to acquire a plurality of groups of scene image data;
the panoramic construction module is used for constructing field panoramic image information corresponding to the case scene based on the plurality of groups of field image data;
the target object receiving module is used for receiving target object data sent by a user terminal, wherein the target object data at least carries spatial position data;
a case image acquisition module, configured to embed the target object data into the panoramic image information based on the spatial position data, so as to obtain a case panoramic image;
and the case image output module is used for outputting the case panoramic image to the user terminal.
8. The apparatus as claimed in claim 7, wherein the panoramic construction module comprises:
a first scene establishing submodule, configured to extract first scene image data corresponding to a first scene from the plurality of groups of scene image data and establish first scene image information;
a first data receiving submodule, configured to receive first channel data sent by the user terminal;
a first connection point creation submodule, configured to create a first channel connection point in the first scene image information based on the first channel data;
a second scene establishing submodule, configured to extract second scene image data corresponding to a second scene from the plurality of groups of scene image data and establish second scene image information;
a second data receiving submodule, configured to receive second channel data sent by the user terminal;
a second connection point creation submodule, configured to create a second channel connection point in the second scene image information based on the second channel data;
a channel association submodule, configured to establish an association relationship between the first channel connection point and the second channel connection point to complete a channel association operation;
a panoramic image acquisition submodule, configured to acquire the scene panoramic image information after the creation operations for the plurality of groups of scene image data are completed.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the teaching case generation method applied to a case scene as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the teaching case generation method applied to a case scene as claimed in any one of claims 1 to 7.
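Claims 7 and 8 describe embedding a target object at a spatial position in a scene's panorama and linking two scenes through paired channel connection points. A minimal data-structure sketch of that relationship (all class and field names here are hypothetical, not the patent's implementation; spatial positions are simplified to yaw/pitch pairs) could be:

```python
from dataclasses import dataclass, field

@dataclass
class TargetObject:
    """Annotation sent from the user terminal; (yaw, pitch) is its spatial position."""
    label: str
    yaw: float
    pitch: float

@dataclass
class ScenePanorama:
    """Panoramic image information for one scene of the case site."""
    name: str
    targets: list = field(default_factory=list)
    connections: dict = field(default_factory=dict)  # connection point -> linked scene name

    def embed_target(self, target: TargetObject) -> None:
        # Embed the target object at its spatial position in this panorama.
        self.targets.append(target)

def link_scenes(a: ScenePanorama, b: ScenePanorama, point_a: tuple, point_b: tuple) -> None:
    """Create a bidirectional channel association between two scenes' connection points."""
    a.connections[point_a] = b.name
    b.connections[point_b] = a.name
```

Navigating from one scene's connection point to the associated scene is then a dictionary lookup, which matches the claim's "association relationship" between the two channel connection points.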
CN202010963137.2A 2020-09-14 2020-09-14 Teaching case generation method and device, computer equipment and storage medium Pending CN112087577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010963137.2A CN112087577A (en) 2020-09-14 2020-09-14 Teaching case generation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112087577A true CN112087577A (en) 2020-12-15

Family

ID=73736741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010963137.2A Pending CN112087577A (en) 2020-09-14 2020-09-14 Teaching case generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112087577A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201780606U * 2010-06-08 2011-03-30 上海市刑事科学技术研究所 Scene three-dimensional reproduction device
US20170195561A1 (en) * 2016-01-05 2017-07-06 360fly, Inc. Automated processing of panoramic video content using machine learning techniques
CN110738895A (en) * 2019-09-16 2020-01-31 潘涛 Criminal scene investigation real-scene VR comprehensive training system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JSHSOFT2013: "Criminal Case Scene Panoramic Reproduction and Analysis ***", Baidu Wenku *
飞揪Q9776: "Case Scene Panoramic Reproduction and Analysis Demonstration ***", Baidu Wenku *

Similar Documents

Publication Publication Date Title
CN105046213B (en) A kind of method of augmented reality
US11302118B2 (en) Method and apparatus for generating negative sample of face recognition, and computer device
US10970938B2 (en) Method and apparatus for generating 3D information
CN111612842B (en) Method and device for generating pose estimation model
CN109120854B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111228821B (en) Method, device and equipment for intelligently detecting wall-penetrating plug-in and storage medium thereof
CN107256082B (en) Throwing object trajectory measuring and calculating system based on network integration and binocular vision technology
CN111508033A (en) Camera parameter determination method, image processing method, storage medium, and electronic apparatus
CN109523499A (en) A kind of multi-source fusion full-view modeling method based on crowdsourcing
CN112561973A (en) Method and device for training image registration model and electronic equipment
CN113724391A (en) Three-dimensional model construction method and device, electronic equipment and computer readable medium
KR101360999B1 (en) Real time data providing method and system based on augmented reality and portable terminal using the same
CN114358112A (en) Video fusion method, computer program product, client and storage medium
CN112381118B (en) College dance examination evaluation method and device
CN112102481A (en) Method and device for constructing interactive simulation scene, computer equipment and storage medium
WO2023217138A1 (en) Parameter configuration method and apparatus, device, storage medium and product
CN112087577A (en) Teaching case generation method and device, computer equipment and storage medium
CN116708862A (en) Virtual background generation method for live broadcasting room, computer equipment and storage medium
CN112287945A (en) Screen fragmentation determination method and device, computer equipment and computer readable storage medium
CN114419293B (en) Augmented reality data processing method, device and equipment
Wang et al. Enhancing visualisation of anatomical presentation and education using marker-based augmented reality technology on web-based platform
CN112087578A (en) Cross-region evidence collection method and device, computer equipment and storage medium
CN109472821A (en) Depth estimation method, device, equipment and storage medium
CN112291445B (en) Image processing method, device, equipment and storage medium
CN116601661A (en) Generating an evaluation mask for multi-factor authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Room 403, building 4, Shenzhen software industry base, No. 19, 17 and 18, Haitian 1st Road, Binhai community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen man machine consensus Technology Co.,Ltd.

Address before: 30A, building 5, building 1-5, Huating, modern city, No. 17, Nanguang Road, Nanshan street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Mobile Internet Research Institute Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20201215