CN108109197A - An image processing modeling method - Google Patents

An image processing modeling method

Info

Publication number
CN108109197A
CN108109197A (application CN201711350936.7A)
Authority
CN
China
Prior art keywords
target object
image
angle
modeling method
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711350936.7A
Other languages
Chinese (zh)
Other versions
CN108109197B (en)
Inventor
吴秋红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhong Rui Hua Xin Information Technology Co Ltd
Original Assignee
Beijing Zhong Rui Hua Xin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhong Rui Hua Xin Information Technology Co Ltd filed Critical Beijing Zhong Rui Hua Xin Information Technology Co Ltd
Priority to CN201711350936.7A priority Critical patent/CN108109197B/en
Publication of CN108109197A publication Critical patent/CN108109197A/en
Application granted granted Critical
Publication of CN108109197B publication Critical patent/CN108109197B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing modeling method comprising the following steps: S1, acquiring video of a target object; S2, performing edge analysis on each frame of the video, identifying the edge contour of the target object, and marking the shooting angle of each frame, thereby forming contour information of the target object at different angles; S3, performing simulated-rotation modeling in a virtual 3D space on the contour information of the different angles generated in step S2 to form a 3D model. The provided image processing modeling method analyzes and identifies the target object in each frame of the captured video; the recognition algorithm can vary, and existing open-source recognition algorithms can be used to process the images. The method has a small computational load and is therefore suitable for intelligent terminals with limited processing power, such as mobile phones and tablets.

Description

An image processing modeling method
Technical field
The invention belongs to the technical field of 3D modeling, and in particular relates to an image processing modeling method.
Background art
Image-based modeling refers to the technology of capturing photos of an object with devices such as cameras and then performing graphic/image processing and three-dimensional computation on a computer, so as to automatically generate a three-dimensional model of the photographed object. It belongs to the field of three-dimensional reconstruction and involves disciplines such as computational geometry, computer graphics, computer vision, image processing, and mathematical computation.
Long-term tracking of related technical fields at home and abroad shows that institutions such as Microsoft, Autodesk, Stanford University, and the Massachusetts Institute of Technology have achieved good research results in fast image-based reconstruction of three-dimensional objects, but these remain laboratory results that cannot yet be commercialized. Microsoft once offered an online image-based three-dimensional reconstruction service, but because user traffic was heavy and the technology was immature, it could not sustain the load and soon shut down the corresponding servers. The Canadian company FOTO3D and others currently market image-based three-dimensional reconstruction systems, but they require a great deal of manual interaction and place quite high demands on the shooting environment and shooting precision, so adoption has been limited.
Moreover, conventional image processing modeling methods are cumbersome and computationally heavy, and therefore cannot be applied on devices with weak computing power such as mobile phones and tablets.
Summary of the invention
The present invention aims to solve the above problems by providing an image processing modeling method with a low computational load.
In order to solve the above technical problems, the technical scheme of the invention is an image processing modeling method comprising the following steps:
S1, acquiring video of a target object;
S2, performing edge analysis on each frame of the video, identifying the edge contour of the target object, and marking the shooting angle of each frame, thereby forming contour information of the target object at different angles;
S3, performing simulated-rotation modeling in a virtual 3D space on the contour information of the different angles generated in step S2 to form a 3D model.
By analyzing and identifying the target object in each frame of the captured video, the recognition algorithm can vary, and existing open-source recognition algorithms can be used to process the images.
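The three-step pipeline above can be sketched as follows. This is a minimal, self-contained illustration, not the patent's implementation: the contour step here is a simple threshold plus 4-neighbour edge test standing in for the sharpening and gradient analysis described later, and the angle labels are assumed to be supplied per frame.

```python
import numpy as np

def extract_contour(frame, threshold=128):
    """Binarize a grayscale frame and return coordinates of edge pixels.

    A stand-in for the patent's edge-analysis step (S2); a real
    implementation would use sharpening plus gradient thresholds.
    """
    binary = np.where(frame > threshold, 255, 0).astype(np.uint8)
    # Edge = foreground pixels with at least one background 4-neighbour.
    padded = np.pad(binary, 1, constant_values=0)
    core = padded[1:-1, 1:-1]
    neigh_min = np.minimum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                                   padded[1:-1, :-2], padded[1:-1, 2:]])
    edge = (core == 255) & (neigh_min == 0)
    return np.argwhere(edge)

def build_model(frames, angles):
    """Pair each frame's 2D contour with its labelled shooting angle (S2->S3)."""
    return [(a, extract_contour(f)) for f, a in zip(frames, angles)]

# Synthetic example: a bright square on a dark background, "seen" at two angles.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:6, 2:6] = 200
model = build_model([frame, frame], [0.0, 90.0])
```

The list of (angle, contour) pairs is exactly the intermediate product that step S3 would rotate in virtual 3D space.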
Preferably, step S2 includes the following sub-steps:
S21, performing brightness recognition on each frame and calculating the brightness mean and dispersion;
S22, performing edge sharpening and binarization on the image to obtain a binary grayscale map;
S23, correcting the binary grayscale map:
S231, performing continuity correction of the boundary using the information of the image itself, excluding the influence of singular points and noise;
S232, performing boundary continuity correction on the current frame using supplementary data from the preceding and following frames.
Preferably, step S231 includes: performing nearby direction detection at a discontinuous singular point, selecting the singular point that best matches in distance and direction for connection, and marking it in the binary grayscale map:
For pixels P and P', the distance and direction are computed; likewise, a trace (Δ0…Δn) from point P toward each point (P0…Pn) in the connected direction is obtained, singular-point fitting is performed according to the directions of the Δ sequence, and the most suitable connection point is finally determined.
Preferably, the corrected region marked in the current frame is compared with the preceding and following frames; if those frames are continuous there, approximate matching is performed according to their continuity.
Preferably, step S3 includes the following sub-steps:
S31, selecting feature points with fixed relative positions on the target object as angle-rotation reference points;
S32, calculating the tilt angle, relative position, and relative angle of the target object from the changes in the relative positions of the chosen reference points, and judging the angle change of the current image boundary contour in 2D space;
S33, performing three-dimensional perspective reduction correction on the change sequence of the reference points in each frame to obtain the true rotation angle of the target object, using it as the 3D contour of the current frame boundary, marking the 3D positions of the boundary in the 2D image, and completing the 3D model of the target object;
S34, if the standing target object is shot with a camera terminal in a moving-shot manner, recording for each frame the data of the camera terminal's acceleration sensor, inertial sensor, and magnetic sensor, performing angle analysis of the target object according to these data so as to obtain 2D contours of the target object at different angles, and then synthesizing the 3D model.
Preferably, step S3 includes the following sub-steps:
S31, choosing a fixed reference object beside the target object and then selecting feature points on the reference object to generate a reference vector;
S32, labeling the angle of the current frame through the angular relationship between the marker-point vector on the target object and the reference vector, generating one frame of 2D contour data carrying angle information; after all 360 degrees of contour data have been analyzed, synthesizing the 3D model of the target object.
Preferably, after step S3 the method further includes:
S4, refining and correcting the details of the 3D model.
Preferably, in step S4, when the target object is a human body, the direction of the bones and the joint positions are determined using a median computation method.
Preferably, step S4 includes: first shooting a reference standard object with a camera terminal, then comparing the acquired image data at all angles with the reference standard data to obtain the characteristics and scaling ratios of the spherical distortion of that camera terminal; accurately measuring the various camera terminals that may capture video of the target object, and building a correction model database from the obtained spherical-distortion characteristics and scaling ratios of each camera terminal. After a user shoots with one of the known camera terminals, and before the 3D model is generated, the video image is first corrected with the distortion model looked up in the correction model database, and model recognition is performed after the image is corrected.
Preferably, step S4 includes: directly correcting local dimensions of the 3D model.
The beneficial effect of the invention is that the provided image processing modeling method analyzes and identifies the target object in each frame of the captured video; the recognition algorithm can vary, and existing open-source recognition algorithms can be used to process the images. The method has a small computational load and is suitable for intelligent terminals with limited processing power, such as mobile phones and tablets.
Brief description of the drawings
Fig. 1 is a schematic diagram of a human leg in the straight and bent states according to the present invention.
Detailed description of embodiments
The present invention is further described below with reference to the drawings and specific embodiments:
Embodiment one
The image processing modeling method provided in this embodiment comprises the following steps:
S1, acquiring video of the target object through a camera terminal; the camera terminal can be an electronic device such as a mobile phone or tablet.
S2, performing edge analysis on each frame of the video, identifying the edge contour of the target object, and marking the shooting angle of each frame, thereby forming contour information of the target object at different angles.
Step S2 includes the following sub-steps:
S21, performing brightness recognition on each frame and calculating the brightness mean and dispersion;
In order to obtain a better recognition result, the overall quality of the image must first be assessed, so that basic parameters and boundary conditions can be set for subsequent algorithms. Brightness recognition (L0…Ln) is first performed on the video key frames using image processing; the brightness mean and dispersion are then calculated by a weighted-average method.
Ln is the overall brightness value of the n-th frame image. It can be computed linearly as the average gray level, i.e., the average of the RGB color values of each pixel in the frame is summed and divided by Z, the number of pixels.
B serves as the final recognition-result parameter and is obtained from the raw weighted brightness value B0 as B = a0 + a'·B0; with a0 = 0 and a' = 1, the raw initial value B0 is obtained. a0 is a manually adjusted correction parameter and a' is a recommended coefficient; in the typical case without manual intervention, a0 = 0. The overall gray level of the image or video can also be adjusted according to the needs of the application by adjusting a0, which is only reflected in the image the user sees in advance. The value of a' lies between 0.7 and 1.3.
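The brightness statistics of step S21 can be sketched as below. The exact weighting formula is an image omitted from the source, so the linear form B = a0 + a'·mean(L) is an assumption that matches the surrounding description (a0 = manual offset, a' in [0.7, 1.3], defaults a0 = 0, a' = 1 yielding the raw value B0); the dispersion is taken as the standard deviation of the per-frame brightness values.

```python
import numpy as np

def frame_brightness(frame_rgb):
    """Per-frame overall brightness L_n: the mean over all Z pixels of the
    per-pixel RGB average, as the description states."""
    per_pixel = frame_rgb.mean(axis=2)   # average of R, G, B per pixel
    return per_pixel.mean()              # sum divided by Z pixels

def brightness_stats(frames, a0=0.0, a_prime=1.0):
    """Weighted brightness B = a0 + a' * mean(L_0..L_n) and the dispersion
    (standard deviation) of the per-frame values -- an assumed linear form."""
    L = np.array([frame_brightness(f) for f in frames])
    B = a0 + a_prime * L.mean()
    return B, L.std()

# Two uniform RGB frames with brightness 60 and 100.
frames = [np.full((4, 4, 3), v, dtype=np.float64) for v in (60.0, 100.0)]
B, disp = brightness_stats(frames)
```

With the defaults this returns the raw value B0 = 80.0 and dispersion 20.0 for the two synthetic frames.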
S22, performing edge sharpening and binarization on the image to obtain a binary grayscale map;
Edge sharpening and binarization are performed on the image using high-pass filtering and the spatial-domain differential method (pixels above the threshold are set to 255, pixels below it to 0) to achieve extreme edge recognition. Then, in the sharpened map of each frame, a comparison against the previous brightness dispersion weight B forms the binary grayscale map:
that is, g(x, y) = 255 where the gradient G[f(x, y)] exceeds the threshold and 0 otherwise, where g(x, y) represents the gray level (or RGB component) of image point f(x, y), and G[f(x, y)] is the gradient of image point f(x, y).
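A minimal sketch of the spatial-domain differential binarization just described, assuming a simple |∇f| estimate from forward differences (the patent's exact filter kernel is not given):

```python
import numpy as np

def binarize_edges(image, threshold):
    """Spatial-domain differential edge map followed by hard binarization:
    g(x, y) = 255 where the gradient magnitude G[f(x, y)] exceeds the
    threshold, 0 otherwise -- the rule stated in the description."""
    f = image.astype(np.float64)
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))  # horizontal difference
    gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))  # vertical difference
    grad = gx + gy                                     # simple |grad f| estimate
    return np.where(grad > threshold, 255, 0).astype(np.uint8)

img = np.zeros((4, 6))
img[:, 3:] = 200          # vertical step edge between columns 2 and 3
edges = binarize_edges(img, threshold=100)
```

The step edge shows up as a single column of 255 values in the binary grayscale map, while flat regions stay 0.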
S23, correcting the binary grayscale map:
Because of noise or quality problems of the image itself, the sharpened binary grayscale map may be locally discontinuous or locally unclear. For this reason, the binary grayscale map is corrected in two stages in this embodiment:
S231, performing continuity correction of the boundary using the information of the image itself:
Direction detection is performed near a discontinuous singular point, and the singular point that best matches in distance and direction is selected for connection and marked in the binary grayscale map:
For pixels P and P', the distance and direction are computed; likewise, a trace (Δ0…Δn) from point P toward each point (P0…Pn) in the connected direction is obtained, singular-point fitting is performed according to the directions of the Δ sequence, and the most suitable connection point is finally determined.
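The distance-plus-direction matching for singular points can be sketched as a weighted cost over candidate endpoints. The cost weights here are illustrative assumptions, not taken from the patent:

```python
import math

def best_connection(singular, direction, candidates, w_dist=1.0, w_ang=10.0):
    """Pick the candidate endpoint that best matches a discontinuous
    (singular) boundary point in both distance and traced direction.

    `direction` is the boundary's local heading at the singular point,
    estimated from the trace (Δ0…Δn) leading up to it.
    """
    sx, sy = singular
    best, best_cost = None, float("inf")
    for cx, cy in candidates:
        dist = math.hypot(cx - sx, cy - sy)
        ang = math.atan2(cy - sy, cx - sx)
        # Smallest absolute angular difference to the expected direction.
        dang = abs((ang - direction + math.pi) % (2 * math.pi) - math.pi)
        cost = w_dist * dist + w_ang * dang
        if cost < best_cost:
            best, best_cost = (cx, cy), cost
    return best

# Boundary heading right (direction 0): the point ahead wins over a nearer
# point lying directly behind the trace.
chosen = best_connection((0, 0), 0.0, [(3, 0), (-1, 0)])
```

Weighting direction heavily keeps the fitted boundary from folding back on itself at noise-induced breaks.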
S232, performing boundary continuity correction on the current frame using supplementary data from the preceding and following frames:
The corrected region marked in the current frame is compared with the preceding and following frames; if those frames are continuous there, approximate matching is performed according to their continuity. The matching value can be obtained through similarity analysis against the unmarked, already-corrected boundary regions of the current frame.
S3, performing simulated-rotation modeling in a virtual 3D space on the contour information of the different angles generated in step S2 to form a 3D model. Step S3 includes the following sub-steps:
S31, selecting feature points with fixed relative positions on the target object as angle-rotation reference points. A feature point can be an inflection point on the outer contour of the object. At least three feature points are used — for example, color dots placed in advance for convenient positioning and recognition, the sharp corner of a cube, a person's ears, or fixed stitch points on clothing.
S32, calculating the tilt angle, relative position, and relative angle of the target object from the changes in the relative positions of the chosen reference points, and judging the angle change of the current image boundary contour in 2D space;
Δθ is the angle change of the target object; the two direction vectors are those successively formed by the two reference points before and after the target object rotates.
S33, to recover the 3D field, three-dimensional perspective reduction correction is performed on the change sequence of the reference points in each frame to obtain the true rotation angle of the target object, which serves as the 3D contour of the current frame boundary; the 3D positions of the boundary in the 2D image are marked, and finally all 2D contours are synthesized to complete the 3D model of the target object.
In addition, if the dimension between specified points on the target object is given, the system can deduce the dimensions of the full model from the concrete meaning of this dimension in the actual 3D model, thereby forming a 3D model of the target object closer to real size. For example, a calibration of a person's height can help deduce the sizes of other body parts, such as arm length and chest, waist, and hip measurements.
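The size deduction just described is, at its simplest, a uniform rescale by the ratio of the known real dimension to the corresponding model dimension. A minimal sketch under that proportional assumption:

```python
def scale_model(contour_points, known_real_size, known_model_size):
    """Deduce full-model dimensions from one calibrated measurement (e.g. a
    person's height), scaling every model coordinate by the same ratio --
    the proportional inference the paragraph describes."""
    ratio = known_real_size / known_model_size
    return [(x * ratio, y * ratio, z * ratio) for x, y, z in contour_points]

# Model is 1.75 units tall; the person is 175 cm: every coordinate x100.
pts = scale_model([(0.1, 0.2, 1.75)], known_real_size=175.0, known_model_size=1.75)
```

Deducing arm length or chest/waist/hip measurements would read those dimensions off the rescaled model rather than scaling them separately.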
S34, if the standing target object is shot with a camera terminal in a moving-shot manner, recording for each frame the data of the camera terminal's acceleration sensor, inertial sensor, and magnetic sensor, performing angle analysis of the target object according to these data so as to obtain 2D contours of the target object at different angles, and then synthesizing the 3D model.
S4, refining and correcting the details of the 3D model.
First, 3D model correction for flexible objects: if the target object is flexible rather than rigid, such as a human body, the target person is asked to shoot 360-degree images/videos in different postures — for example standing with arms extended flat, standing with arms hanging naturally, or squatting naturally — and the corresponding postures are modeled separately, so that richer "joint" details of the object model are obtained.
Since the human body is a very special "object", it is inappropriate to build its 3D model only by scanning from the outside, because different skeletal and joint morphologies greatly affect the external deformation of the body in motion. Internal inference is performed according to the bending characteristics of the body shape, so as to determine the skeleton data that influence the 3D model and thus enrich and improve the human 3D model.
The relevant parameters of joints and bones can be initially acquired from actions such as standing upright, squatting, and raising the arms. The present invention uses a median computation method to determine the direction of the bones and the joint positions. This information will be used for the changes produced when the target object's joints move, in order to consider the fit of outer coverings (such as clothing).
As shown in Fig. 1, for a body part that can bend, we measure respectively: the length L of the part in the straight state, the length L0 of the first arm, the length L1 of the second arm, the radius R0 of the first joint, the radius R1 of the second joint, the arc length L2 where the first joint meets the first and second arms, and the arc length L3 where the second joint meets the second arm; one end of the first arm is connected to one end of the second arm through the first joint, and the other end of the second arm is connected to the second joint. For a leg, in the upright and bent cases, L is the length of the leg in the straight state, L0 the length of the thigh, L1 the length of the shank, R0 the radius of the knee, R1 the radius of the ankle, L2 the arc length where the knee meets the thigh and shank, and L3 the arc length where the ankle meets the shank. The articulation centers are obtained by computing the center positions of R0 and R1, and the bone length Lb is calculated:
Meanwhile, from the center positions of R0 and R1 and the bone length Lb, the relative positions of the bones and joints can be depicted in the 3D model. On this basis we obtain the relative position information of the internal bones, so that the required design margins and design details can easily be calculated during local analysis.
By the same principle, the data of joints such as the arm, elbow, and neck can be determined.
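The joint and bone bookkeeping above can be sketched as follows. The patent's exact Lb formula appears only as an omitted figure, so taking the bone length as the straight-line distance between the fitted joint-circle centers is an assumption; the median estimate matches the text's "median computation method".

```python
import math

def median_joint(samples):
    """Median-based estimate of a joint position from repeated measurements
    across postures, per coordinate."""
    xs = sorted(p[0] for p in samples)
    ys = sorted(p[1] for p in samples)
    mid = len(samples) // 2
    return (xs[mid], ys[mid])

def joint_centers_and_bone(c0, c1):
    """Given the fitted center points of the two joint circles (radii R0 and
    R1 in the text), return the articulation centers and an assumed bone
    length L_b: the distance between them."""
    lb = math.hypot(c1[0] - c0[0], c1[1] - c0[1])
    return c0, c1, lb

knee = median_joint([(10.0, 5.0), (10.2, 5.1), (9.9, 4.9)])
_, _, bone_len = joint_centers_and_bone(knee, (10.0, 45.0))
```

With the centers and Lb in hand, the relative positions of bones and joints can be placed in the 3D model as the paragraph describes.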
Second, correction of the spherical distortion of the camera terminal: because different camera terminals — for example, different mobile phone brands — exhibit different degrees of spherical distortion in different image regions when shooting, a spherical distortion database based on phone brand and software version is built from the empirical distortion values of different phone brands, and the 3D model is further corrected after shooting and recognition to achieve the most accurate recognition result.
Specifically, a reference standard object is first shot with a camera terminal, and the acquired image data at all angles are compared with the reference standard data to obtain the characteristics and scaling ratios of the spherical distortion of that camera terminal. The various camera terminals that may capture video of the target object are accurately measured, and a correction model database is built from the obtained spherical-distortion characteristics and scaling ratios of each camera terminal. After a user shoots with one of the known camera terminals, and before the 3D model is generated, the video image is first corrected with the distortion model looked up in the correction model database, and model recognition is performed after the image is corrected.
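The calibration-database lookup can be sketched as a table of per-region scale ratios measured for each camera model. The region grid and the dictionary shape are illustrative assumptions; the patent only specifies that measured ratios from the reference-object calibration are applied before recognition.

```python
def correct_distortion(points, region_scale, grid=100.0):
    """Apply per-region correction ratios from a (hypothetical) calibration
    database built by photographing a reference object: each image region
    gets the scale factor measured for this camera model."""
    out = []
    for x, y in points:
        key = (int(x // grid), int(y // grid))   # coarse image region
        s = region_scale.get(key, 1.0)           # 1.0 = no measured distortion
        out.append((x * s, y * s))
    return out

# Suppose calibration found this phone's corner region images 5% too large.
db = {(1, 1): 1 / 1.05}
fixed = correct_distortion([(105.0, 105.0), (10.0, 10.0)], db)
```

Points in uncalibrated regions pass through unchanged, so a sparse database degrades gracefully rather than distorting the whole frame.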
Third, direct correction of local dimensions of the 3D model: the original model can be given small local corrections according to the user's preferences, for example adjusting certain local dimensions. In particular, for a human model, the dimensions of specific positions can be adjusted, or the user can make manual corrections according to actual measurements.
Embodiment two
The image processing modeling method provided in this embodiment differs from that of embodiment one only in that step S3 in this embodiment consists of the following sub-steps:
S31, choosing a fixed reference object beside the target object and then selecting marker points on the reference object to generate a reference vector. The reference object can be one deliberately placed beside the target object, such as a ruler or similar item. A marker point can be an inflection point on the outer contour of the object, and at least two marker points are used.
S32, when the target object rotates, labeling the angle of the current frame through the angular relationship between the marker-point vector on the target object and the reference vector, generating one frame of 2D contour data carrying angle information; after all 360 degrees of contour data have been analyzed, the 3D model of the target object is synthesized.
A reference object makes the reconstruction of 3D coordinates more convenient and accurate. If the specific size of the reference object is given, the target object can also be dimensioned according to that size, yielding a 3D model closer to the real object.
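The angle labeling of embodiment two — marker-point vector versus fixed reference vector — can be sketched as below, assuming both vectors are measured in the same image plane:

```python
import math

def label_frame_angle(marker_vec, reference_vec):
    """Label a frame with the target's rotation: the angle between the
    marker-point vector on the object and the fixed reference-object
    vector (e.g. along a ruler). Returns degrees in [0, 360)."""
    am = math.atan2(marker_vec[1], marker_vec[0])
    ar = math.atan2(reference_vec[1], reference_vec[0])
    return math.degrees((am - ar) % (2 * math.pi))

# Ruler placed along the x-axis; marker vector pointing up -> 90-degree label.
angle = label_frame_angle((0.0, 1.0), (1.0, 0.0))
```

Each frame's 2D contour then carries this angle label, and once labels cover the full 360 degrees the contours can be assembled into the 3D model.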
Those of ordinary skill in the art will understand that the embodiments described herein are intended to help the reader understand the principle of the invention, and it should be understood that the protection scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can, according to the technical teachings disclosed by the present invention, make various specific variations and combinations that do not depart from the essence of the invention; these variations and combinations remain within the protection scope of the invention.

Claims (10)

1. An image processing modeling method, characterized by comprising the following steps:
S1, acquiring video of a target object;
S2, performing edge analysis on each frame of the video, identifying the edge contour of the target object, and marking the shooting angle of each frame, thereby forming contour information of the target object at different angles;
S3, performing simulated-rotation modeling in a virtual 3D space on the contour information of the different angles generated in step S2 to form a 3D model.
2. The image processing modeling method according to claim 1, characterized in that step S2 includes the following sub-steps:
S21, performing brightness recognition on each frame and calculating the brightness mean and dispersion;
S22, performing edge sharpening and binarization on the image to obtain a binary grayscale map;
S23, correcting the binary grayscale map:
S231, performing continuity correction of the boundary using the information of the image itself;
S232, performing boundary continuity correction on the current frame using supplementary data from the preceding and following frames.
3. The image processing modeling method according to claim 2, characterized in that step S231 includes: performing nearby direction detection at a discontinuous singular point, selecting the singular point that best matches in distance and direction for connection, and marking it in the binary grayscale map:
For pixels P and P', the distance and direction are computed; likewise, a trace (Δ0…Δn) from point P toward each point (P0…Pn) in the connected direction is obtained, singular-point fitting is performed according to the directions of the Δ sequence, and the most suitable connection point is finally determined.
4. The image processing modeling method according to claim 3, characterized in that step S232 includes: comparing the corrected region marked in the current frame with the preceding and following frames, and, if those frames are continuous there, performing approximate matching according to their continuity.
5. The image processing modeling method according to claim 1, characterized in that step S3 includes the following sub-steps:
S31, selecting feature points with fixed relative positions on the target object as angle-rotation reference points;
S32, calculating the tilt angle, relative position, and relative angle of the target object from the changes in the relative positions of the chosen reference points, and judging the angle change of the current image boundary contour in 2D space;
S33, performing three-dimensional perspective reduction correction on the change sequence of the reference points in each frame to obtain the true rotation angle of the target object, using it as the 3D contour of the current frame boundary, marking the 3D positions of the boundary in the 2D image, and completing the 3D model of the target object;
S34, if the standing target object is shot with a camera terminal in a moving-shot manner, recording for each frame the data of the camera terminal's acceleration sensor, inertial sensor, and magnetic sensor, performing angle analysis of the target object according to these data so as to obtain 2D contours of the target object at different angles, and then synthesizing the 3D model.
6. The image processing modeling method according to claim 1, characterized in that step S3 includes the following sub-steps:
S31, choosing a fixed reference object beside the target object and then selecting feature points on the reference object to generate a reference vector;
S32, labeling the angle of the current frame through the angular relationship between the marker-point vector on the target object and the reference vector, generating one frame of 2D contour data carrying angle information; after all 360 degrees of contour data have been analyzed, synthesizing the 3D model of the target object.
7. The image processing modeling method according to claim 1, characterized in that after step S3 the method further includes:
S4, refining and correcting the details of the 3D model.
8. The image processing modeling method according to claim 7, characterized in that in step S4, when the target object is a human body, the direction of the bones and the joint positions are determined using a median computation method.
9. The image processing modeling method according to claim 7, characterized in that step S4 includes: first shooting a reference standard object with a camera terminal, then comparing the acquired image data at all angles with the reference standard data to obtain the characteristics and scaling ratios of the spherical distortion of the camera terminal; accurately measuring the various camera terminals that may capture video of the target object, and building a correction model database from the obtained spherical-distortion characteristics and scaling ratios of each camera terminal; after a user shoots with one of the known camera terminals, and before the 3D model is generated, first correcting the video image with the distortion model looked up in the correction model database, and performing model recognition after the image is corrected.
10. The image processing modeling method according to claim 7, characterized in that step S4 includes: directly correcting local dimensions of the 3D model.
CN201711350936.7A 2017-12-15 2017-12-15 Image processing modeling method Active CN108109197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711350936.7A CN108109197B (en) 2017-12-15 2017-12-15 Image processing modeling method


Publications (2)

Publication Number Publication Date
CN108109197A true CN108109197A (en) 2018-06-01
CN108109197B CN108109197B (en) 2021-03-02

Family

ID=62216262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711350936.7A Active CN108109197B (en) 2017-12-15 2017-12-15 Image processing modeling method

Country Status (1)

Country Link
CN (1) CN108109197B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020050988A1 (en) * 2000-03-28 2002-05-02 Michael Petrov System and method of three-dimensional image capture and modeling
US20030035061A1 (en) * 2001-08-13 2003-02-20 Olympus Optical Co., Ltd. Shape extraction system and 3-D (three dimension) information acquisition system using the same
CN101398886A (en) * 2008-03-17 2009-04-01 杭州大清智能技术开发有限公司 Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN102364524A (en) * 2011-10-26 2012-02-29 清华大学 Three-dimensional reconstruction method and device based on variable-illumination multi-visual-angle differential sampling
CN105893675A (en) * 2016-03-31 2016-08-24 东南大学 Open space periphery building form optimization control method based on sky visible range evaluation
CN107134008A (en) * 2017-05-10 2017-09-05 广东技术师范学院 A kind of method and system of the dynamic object identification based under three-dimensional reconstruction
CN107423729A (en) * 2017-09-20 2017-12-01 湖南师范大学 A kind of remote class brain three-dimensional gait identifying system and implementation method towards under complicated visual scene


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAO Xue: "Human Skeleton Extraction and 3D Reconstruction Based on Video Sequences", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113903052A (en) * 2021-09-08 2022-01-07 华南理工大学 Indoor human body collision alarm method and device based on image processing and mechanical analysis
CN113903052B (en) * 2021-09-08 2024-06-18 华南理工大学 Indoor human body collision alarm method and device based on image processing and mechanical analysis

Also Published As

Publication number Publication date
CN108109197B (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN111460873B (en) Image processing method and device, image equipment and storage medium
CN108629801B (en) Three-dimensional human body model posture and shape reconstruction method of video sequence
CN108053283B (en) Garment customization method based on 3D modeling
CN110599540B (en) Real-time three-dimensional human body shape and posture reconstruction method and device under multi-viewpoint camera
CN108305312B (en) Method and device for generating 3D virtual image
CN105354876B (en) A kind of real-time volume fitting method based on mobile terminal
Corazza et al. A markerless motion capture system to study musculoskeletal biomechanics: visual hull and simulated annealing approach
CN108475439B (en) Three-dimensional model generation system, three-dimensional model generation method, and recording medium
US7804998B2 (en) Markerless motion capture system
CN111932678B (en) Multi-view real-time human motion, gesture, expression and texture reconstruction system
CN104794722A (en) Dressed human body three-dimensional bare body model calculation method through single Kinect
US20170053422A1 (en) Mobile device human body scanning and 3d model creation and analysis
CN106952335B (en) Method and system for establishing human body model library
CN112330813A (en) Wearing three-dimensional human body model reconstruction method based on monocular depth camera
JP2022501732A (en) Image processing methods and devices, image devices and storage media
WO2020156627A1 (en) The virtual caliper: rapid creation of metrically accurate avatars from 3d measurements
WO2020147797A1 (en) Image processing method and apparatus, image device, and storage medium
CN107901424A (en) A kind of Image Acquisition modeling
CN108109197A (en) A kind of image processing modeling method
CN116966086A (en) Human back acupoints calibrating method and system based on real-time image optimization
CN208497700U (en) A kind of Image Acquisition modeling
US20220198696A1 (en) System for determining body measurement from images
Lin et al. Create a Virtual Mannequin Through the 2-D Image-based Anthropometric Measurement and Radius Distance Free Form Deformation
CN113902845A (en) Motion video generation method and device, electronic equipment and readable storage medium
Robertini et al. Capture of arm-muscle deformations using a depth-camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant