3D Scanning: how, what and why

Virtual versions of real-life objects are used in many fields: construction and engineering, architecture, medicine, cultural preservation, entertainment, retail, robotic quality assurance, various design processes, reverse engineering, security, forensics, and more.

Data Formats

  • point cloud
  • polygon, NURBS, CAD models
  • 2D slices (CT, MRI)
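The formats above differ mainly in what they store. A point cloud, for instance, can be as simple as a list of XYZ coordinates. The sketch below (plain Python, with invented sample points) writes one in the common ASCII PLY format:

```python
# Minimal sketch: writing a tiny point cloud as an ASCII PLY file.
# The three points are made up for illustration; real scans have millions.

def write_ply(path, points):
    """Write (x, y, z) tuples as an ASCII PLY point cloud."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]
write_ply("cloud.ply", points)
```

A file like this opens directly in most point-cloud viewers, which is why PLY is a common export option from scanning software.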

Active Sensing Technologies
(emit radiation; more expensive; compatible with more surface types)

Laser Scanning (see the Wikipedia article for further detail)

  • time-of-flight (large scale scanning, buildings)
  • triangulation (higher precision, shorter range)
  • conoscopic holography (scanning through narrow openings)
  • handheld (flexible angles)
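The two main laser approaches reduce to simple geometry: time-of-flight halves a measured round-trip time, while triangulation solves the triangle formed by the laser, the camera, and the projected dot. A rough sketch with illustrative numbers, not tied to any specific scanner:

```python
# Back-of-the-envelope distance math behind the two main laser approaches.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """Time-of-flight: light travels out and back, so halve the path."""
    return C * round_trip_seconds / 2

def triangulation_distance(baseline_m, laser_angle_rad, camera_angle_rad):
    """Laser dot seen from a camera offset by a known baseline:
    law of sines on the laser-camera-dot triangle gives the range."""
    third_angle = math.pi - laser_angle_rad - camera_angle_rad
    return baseline_m * math.sin(camera_angle_rad) / math.sin(third_angle)

# A 200 ns round trip corresponds to roughly 30 m -- building-scale range,
# which is why time-of-flight suits large structures.
print(round(tof_distance(200e-9), 2))
```

Triangulation precision comes from the baseline geometry rather than timing, which is why it wins at short range but cannot reach building scale.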

The OSU Sullivant Rotunda Project (static interior Lidar scanning)
Lidar scanning for ethnographic and historical preservation
Surface scanning for painting (collage, etc) technique exploration and preservation
Virtual clothes fitting
Preserving Historical Sites Around the World (human controlled scanner navigation)
Human Skin in Memex music video
Smithsonian X 3D Project
Mobile Laser Scanning System for the French Railway
Laser Scanning Confocal Microscopy

Sound Scanning

Multibeam Acoustic Sounding System
Fetal 3D ultrasound

Tomography and Magnetic Resonance

Research at Vision Lab
Computed Tomography

Structured Light (LED or infrared)

Kinect, Asus (infrared sensor), and others

Passive Sensing Technologies
(less costly; some surfaces are problematic, e.g. transparent or reflective ones)



Paul Debevec's work (related TED talk on the making of The Curious Case of Benjamin Button)

Static Setup (EA face scanning)

Aerial Photogrammetry with Drones

Stereoscopic Photogrammetry

Silhouette vision in fusion 3D scanning

Markerless Motion Capture

Laser-scanning-based approaches and other work at the Max Planck Institut für Informatik (Graphics, Vision & Video Group)

Face scanning in iPhone X

Mobile markerless mocap

Organic Motion's Reality Capture


Lebin Yiu et al., Tsinghua University


Current Tech Reviews



Photogrammetry process with Zephyr 3D

Photo-taking checklist:

1. Object fully in frame

2. Object fully in focus (check at all angles)

3. If the object's surface lacks high-contrast detail, consider putting pieces of bright spike tape on it (the texture can be edited later), or, if the object is small enough, set it on newspaper.

4. Avoid the blurriness that comes from handheld camera shake when shooting video.

5. Ensure lighting that is as even as possible.

6. Take ~100-150 photos, distributing the shooting angles evenly. Circle the object at least twice at different heights. Pay attention to the visibility of features on the underside of the object. Consider putting the object on a tripod or pedestal to get a better view.

Check out these tips on good photogrammetry photo taking.
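The angle-distribution advice in step 6 can be turned into a quick shooting plan. The counts below (~120 photos over two rings) are assumptions for illustration, not a fixed rule:

```python
# Rough planning sketch for step 6: spread the photo budget evenly over
# a few rings around the object at different heights.

def shooting_plan(total_photos=120, rings=2):
    """Return photos per ring and the angular step between shots."""
    per_ring = total_photos // rings
    step_deg = 360 / per_ring
    return per_ring, step_deg

per_ring, step = shooting_plan()
print(per_ring, step)  # 60 photos per circle, one every 6 degrees
```

A shot every few degrees gives neighboring photos the large overlap that feature matching needs.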


All ACCAD computers now have Zephyr Free, free software that can reconstruct an object from up to 50 photos. Two computers have Zephyr Lite licenses (up to 500 photos). One station has Zephyr Pro (unlimited photos, merging of multiple workspaces, and reconstruction from silhouettes). Look for red sticky notes to locate these computers.


Reconstructing in Zephyr (follow the main steps in Workspace menu)

1. Workspace/New Project. If Masquerade was used to create masks for the images, check Masks Images. Add photos. If adding videos, specify the frame rate (e.g., importing video at 5 fps means five images are imported for each second of footage).
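As a sanity check on the frame-rate setting, the number of imported images is just clip length times the import rate. The 30-second clip below is a hypothetical example:

```python
# What "importing video at 5 fps" means for the photo count.

def frames_imported(clip_seconds, import_fps):
    """Number of still images extracted from a clip at the given rate."""
    return int(clip_seconds * import_fps)

# A hypothetical 30 s walkaround at 5 fps lands in the
# recommended ~100-150 photo range.
print(frames_imported(30, 5))  # 150
```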

2. Follow the "Next" buttons. At Camera Orientation Presets, pick the right Category. Start with the Default preset. Run the reconstruction to get the Sparse Point Cloud. If the reconstruction was successful, all cameras will show a green YES. If not, consider restarting the process and picking higher-quality settings in Camera Orientation Presets.

3.* If the Sparse Point Cloud looks tilted, you can orient it using Tools/Workspace/Scale/Translate/Rotate Objects.

To speed up the process, shrink the bounding box: right-click the cloud and pick Bounding Box, then right-click and pick Scale. Move the walls of the bounding box inward from the outside. You can also use the selection tools (middle right in the top menu) to select and delete unnecessary sparse points.

4. Workspace/Dense Point Cloud Generation. Follow the steps in the wizard menu. The look of the Dense Point Cloud is the best predictor of reconstruction quality. * You can still use the selection tools to remove noise points. You can also use the color picker (RGB button) to select noise points by their color.
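The color-picker idea can be sketched as a simple RGB-distance test. The points, colors, and threshold below are invented for illustration, not Zephyr's actual internals:

```python
# Sketch of color-based noise selection: flag points whose color is
# close to a picked "noise" color, keep the rest.
import math

def near_color(rgb, picked, tol=30):
    """True if a point's color is within tol (Euclidean RGB distance)."""
    return math.dist(rgb, picked) <= tol

# Each entry is ((x, y, z), (r, g, b)) -- made-up sample data.
points = [((0.1, 0.2, 0.3), (200, 40, 35)),   # reddish tape -> noise
          ((0.5, 0.1, 0.2), (90, 90, 88))]    # gray object surface
picked_red = (210, 45, 40)
kept = [p for p, c in points if not near_color(c, picked_red)]
print(len(kept))  # 1 point survives the color-based cleanup
```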

5. Workspace/Mesh Extraction. Follow the wizard menu. Pick Default Sharp features (sharper detail but a higher chance of noise) or Default Smooth features (smoother surface).

6. Workspace/Textured Mesh Generation. Move to this step only once you are happy with the previous ones.

* Optional steps that can speed up the reconstruction process. Noise reduction on the Dense Point Cloud will also improve model quality.

** Navigate the Zephyr workspace with LMB = Tumble, MMB = Pan, Wheel = Zoom. Be careful to click outside the bounding box or rotation manipulator (when manipulating an object) in order to navigate.

*** Save after each step so that it's easy to go back. After experimenting with object reconstruction using the default options, try the Advanced settings (e.g., watertightness to reduce the number of holes in Mesh Extraction). Read the Help menu and the tutorials below.

Zephyr 3D online tutorials
