New Sony camera sensor could bring HDR video to cell phones

Sony Exmor RS sensor upgrades photo and video quality of smartphone cameras.

I am skeptical of the HDR movie recording claim as it applies to cell phones. HDR video recording likely produces very large video files, not exactly the best match for the limited memory on smartphones. However, the HDR capture might at least improve the dynamic range of the MPEG-4 recording. Today’s small sensors generally have terrible dynamic range.

3D images using normal and very wide interaxial lens spacing

When shooting 3D images, the distance between the left and right lenses matters, as it impacts both the useful depth of the image and how close your camera can be to primary objects without those objects appearing to “pop out” in front of the screen.

The following photos are 3D “red/cyan” stereoscopic anaglyph images, taken using dual Lumix GH-2 cameras and processed in Stereomaker. The original 5,000-pixel-wide images were uploaded to my video blog; click on an image to see the larger version. Use your red/cyan 3D glasses to view these images.
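If you are curious what the anaglyph step actually does, here is a minimal sketch of the basic red/cyan recipe in Python using Pillow. The file names are hypothetical, and a real tool like the one above also handles alignment and smarter color mixing:

```python
# Minimal red/cyan anaglyph from an aligned left/right pair.
# "left.jpg" and "right.jpg" are hypothetical file names standing in for
# the images from the two cameras.
from PIL import Image

left = Image.open("left.jpg").convert("RGB")
right = Image.open("right.jpg").convert("RGB")

# The red channel comes from the left eye; green and blue (cyan) come from
# the right eye. Red/cyan glasses then route each view to the correct eye.
r, _, _ = left.split()
_, g, b = right.split()
anaglyph = Image.merge("RGB", (r, g, b))
anaglyph.save("anaglyph.jpg")
```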

The first photo is of playground equipment using my standard 5 1/4″ interaxial spacing – that is the closest I can get the cameras together – with the lens set to 42mm (on a micro four thirds camera, that is equivalent to an 84mm lens on a full frame camera).

Notice that the line of trees well behind the playground equipment all sits “in the distance” on a single plane. Not much 3D going on back there. This is due to the narrow lens spacing.

For the following images, I used my “sliding rail” mount, a homemade mount that enables me to separate the lenses by more than 2 feet.

The effect of a wider lens separation is to create a sense of stereoscopic depth much deeper into the scene than is possible with a narrow lens separation. Think of it like this – suppose your two camera lenses (or your eyes) were on top of each other, in the exact same spot. You would not see any 3D effect as you’d have a 2D image. But move your lenses (or eyes) apart by a millimeter or two. Now you would begin to see some depth but only for objects very close to you. Distant objects would not be sufficiently different in the left and right views to give a 3D sense to them.

Now move the lenses (or your eyes) several inches or even a foot apart. The difference in the images, even at far distances, will now be noticeable.
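You can put rough numbers on this intuition with the usual small-angle approximation: the left/right image offset (the disparity) of a point is about the focal length times the baseline, divided by the subject distance. Here is a back-of-envelope Python sketch – the baselines match the rigs described in this post, but the output is illustrative, not measured:

```python
# Back-of-envelope disparity figures for the rigs described in this post.
# Small-angle approximation: disparity ≈ focal_length * baseline / distance,
# all in the same units.

def disparity_mm(focal_mm: float, baseline_mm: float, distance_mm: float) -> float:
    """Approximate left/right image offset of a point at the given distance."""
    return focal_mm * baseline_mm / distance_mm

FOCAL_MM = 42.0  # lens setting used for these shots

rigs = [("eye-like", 2.5), ("standard rig", 5.25), ("sliding rail", 24.0)]
for label, baseline_in in rigs:
    for feet in (50, 200, 1000):
        d = disparity_mm(FOCAL_MM, baseline_in * 25.4, feet * 304.8)
        print(f"{label:12s} {baseline_in:5.2f} in baseline, subject {feet:4d} ft: "
              f"{d:.3f} mm disparity on sensor")
```

Note how, at an eye-like baseline, a subject at 1,000 feet produces well under a hundredth of a millimeter of disparity – only a pixel or two on a small sensor – which is exactly why distant objects collapse onto one flat plane.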

A “normal” 3D camera might have lens spacing in the 1 1/2 to 2 1/2 inch range (similar to your eyes). Such a camera will produce useful depth out to perhaps 200 feet. Beyond that, objects tend to appear on a single distant plane.

By increasing the interaxial base, we extend the depth into the distance. In these photos, with a long stereo baseline between the lenses, depth is extended out to easily 800 to 1,000 feet.

Why don’t we just always use a wide lens separation, then? Because a wide separation causes problems with objects close to the camera. As a rough rule, the closest subject in your scene should be about 30 times the lens separation away. For a 2 foot separation, that means the nearest objects should be at least 60 feet away.

By comparison, my normal 5 1/4″ separation handles objects down to less than 14 feet from the cameras. Consequently, lens spacing matters! For my Canon HF M301 video camera setup, the lens spacing is 2 5/8″, which enables taking 3D images down to within about 7 feet of the subject.
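That 30x rule of thumb is easy to capture in a couple of lines of Python; this helper reproduces the figures above (the rig labels are mine):

```python
# The 30x rule of thumb: the nearest subject should be roughly 30 times the
# lens separation away. Reproduces the figures in the text:
# 2 5/8" -> about 6.6 ft, 5 1/4" -> about 13.1 ft, 24" -> 60 ft.

def min_subject_distance_ft(interaxial_in: float, factor: float = 30.0) -> float:
    """Nearest comfortable subject distance (feet) for a given lens separation."""
    return interaxial_in * factor / 12.0

for label, inches in [("Canon HF M301 rig", 2.625),
                      ("standard GH-2 rig", 5.25),
                      ("sliding rail", 24.0)]:
    print(f"{label}: {inches} in -> nearest subject ~{min_subject_distance_ft(inches):.1f} ft")
```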

In these images (some shot at 42mm on micro four thirds) we still see depth at what is probably 800 to 1,000 feet. But note that I carefully avoided having anything, like a tree, appear in the foreground – with this lens spacing, it would have hurt your eyes!

Demo of in-camera 3D Image Stabilization

A quick test of in-camera image stabilization on a dual-camera 3D rig.

When using two cameras, we normally turn video image stabilization off to avoid having the individual cameras do their “own thing” with stabilization. However, I shot a test clip using “dynamic” image stabilization on both of my Canon Vixia HF M301 video cameras and the results were quite good.

Just don’t make any rapid or jerky camera movements or things really will go haywire. But for basic handheld shots like this walking shot, it works okay!

http://youtu.be/bj4ssfFx6pI

I just set up a new YouTube channel at http://youtube.com/3dStreams and am thinking about putting most of my 3D stuff there.

3D Video image stabilization

When shooting 3D video using two cameras, we turn off all in-camera stabilization features. This makes handheld video look shaky, unfortunately, as all the little hand movements remain in the original clips. Consequently, we have to shoot most 3D on a tripod.

I just did a test, though, using my Canon Vixia HF M301 video cameras (I use two of them to shoot stereoscopic 3D). The HF M301 has three stabilization options: off, standard, and dynamic. For my tests, I shot one scene using standard stabilization and another using dynamic. Then I paired the left and right tracks in Magix Movie Edit, output a WMV 3D anaglyph file, and watched the test video.

For general handheld shots without rapid camera movements, the in-camera stabilization tracked well between both cameras. However, rapid camera movements caused different stabilization effects in each camera, and the 3D goes bonkers as one image shifts left while the other shifts right!

Bottom line: for simple handheld shots without a lot of movement, the in-camera stabilization works okay and eliminates the handheld jerkiness.

Ideally, we could stabilize the 3D images after editing, but there is no easy way to do that.

Most video editing software today includes image stabilization. The software analyzes the video for jerky movements and corrects them by shifting each frame to counter the motion. Since this shifting leaves black bars at the top, bottom, or sides of the frame, the video is also enlarged slightly to crop the bars out.
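As a rough illustration of that shift-and-zoom idea, here is a stripped-down, single-step sketch using OpenCV. This is a toy built on my own assumptions – not how any particular editor (or Canon’s in-camera “dynamic” mode) actually works – and real stabilizers smooth the estimated motion over many frames rather than cancelling it outright:

```python
# Single-step shift-and-zoom stabilization sketch. Inputs are grayscale
# frames (for tracking) plus the current color frame to correct.
import cv2
import numpy as np

def stabilize_frame(prev_gray, curr_gray, curr_frame, zoom=1.05):
    # Track corner features from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=20)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    ok = status.ravel() == 1
    # The median feature motion approximates this frame's camera shake.
    dx, dy = np.median(pts_curr[ok] - pts_prev[ok], axis=0).ravel()

    h, w = curr_frame.shape[:2]
    # Shift to cancel the shake, then enlarge about the center to hide the
    # black bars the shift would otherwise leave at the edges.
    shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
    shifted = cv2.warpAffine(curr_frame, shift, (w, h))
    enlarge = cv2.getRotationMatrix2D((w / 2, h / 2), 0, zoom)
    return cv2.warpAffine(shifted, enlarge, (w, h))
```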

When it comes to 3D video, there does not seem to be a great solution. Sony Movie Edit Platinum 11 disables the stabilization feature on paired 3D clips. Magix Movie Edit Pro MX Plus can stabilize individual clips before pairing – but there is no good way to match the stabilization between the left and right tracks. You can stabilize one clip, copy the effects track, and paste it onto the second track, but this has not produced the expected result of matched stabilization.

I suppose we could create our 3D track, output it to a video file, import that file as a single clip – which may as well be 2D as far as the editor is concerned – and then apply stabilization. I am not sure I want to go through two more transcodes, though!
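Another thought, for anyone willing to script it: the left/right mismatch could in principle be avoided by measuring the shake on one eye only and applying the identical correction to both. This is speculation on my part, not a feature of either editor – a per-frame sketch along the lines of the OpenCV snippet above:

```python
# Measure the shake on the LEFT eye only, then warp BOTH eyes with the
# identical shift-and-zoom. Because the two streams always get the same
# correction, they cannot drift apart the way two independently
# stabilized tracks can.
import cv2
import numpy as np

def stabilize_stereo_frame(prev_left_gray, curr_left_gray,
                           left_frame, right_frame, zoom=1.05):
    pts = cv2.goodFeaturesToTrack(prev_left_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=20)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_left_gray, curr_left_gray,
                                              pts, None)
    ok = status.ravel() == 1
    dx, dy = np.median(nxt[ok] - pts[ok], axis=0).ravel()

    h, w = left_frame.shape[:2]
    shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
    enlarge = cv2.getRotationMatrix2D((w / 2, h / 2), 0, zoom)
    # Apply the one transform to both eyes to keep the stereo pair matched.
    return [cv2.warpAffine(cv2.warpAffine(f, shift, (w, h)), enlarge, (w, h))
            for f in (left_frame, right_frame)]
```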
