3D-Con Announced in Costa Mesa 25-30 July

Wish I could go:

The 38th National Stereoscopic Association Convention “3D-CON” is planned for July 25-30, 2012 in Costa Mesa, California, USA. Come immerse yourself in some spectacular 3D stereo over six action-packed days! This is the place to find cutting-edge stereo theatre, informative workshops, a stereoscopic art exhibition, image competitions, room hopping, a 3D auction, a large trade fair and a technical exhibit of new equipment and displays.

via 3D-Con Announced in Costa Mesa 25-30 July.

For 3D, spacing between the lenses really does matter

I have been shooting 3D mostly using two GH-2 DSLR-type cameras. Due to the width of these cameras, my interaxial lens spacing (the distance between the centers of the lenses) is almost six inches.

This spacing turns out to be much too wide (which I expected) for most subjects and is definitely too wide for anything closer than 15-20 feet from the camera. The wide spacing separates the left- and right-eye images by too much. While they can be brought closer together in the editor, this tends to cause problems with more distant objects in the scene.
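The 15-20 foot figure lines up with a common stereographer's rule of thumb, the "1/30 rule": keep the nearest subject at least about 30 times the interaxial spacing away. A quick sketch of the arithmetic (the rule and the helper function are my illustration, not something measured here):

```python
# "1/30 rule" of stereo shooting: the nearest subject should be
# roughly 30x the interaxial (lens center-to-center) spacing away.
# Hypothetical helper for illustration.

def min_subject_distance_ft(interaxial_in, ratio=30):
    """Minimum comfortable subject distance in feet for a given
    lens spacing in inches, per the 1/30 rule of thumb."""
    return interaxial_in * ratio / 12.0  # convert inches to feet

print(min_subject_distance_ft(6))  # 15.0 ft for a ~6 inch two-GH-2 rig
print(min_subject_distance_ft(3))  # 7.5 ft for a ~3 inch Playsport pair
```

The 6-inch result matches the 15-foot minimum observed above, and the 3-inch result is consistent with the Playsport pair handling a subject around ten feet away.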

Last weekend, I went to a park and, using a set of trees spaced in front of me, shot the same scenes two ways: with two Kodak Playsport Zx3 cameras at about 3-inch spacing, and with the GH-2s at 6-inch spacing. Without question, the 3-inch spacing looked better than the 6-inch spacing. When the distance to the first tree was reduced to about ten feet, the Playsport 3D combo did a nice job, but with the GH-2s I had to severely reposition the tree in depth to avoid hurting my eyes, and that really messed up the more distant trees. This little test confirmed the importance of working with closer lens spacing.

That said, the GH-2s, of course, have great image quality, low-light capability, and better dynamic range and compression. They remain a great tool for shooting scenes with longer depth, such as my Civil War battle re-enactment video, and they did a great job when I was testing at about 30 feet from the first tree.

But for close-in subjects – and generally that means most indoor shooting where space is limited – close lens spacing is essential.

I am starting to test using two Canon HF M301 video cameras that I picked up for $200. So far, the results look to be extremely good. With the M301, I get a high 24 Mbps AVCHD encoded video data rate for good image quality, plus using my custom 3D mount, the lens spacing is just 2 3/4 inches. And if I really wanted to, I could reduce that to 2 5/8″ for sure.

The M301s are quite a deal, although their lenses, at the widest, are about the same as a 40mm lens on a full-frame camera. In other words, very close to a “normal” lens, and 3D likes wide angle. I tested my 43mm filter-ring Canon wide angle adapter using a 43 to 37mm adapter, and this lens produced very good quality wide HD on the M301. But alas, 3D requires two lenses, and I only have one wide angle adapter and do not plan to buy another any time soon (unless I get lucky and find one for sale really cheap!)

My goal is to find good 3D video solutions that do not cost a fortune. That rules out using beam splitter rigs for sure! Plus, I usually shoot while “portable”, meaning I am on my feet a lot, and sometimes even running with my cameras, so small size and light weight are important. I typically shoot using a monopod since it's lighter than a tripod, very fast to set up, and I often use it to lift my cameras high overhead for an aerial view. Anyway, I will continue to work on finding simple, affordable solutions and will post what I learn on this blog.

Magix Movie Edit 3D video encoding tip

I am using Magix Movie Edit Pro MX Plus for 3D video editing. When I am ready to encode my videos, I have been exporting them as MPEG4 video files. However, when I output red/cyan anaglyph for my own viewing (I do not have a 3D monitor), I have been disappointed in the compression quality.

This became readily apparent today on some test shots that included grass, bushes, tree leaves and other high detail objects that compressed very poorly.

A better solution is to output using Windows Media export (File | Export movie | Windows Media export). On the Advanced (Video) settings tab, make sure you have Windows Media Video 9 and Variable bit rate-quality selected, and set the Bit-rate-quality setting to a high value (I’m using 90) for the video options.

This is producing a much cleaner video image with fewer compression artifacts. The regular MPEG4 video encoder seems to work well on normal 2D video but really chokes on anaglyph, producing video with a great deal of compression artifacts.

I also found that exporting using the Quicktime option and the default Sorenson 3 codec worked well too, better than MPEG4 on the anaglyph format.

The 3D rig I used for a Civil War Battle Re-enactment 3D video

I used two Lumix GH-2 DSLRs to shoot 3D video. In this configuration, the lens centers are spaced about 6 inches apart which means that primary subjects need to be at least 15 feet and preferably 20 feet away from the camera to avoid 3D eye strain (convergence going bonkers).

The cameras are screwed into an aluminum rail using 1/4-20 knobs picked up at a local hardware store. Audio is recorded using two shotgun mics (bought off eBay, and yes, one has a homemade wind muff) feeding into a factory-refurbished Beachtek audio mixer, which connects to one of the cameras. The shotgun mics make a HUGE difference in audio quality. Unfortunately, I had left my XLR mic cables in my car for the first battle, so I recorded the good audio only on the second battle.

The six inch lens spacing is okay for outdoor landscapes and events where the subject is typically 20+ feet away. But experimenting with the Kodak Playsport cameras that I picked up used, I find the 3″ lens spacing produces a much more pleasing 3D effect. Probably not a big surprise, after all, what’s the spacing between your eyes? Probably not six inches!

But the GH-2 does have some nice features – like being able to align the zooms on the two cameras pretty easily (and well enough). Plus, for video, the 14-42mm stock lens provides multiple focal lengths: not only the 14-42mm range, but also the “Extended Telephoto” mode that crops the 1080 frame out of the full image sensor, roughly doubling the effective focal length at 1080p.

Time permitting, I hope to produce a tutorial on shooting 3D with ordinary consumer cameras. From what I have seen, consumer-oriented 3D “all in one” cameras do not deliver the video quality that interests me. While it's a little more work, two consumer cameras can deliver surprisingly good 3D results, clearly better than the “all in one” approach. Plus, I can use external mics, which most low-end cameras do not support.

In theory, we are supposed to time-synch the two cameras for precise frame alignment. But for most activities, and for viewing on a computer monitor or even an HDTV, frame-level synchronization while editing seems plenty adequate. I am not shooting for the local movie theater or IMAX screen!

How do I synch the two cameras? I just snap my fingers to put a pulse on the audio tracks (or in the case of the Civil War battle, musket fire works quite nicely too). Then in the 3D editing software, I use the audio track pulses to align the video segments.
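For anyone who wants to automate that alignment step, the same finger-snap pulse can be found programmatically by cross-correlating the two audio tracks. A toy sketch with a hypothetical `best_offset` helper on made-up sample values (real footage would mean decoding the cameras' WAV audio first):

```python
# Find the offset between two cameras' audio tracks by brute-force
# cross-correlation of a sharp pulse (a finger snap or musket shot).
# Toy example; not part of any particular editor's workflow.

def best_offset(a, b, max_shift):
    """Samples by which track b lags track a (positive means the
    same event occurs later in b)."""
    def corr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_shift, max_shift + 1), key=corr)

cam_a = [0, 0, 0, 9, 0, 0, 0, 0]  # snap lands at sample 3
cam_b = [0, 0, 0, 0, 0, 9, 0, 0]  # same snap at sample 5
print(best_offset(cam_a, cam_b, 4))  # 2: slide track b two samples earlier
```

In practice you would run this over a short window around the snap rather than whole tracks, since the brute-force search is quadratic in the window length.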

“ePhotozine Panasonic 3D lens review. Is the 3D hype over?”

43 Rumors | Blog | ePhotozine Panasonic 3D lens review. Is the 3D hype over?.

I just bought one of these 3D lenses for $50 and so far, I have not quite figured out what it might be good for. I thought that with the close lens spacing, and a simple trick to use it with video (put tape over the electrical contacts), it would be useful for close-in 3D shots due to the narrow interaxial spacing.

However, since the lens works by creating two side-by-side images within the 1080p frame (a common 3D standard), the video images cannot be correctly stretched in my editor to produce the right aspect ratio.

Normally, we take two full-size 1920×1080 images, combine them into a single 3D representation, and (for Youtube) output in the side-by-side format. In the side-by-side format, the 3D is represented with a squished left image on the left and a squished right image on the right. These are then stretched on playback to present a full-size (albeit lower resolution) 1080p 3D image.

The problem is that with the Lumix 3D lens, each eye starts as an image of about 960×1080, and those are already the correct proportions. My editor can handle proper side-by-side, but it has no idea what to do with non-squished side-by-side images. For now, it stretches the 960×1080 back to 1920×1080, and yes, this makes everyone look really fat!
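The distortion can be checked with simple arithmetic: standard side-by-side squeezes a 1920-wide eye into 960 columns, so the player's 2x horizontal stretch restores it, while the Lumix lens's eyes are natively about 960 wide, so the same stretch doubles their width. A sketch (the helper function is hypothetical, for illustration only):

```python
# How much wider than correct each eye appears after the player
# stretches one stored half of a side-by-side frame to full width.

def horizontal_distortion(native_eye_w, displayed_eye_w):
    """Width scale relative to correct proportions (1.0 = undistorted)."""
    return displayed_eye_w / native_eye_w

# Standard SBS: each eye is really 1920 wide, stored squeezed at 960,
# then stretched back to 1920 on playback -> undistorted.
print(horizontal_distortion(1920, 1920))  # 1.0

# Lumix 3D lens: each eye is natively ~960 wide; stretching it to 1920
# doubles its width ("everyone looks really fat").
print(horizontal_distortion(960, 1920))  # 2.0
```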

For now, I have not found a solution for putting this lens to work on close-in subjects for 3D video. But I will keep trying!