The interocular distance must be limited; the two points of view from a single lens cannot differ that much, right?
And can the interocular distance be changed?
A Million Dreams
Well I was kept awake for a few nights trying to figure out a way to do that. It seems they are smarter than me.
I do not know how many problems this system will show, but such a concept COULD prove
a commercial success, if it works FAIRLY well.
The same producers that used to shoot features on PD 150
+30 6944 725315
Argyris said: "It seems they are smarter than me."
Sony R&D?...do ya think?
Chief Technology Officer
Band Pro Film & Digital
Hi to All,
I've been following this list for more than a year, but this is my first post, so let me briefly introduce myself: I am a VFX cinematographer and stereographer based in the South of France. For further information please see IMDb or LinkedIn.
Following the laws of optics, and according to the sketch on the SONY site, the max. interocular distance may be the diameter of the entrance pupil minus the capture device width, right? Which would mean that only extremely small (and "out-of-center") parts of this single lens would be used to make the images.
Optically possible... like building a single lens with an entrance pupil wider than d=89mm (65mm interocular + 24mm film back width)
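To put rough numbers on this geometric limit, here is a minimal sketch of the poster's own pupil-minus-sensor-width relation (this is the thread's simple model, not Sony's actual design):

```python
def min_pupil_diameter(interocular_mm, sensor_width_mm):
    # Entrance-pupil diameter needed so that two views, each taken
    # through a sensor-width-sized patch at opposite edges of the pupil,
    # can sit a full interocular distance apart.
    return interocular_mm + sensor_width_mm

def max_interocular(pupil_diameter_mm, sensor_width_mm):
    # The same relation solved the other way round.
    return pupil_diameter_mm - sensor_width_mm

print(min_pupil_diameter(65, 24))  # 89 (mm), matching the figure above
print(max_interocular(89, 24))     # 65 (mm)
```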
>>Argyris said: "It seems they are smarter than me." Sony R&D?...do ya think?
As long as the marketing department is not aware of the project...
Now, seriously, this confirms a rumour that has been circulating since NAB at least. At that time, the fairy tale was about a dual sensor and a single lens.
The race is now open for the first picture of the dual lens, single sensor, from RED. (no rumour here, or I'm starting it, but it seems so obvious...)
Is there anyone here who understands optics and trigonometry and could provide us with interocular numbers? And someone aware of the current state of the art in optoelectronics, who could give an estimated price range and time window?
Does it look like 10M$ in 10 years, or 10K$ at the next CES?
Two Eyes, One Brain, No Kidding
Bernard Mendiburu (Prof. lists)wrote:
>> The race is now open for the first picture of the dual lens, single sensor, from RED.(no rumour >>here, or I'm starting it, but it seems so obvious...)
I’ve seen a few other 'working' prototypes for RED already; I’m sure you
have too, around LA. When they’ll be commercially available, who knows.
I still don’t get the Sony camera, except for table-top work.
Argyris said : "It seems they are smarter than me."
>Sony R&D?...do ya think?
Smarter or not, they certainly have a somewhat bigger budget.
>>Following the laws of optics and according to the sketch on the SONY-site, the max. interocular >>distance may be the diameter of the entrance pupil minus capture device width, right?
I suspect that something like this is going on. My bet is that there's a vibrating element in the optical path that somehow shifts the position of the entrance pupil left-right 10 or 20 times per second, with each sensor being exposed and read-out alternately. The 240fps frame rate suggests that within each 1/48 second (assuming a "180 degree" shutter value) several discrete exposures are integrated by each sensor into what is effectively a single 1/48 second exposure. Any multi-exposure ghosting of fast-moving image elements could probably be processed out pretty easily.
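The frame-rate arithmetic behind that guess can be sketched as follows (the 24 fps delivery rate and the "180 degree" shutter are assumptions carried over from the post, not published specs):

```python
sensor_rate_fps = 240     # the quoted sensor scan rate
delivery_fps = 24         # assumed delivery frame rate
shutter_fraction = 0.5    # "180 degree" shutter, as assumed above

# Length of one effective exposure window per delivered frame.
exposure_window_s = shutter_fraction / delivery_fps   # 1/48 s

# Discrete sensor read-outs falling inside that window, which could be
# integrated into what is effectively a single exposure.
sub_exposures = round(sensor_rate_fps * exposure_window_s)

print(sub_exposures)  # 5
```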
Not being a stereographer I may be way off base here, but I wonder whether the constrained interocular distance might suggest a conservative approach that's better suited to mainstream drama than to trendy, in-your-face, whizbang flix.
Marin County, CA
I actually thought about using a single lens once... but wasn't sure if it was possible. I had found a large single-chip camera that could output two different 1920x1080 framed areas of the large chip at the same time (it could actually do up to four areas). The separation on the chip would be enough to be worth testing for 3D capture, but the question is: by putting a large enough lens on the front (say, a medium-format lens), is it possible to use a single lens at all? Any optical engineers out there?
>> I actually thought about using a single lens once...but wasn't sure if it was possible.
I have seen two projects with tests done on a rig of this type of configuration. It is a trade-off: one camera/one sensor, BUT less latitude with the lenses. Depends on the project (but doesn't everything?).
The benefit: perfectly synced, colour-accurate images in full 1920x1080, and at high speed to boot.
YES IT IS POSSIBLE!
We developed the system to take the output (that had both eyes combined) and split the images out in real-time to see real-time 3D.
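As an illustration only: splitting a combined-eyes output into two views is simple if the eyes arrive packed side by side in one wide frame. The actual packing used by that system isn't stated, so this layout is an assumption.

```python
def split_side_by_side(frame):
    # frame: list of pixel rows, with left-eye pixels followed by
    # right-eye pixels in each row. Returns the two views separately.
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

# Toy 2x8 frame: zeros on the left half, ones on the right half.
frame = [[0, 0, 0, 0, 1, 1, 1, 1] for _ in range(2)]
left, right = split_side_by_side(frame)
print(left[0], right[0])  # [0, 0, 0, 0] [1, 1, 1, 1]
```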
disclosure: We have a demo office at StereoScope and work with them and others to develop tools used in 3D workflows.
About a year ago I had a talk with an experienced Still photographer.
I asked him:
"By shifting the film plane of a view camera, can you change perspective?"
"No (and I believe he meant practically no), to do that you need to shift the lens."
That was the time I stopped wondering how a single lens system could be accomplished.
Dan, to shift the entrance pupil you need to shift the front of the lens, and I do not believe this is Sony's approach. According to the drawings, I think they are using standard off-the-shelf 2/3'' lenses. That must be their philosophy.
Of course, they could be recording on smaller sensors.
I AM SPECULATING!!!!!!
+30 6944 725315
>>My bet is that there's a vibrating element in the optical path that somehow shifts the position of the >>entrance pupil left-right
What would be really funny is if Sony's stereoscopic R&D department intentionally released the wrong picture/diagram with this article just to see what the different 3D lists around the world would come up with. This thread gains a strong edge of comedy with that notion.
BTW, in the 2D "depth enhanced" footage of the video demo, check out the rolling effect of the lens flare.
Daniel Drasin wrote:
>>Not being a stereographer I may be way off base here, but I wonder whether the constrained >>interocular distance might suggest a conservative approach that's better suited to mainstream >>drama than to trendy, in-your-face, whizbang flix.
....or consumer video.
Or it might suggest an entirely consumer approach to 3d. After all, pros have no problem with complicated 3d rigs.
It seems that this type of imaging system - though limited - could be easily scaled and implemented into consumer cameras which have just enough 3d to make your home movies seem 3d - but without all of those pesky features and flexibility that professional rigs have.
Most amateurs have a hard time setting proper exposure - let alone determining the correct interocular distance in a zoom.
So if it does 3d reasonably well in 80% of situations, the viewing audience will feel that they got their money's worth out of the camera.
The conservative 3d would alleviate headaches while watching your offspring in 3d.
>>What would be really funny is if Sony's stereoscopic R&D department intentionally released the >>wrong picture/diagram with this article just to see what the different 3D lists around the world >>would come up with.
That's certainly possible, since the diagram seems to make no sense as-is. That's why I speculated that it would require at least one additional element to somehow shift the entrance pupil - assuming
that's what they're doing.
Come to think of it, maybe the camera itself is a hoax. Or maybe you need two of them!
Marin County, CA
>>Would somebody know if the following patent (US7019780) is the one used by Sony's new >>camera?:
Whether or not Sony is using this patent, if it's a single lens 3D camera, then it MUST be achieving the 3D separation by using pencils of light from the left side and the right side of the lens to produce two separate images. I can't imagine any other way (though maybe I'm not thinking outside the square enough, so to speak).
This patent is mainly about the shuttering device used to do this - but it still relies on the two eyes being, in effect, the two sides of the one lens.
The patent uses, as an example, "a 12-power lens for 2/3rd inch securing a parallax (or IOD) of 10-15mm".
This is going to be very subtle, flat 3D. OK for the amateur market perhaps, but not really accurate.
To get 65mm IOD, I reckon you would need a lens with a front entry pupil of at least 85mm. Even then, the majority of light forming each image would come from parts of the lens closer than 65mm.
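Running the same pupil-minus-sensor-width relation from earlier in the thread over these figures gives numbers in the same ballpark. Note the ~9.6 mm sensor width for a 16:9 2/3'' chip is my assumption; the patent only quotes the 10-15mm IOD.

```python
def required_pupil_mm(interocular_mm, sensor_width_mm):
    # Simple model from this thread: the entrance pupil must span the
    # interocular distance plus the width of the capture area.
    return interocular_mm + sensor_width_mm

print(required_pupil_mm(15, 9.6))  # ~24.6 mm for the patent's 15mm IOD
print(required_pupil_mm(65, 24))   # 89 mm, in line with the "at least 85mm" above
```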
film ~ technology ~ strategy
Dominic Case wrote:
>>Whether or not Sony is using this patent, if it's a single lens 3D camera ...
My knowledge of optical design is limited but:
Out of curiosity: if the separation is being accomplished near the nodal point of the lens, wouldn't they be able to optically offset the divergent beam paths by a minute distance to enhance the 3d separation without having to increase the actual interocular distance?
It would take a far smaller shift to achieve this internally than at the front of the lens.
In other words, it might not be the correct way to do it, but would effectively shifting the images further apart increase the "3d effect" without increasing the actual interocular distance? I know it would change the 'screen depth' and forget about the longer focal lengths....but for the prosumer?
Any of these cameras is aiming for a niche within the 3d camera market - somewhere between 3d palm cam and the full size, two camera rig.
Hard to expect it to be all things to all people - but more tools for us. (yay!)
>> It's not an Angenieux product, it is by another company.
If you're referring to the V3 then I beg to differ.
It is very impressive. No, it's not 3D, but there is an illusion of depth
that is very different to 2D.
Geoff Boyle FBKS
mobile: +44 (0)7831 562877 www.cinematography.net
>>"To be honest I have a past with them. Last year at the IBC I had asked >>for an appointment and >>they said, come freely at anytime. I went to their booth just to be treated like a student who would >>ruin their product by simply looking at it.
Please accept my apologies for failing to speak with you at the 2008 IBC show. I just read your post, and am very bothered that you went away feeling as though I was not interested in speaking with you. I was the one v3 employee, in the Angenieux booth, at the 2008 show. I am an American, and don't speak using phrases such as "mate," so I don't know with whom you spoke, but it does not sound like me. I am sorry that I didn't know about this sooner, so I might have made amends. No one likes to be treated badly and I am sorry it happened to you. Again, please accept my apologies
At the 2008 show, we featured an AX3 prototype system (with new interface controls) in an Angenieux 19x lens on a Sony camera. It primarily resided on a table, and was not hooked up to a monitor or any other equipment. The camera had battery power, and I encouraged booth visitors to play with the
controls and learn the system. This also leads me to believe that I didn't speak with you, as I would have gladly let you handle the device.
v3 is a 2D technology that can be incorporated into both standard 2D and 3D production environments (i.e., two v3 lenses in a stereoscopic configuration). Our technology visually enhances the depth and texture of both 2D and 3D, and extends the usability of footage shot using v3 technology so it becomes scalable and easily incorporated into the 2D/3D, HD, digital cinema, 35mm, mobile video, and streaming media realms. v3 images can be distributed widely and viewed on any display, without the need for additional hardware or software.
In the past year, we have made significant advances in our technology, as can be evidenced by the video in the following link:
Again, please accept my sincere apologies. I will contact you offline to
discuss the matter further.
All my best,
Vision III Imaging, Inc.
8605 Westwood Center Drive
Vienna, VA 22182
(703) 639-0670 (O)
(703) 639-0749 (F) www.inv3.com
>> it is something in-between both worlds and this makes it unusable for both
Images with the illusion of depth on a website? Or an iPhone?
Seems like a great idea to Me.
Geoff Boyle FBKS
mobile: +44 (0)7831 562877
Apology accepted. But I would not really remember the exact words used one year ago, right?
What I do remember is that I was treated impolitely; I felt forced to leave, felt unwanted.
Now let's put an end to this and talk about important things:
The V3 adaptor is (in my opinion) lacking a target group.
There is 2D and there is stereo3D. In a few years there will be holographic3D too.
Where does it fall?
Nowhere. Why should anyone invest in such a system?
Where can it be used?
I do not think these questions can be answered easily.
In addition, the close-ups on your site are quite shaky, more than what I would consider acceptable.
Also, the extra depth cues provided by V3 are not sufficient to imitate the stereo3D experience, sorry.
>>"Images with the illusion of depth on a website? Or an iPhone? Seems like a great idea to me "
Geoff, there are other ways I think.
Like the Swedish system especially designed for cell phones.
This would not need specialised acquisition equipment, since they are creating 3D out of 2D stepping over colour differences. Sorry, I do not remember the guy's name (he is young and works at a Swedish university); he lectured at Dimension 3 last May. I have to search a bit.
The idea of shooting especially for websites or the iPhone sounds weird to me. Anyone shooting anything should normally want to use his product as widely as possible, right? Even if this is just an option.
Just my humble opinion
+30 6944 725315
>>Like the Swedish system especially designed for cell phones.
>>This would not need specialised acquisition equipment since they are creating 3D out of 2D >>stepping over colour differences.
I am not sure whether you are referring to a European FP7 project for taking 3D to mobile devices like cell phones, or to the paper on real-time conversion of 2D to 3D on Philips WOWvx screens using colour differences.
- The first of the talks at Dimension 3 is a project led by a Finnish university (Tampereen Teknillinen Yliopisto); the project name is MOBILE3DTV (www.mobile3dtv.eu)
- The second, a paper describing a colour-differences approach for converting 2D content to 3D on the fly (demoed with the Spiderman DVD converted to 3D at the booth), was not from a Swedish researcher but a Cuban one: Carlos Vazquez is his name. The project was not developed at a Swedish university, but at a Canadian research centre: CRC Ottawa.
My guess is that the 2D-to-3D JVC real-time HW system shown at IBC might be "inspired" by this CRC approach.