Future of photography: Ren Ng at TEDxSanJoseCA 2012

Before starting Lytro in 2006, Ren spent years studying light field and computational science. His seminal Ph.D. research on light field technology earned the field's top honor, the ACM Doctoral Dissertation Award for the best thesis in computer science and engineering, as well as Stanford University's Arthur Samuel Award for Best Ph.D. Dissertation. The entrepreneurial spark came when Ren bought his first DSLR camera and saw the potential to apply light field technology to picture capture as well as image generation. He decided to apply and extend his theoretical work by making light field cameras available to consumers.

In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TED Talks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.

Original source


24 responses to “Future of photography: Ren Ng at TEDxSanJoseCA 2012”

  1. Very interesting. It's exciting to realize people are still working on the next big step in photography technology.

    Here's what I would personally love to see happen next:
    1) A finished display medium – the actual photograph – that directly uses the vector information. The first interactive-focus photos shown are super cool, for sure; at the same time, you can make the same thing with any old camera and a tripod. You could even automate it with an on-camera app. Prompted by your 3D type demonstration, I'm imagining a new kind of photograph that has both that wide-aperture, Chuck Close, ultra-present look AND the DOF of a landscape photo. Is there a way to combine that 3D tech with eye tracking, so that the viewer literally focuses regions of the photograph with eye movement? I guess the photograph would be a VR headset?
    2) Put the camera in the hands of serious, advanced photographers. Apropos of the subject of depth, there is a whole universe of variation within the idea of what you can do with any camera. Some photographers do more, and at first it can be hard to jump onto the merry-go-round of understanding. Putting the tech in the hands of serious, advanced lifers – i.e. wall photographers with careers – would give you a better idea of what the tech can do – plus obvs it would go further in promoting the consumer product.
    3) Some varied panel of experts to help you with artist relations. E.g. a fine arts person, a reportage person, a fashion person, etc… The famous Philip-Lorca diCorcia quote comes to mind. Ask any BFA which quote I'm thinking of and they will instantly know, and then proceed to explain to you what I'm trying to articulate in a YouTube comment. Also ask any advanced photographer or critic about the history of stereographs – very interesting and relevant to what you are doing.

    Also someone in the comments mentioned the need for an open camera. Very much YES. The code doesn't have to be open source – it's the control of the camera that needs to be open and accessible externally. That could be as simple as making all interface elements assignable to literally any variable within the camera system, plus a single flexible bridge to the outside world, e.g. Bluetooth. People currently doing projects with ugly Raspberry Pi hacks would buy your camera just for that, and discover the new stuff afterward. I don't grok why camera companies don't do this. It doesn't seem like it would drain resources too badly to simply include an unsupported, hidden interface page for assigning controls and turning on a wireless bridge – there for hackers who want it and hidden from the rest. Don't mention it in the main manuals, and don't publish any documentation on it on the main product page. Void the warranty if the hidden page is accessed.

    My first take on this presentation is that you've brought something into the world that has great potential for visual art. I hope you will keep developing it and bring it into wide usage. Good luck!

  2. Can't wait to see future smaller/cheaper Lytro cameras with higher resolution, sharper/noiseless recomposition, larger FOV and parallax, faster refocus/spread etc…

    Right now the only thing missing is perspective recalibration. In the original Stanford experiments they showed that one of the great promises of light fields was being able to pull back from or approach the observation angle, not just shift it right/left/up/down.

  3. The future of photography is an open source camera. A camera that can be re-coded… hacked, exploited and experimented with by the ARTIST, not engineers who THINK they know what photographers want. We all want different things. So give us a blank canvas, not a canvas with guidelines. We hate guidelines.

  4. This guy is arrogant indeed. He thinks he has a great camera that will transform photography when nobody has asked for it. Nobody wants it as a consumer product. There may be applications in medical or security screening, but it is a bad idea for general photography. This guy is clearly a non-photographer; I do not care how much he claims otherwise.

  5. Multiple-focus images can have artistic value, though they can be off-putting. Quick fact: filmmakers experimented with deep-focus cameras in old black-and-white films, but found that audiences felt the "everything is in focus" images were "weird". So this device is a go-between, and since it simulates our natural vision's ability to change the focal point, it actually provides an additional benefit in an image.

  6. Nice toy, but what would impress me is if it could take a picture from the light of a subject in 360 degrees and create a 3D image from that. I don't think that would be impossible; it just needs the right technology. Look up "a trillion frames per second" on TED.

  7. Quite impressive, can't wait for the full-motion video cameras that use this technology. I wonder if they have tried using two Lytro cameras to take a stereo shot, then merging the pictures for even more depth in 3D, i.e. mimicking human sight but in a single image.
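Several responses above touch on how light field pictures actually get refocused and re-angled. In Ng's dissertation, synthetic refocusing is a shift-and-add over the camera's sub-aperture images, and a viewpoint shift is simply selecting one of them. Here is a minimal NumPy sketch of that idea, assuming the light field is stored as a 4D array indexed by lens position (u, v) and pixel (s, t); the array layout and function names are illustrative, not Lytro's actual pipeline.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocus by shift-and-add (a sketch of the technique,
    not Lytro's implementation).

    lightfield: 4D array [u, v, s, t] of sub-aperture images.
    alpha: refocus slope; each sub-aperture image is shifted in
           proportion to its (u, v) offset from the lens center,
           then all shifted images are averaged.
    """
    U, V, S, T = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shifts keep the sketch simple; a real
            # implementation would interpolate sub-pixel shifts.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

def shift_view(lightfield, u, v):
    """Viewpoint shift: pick a single sub-aperture image."""
    return lightfield[u, v]
```

Varying `alpha` sweeps the focal plane through the scene after capture; the left/right/up/down parallax in the demo corresponds to varying (u, v) in `shift_view`. The "pull back or approach" recalibration response 2 asks for would require resampling the full 4D array, which this sketch does not attempt.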

Leave a Reply