michaelcioni · 6 years
Resolution vs Sharpness: Slide 61
Slide 61 sits somewhere in the middle of a 98-slide Keynote deck presented by Panavision and Light Iron at the 2017 Camerimage Film Festival in Bydgoszcz, Poland. I've spent time at numerous festivals, including Cannes, Sundance, and Tribeca, but comparatively speaking, the Camerimage Film Festival is without question my favorite. Among many cultural elements, what sets this festival apart from others is the quality of the lectures and the atmosphere for high-end learning. Unlike the spontaneous panels or Q&As at other festivals, Camerimage has top-quality presentations from both the creative and technical perspectives. Since my personal mantra is always to combine these worlds with what I call a technative approach, Camerimage is an ideal place to hear new ideas, discuss new ideas, and present new concepts.
So last November, my colleagues Dan Sasaki and Ian Vertovec and I presented a resolution analysis in a session at the MCK Theater, a lecture hall a short walk from the Opera Nova, where most of the films at the festival are screened. Our session was titled "The Beauty of 8K Large Format," and the 260-seat room quickly filled to capacity, while the MCK lobby played host to another hundred or so who viewed the lecture on monitors that lined the lobby walls.
Throughout our presentation were excerpts from an interview with cinematographer Peter Deming, ASC, that I conducted a few days before the event. Between Peter's powerful testimonial and our research, I felt the audience was pretty engaged by the time we got to Slide 61. In fact, Slide 61 is actually pretty simple and straightforward. You don't have to fully understand how CMOS image capture works in order to make sense of what Slide 61 demonstrates. But what I found most interesting is what Slide 61 means to different people. To some, it represents justification for a decade of digital cinema passion, with resolution being a familiar anchor point. For others, Slide 61 explains a lot about the tenuous relationship between the different dogmas of the RED and Arri camps. And to some, Slide 61 represents something creatives haven't considered before, perhaps an opportunity to take a second look at how resolution has been not only misunderstood, but misrepresented.

In all, Slide 61 sparked an impressive number of conversations in a dozen different tongues. Over the course of the next few days, I found myself in countless discussions with people from all over the world about our conclusions and was told by a few, "This was one of the best presentations I've ever seen!" (#humbled). Others said, "That slide (61) could change a lot about digital cinema theory going forward." A few even said, "This is the slide heard around the world."

Before we get to Slide 61, I wanted to briefly share a more personal note. This event embodies why I do what I do. Over the years I have had the honor of leading teams of talent who are willing to challenge the status quo, and I always gravitate towards people who are comfortable with being uncomfortable. While all are welcome to disagree with my conclusions, I find few are willing to criticize my passion. My primary goal is to identify the boundaries of where technology and creativity intersect and learn how to leverage that area for improved creative control. While doing that, I aim to keep an open mind as to what living in this intersection teaches me over time. This technological and creative place is often a state of mind; a place where I believe the best ingredients reside for unbridled innovation. I call working in this place being technative.

So here is Slide 61. Just a few colored boxes perched above a few basic numbers. This photo of myself, Dan, and Ian not only represents a crossroads in opinion, it's also one of the most rewarding moments of my career. To get the full impact of this slide, you can watch the presentation here.
[Image: Slide 61, presented on stage by Michael Cioni, Dan Sasaki, and Ian Vertovec]
We all understand that some things in art are binary; that is to say, they are absolute (aspect ratios, framerates, focal lengths, incident light readings, etc.). But we also respect that many things in art are open to interpretation. This is why conversations are so critical and why an open mind is possibly the most important tool for artists, especially in technologically-driven industries like ours. In our Camerimage presentation, we argued that resolution is not only important, it's the core ingredient to the 3 most important mechanical components in creating images from a camera:

1. Acquisition & Exhibition (separating these concepts from one another)

2. Resolution & Sharpness (separating these concepts from one another)

3. Magnification & Perspective (understanding their relationship with 1 & 2)

Every time Dan, Ian, and I made a point, we used a chart (binary) to back it up, followed by an image (subjective) to allow for personal interpretation. Our strategy in delivering this message was to use the combination of technology and creativity together in order to allow each audience member to follow along and draw their own conclusions. So why is it that Slide 61 resonated with so many people?
The actual story of Slide 61 began a few years back when I borrowed the first prototype RED 8K VV sensor from RED President, Jarred Land. At the time, many people on our team (notwithstanding the cinematography community) were concerned about what a 35 megapixel CMOS sensor would do to a face. Specifically, there was concern about the balance between sharpness, contrast, and optics based on a massive jump in motion picture pixel count and pixel density. At the time we were doing initial tests, the climate for large format photography was beginning an upward trend (you could argue, again). Arri had recently released the Alexa 65, a 20 megapixel, 54mm large format sensor. Tarantino had recently directed The Hateful Eight, shot and released in Panavision Ultra 70mm. My father and I attended a special screening of Kubrick's 2001: A Space Odyssey remastered in 4K and released on a brand new 65mm print at ArcLight Cinemas.

So after testing a prototype 8K Weapon, I decided to publish one of our tests and see what the reaction was. AC Phil Newman, Keenan Mock, and AP Megan Swanson helped me shoot a short portrait of my friend Erin Gales, a health coach and professional bodybuilder. On the surface, this portrait was a safe way to gather intel about what people understood or didn't understand about large format and high resolution. The comments on the Vimeo link still demonstrate how new the concept of large format is. They are also pretty clear proof of how significantly expectation and bias impact opinion.

But underneath, this test was a way to further evaluate the Primo 70 lenses using RED's 16-bit, 35 megapixel, 46mm large format sensor and how it would (eventually) behave in the DXL camera. I didn't realize it at the time, but the results of this test ended up being the first step in understanding the elements that would eventually make up Slide 61: perspective, magnification, and resolution contrast. These are the core elements that make a large format image trigger a different response from that of typical 35mm images (it's actually also one of the main reasons, other than aspect ratio, that anamorphic images trigger such a unique response). The equilateral triangle below was designed by Dan Sasaki to measure the balance between these three tenets and how they rely on each other to improve an image.
[Image: Dan Sasaki's equilateral triangle balancing magnification, perspective, and resolution contrast]
It was images from the Erin Gales test (and subsequent tests like it) that helped pave the way for Dan's triangle. In the below frame grab you can see the physical results of a shallower depth of field and increased z-depth based on the focal length (magnification), less distortion or trapezoidal geometry (perspective), and more subtle transitions in dynamic range from shadow to highlight (resolution contrast). These exact characteristics in this exact situation are only possible through the combination of high resolution and a large format sensor.
[Image: frame grab from the Erin Gales large format test]
Now, it is possible for these tenets to be independent from one another. For example, you can have high resolution (8K) on a smaller sensor (such as a Helium), which results in less magnification but more transitional dynamic range (due to a 3.6µ pixel pitch). Or you can have less resolution (6K) on a larger sensor (59mm), which results in more magnification but less transitional dynamic range (due to an 8.25µ pixel pitch).
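If you want to check that arithmetic yourself, pixel pitch is simply sensor width divided by horizontal photosite count. Here is a minimal sketch in Python; the sensor dimensions are approximations I'm using purely for illustration, not official specifications for any camera:

```python
# Pixel pitch: sensor width divided by horizontal photosite count.
# Same resolution on a bigger sensor -> larger photosites and more
# magnification for a given field of view; more resolution on the
# same sensor -> finer pitch.

def pixel_pitch_microns(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Return the photosite pitch in microns."""
    return sensor_width_mm * 1000.0 / horizontal_pixels

# Approximate widths, for illustration only (not official specs):
examples = [
    ("8K on a ~30mm Super 35 sensor (Helium-class)", 29.9, 8192),
    ("8K on a ~41mm VV sensor (DXL-class)", 40.96, 8192),
    ("6K on a ~54mm 65mm-format sensor", 54.12, 6560),
]

for label, width_mm, pixels in examples:
    print(f"{label}: {pixel_pitch_microns(width_mm, pixels):.2f} microns")
```

Run with those assumed widths, the three cases land at roughly 3.65µ, 5.0µ, and 8.25µ, which is the pitch spread the paragraph above is describing.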
This is probably as good a place as any to bring up a comment you may have heard before...maybe you're even thinking of it now. In fact, if you ever want to see Ian and me get visibly frustrated, just walk up to either of us and say, "We don't want more pixels, we want better pixels!" Not only does this statement make no sense, we believe it is merely a product designed to inaccurately satisfy a significant confirmation bias. Here is how I break this phrase down: in order to understand the claim, "We don't want more pixels, we want better pixels," we need to be able to measure it. Assuming the (barely) quantifiable term in this statement is the word "better," we need to define what actually makes a pixel better.

At Panavision and Light Iron, we believe a better pixel is one that offers a higher-quality or more malleable output. We argue a better pixel is one that has more dynamic range, more flexibility, and more range of manipulation. Do you agree? A better pixel is one that is generated with a higher signal-to-noise ratio and a greater bit depth. A better pixel produces an image that is smooth and can accurately replicate what the lens maker designed. In other words, a better pixel (or arguably the best pixel) is the one that creates a final image that can be changed any way you want with little to no compromise.

Another way of understanding pixels is that better input pixels will always produce better output pixels. Pixels on a sensor behave like light meters, and when you run the camera, pixels are given starting input values based on the light and colors they are fed. And because pixels work in concert to generate an image as a whole, the increase in overall pixel count is the source of where manipulation, optical representation, signal to noise, bit depth, clarity, transition, contrast, and ultimately creative control begin. In other words, pixels initially work independently to register a high quality input and then work together to create an image as a whole. Therefore, a higher quality input yields a higher quality output. So if a "better" pixel is measured by the collective properties of output, its "better" qualifier is enhanced when more pixels work together at the source. Therefore, I believe the statement is made more accurate by saying, "We want more pixels to create better pixels."
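To put a number on the "more pixels create better pixels" argument, here is a minimal simulation sketch. It assumes a deliberately simplified sensor model: a flat gray patch with uncorrelated Gaussian noise per photosite and a plain box-filter downsample. It illustrates the averaging principle, not any real camera's processing:

```python
import random
import statistics

def capture(n_pixels: int, signal: float = 0.5, noise: float = 0.05) -> list[float]:
    """Simulate photosite readings of a flat gray patch with per-pixel noise."""
    return [signal + random.gauss(0.0, noise) for _ in range(n_pixels)]

def box_downsample(samples: list[float], factor: int) -> list[float]:
    """Average each block of `factor` input pixels into one output pixel."""
    return [statistics.mean(samples[i:i + factor])
            for i in range(0, len(samples), factor)]

random.seed(7)
delivery = 1000  # output pixels in our toy "delivery" format

native = capture(delivery)                              # capture at delivery resolution
oversampled = box_downsample(capture(delivery * 4), 4)  # capture at 4x, then downsample

# Averaging 4 noisy samples cuts the noise roughly in half (sqrt of 4).
print(f"native noise:      {statistics.stdev(native):.4f}")
print(f"oversampled noise: {statistics.stdev(oversampled):.4f}")
```

In this toy model, the output pixels built from four source photosites come out with about half the noise of the 1:1 capture, which is one concrete sense in which more input pixels yield "better" output pixels.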
We discussed this concept, among others, when Dan Sasaki, Ian Vertovec, and I presented 90 minutes of analysis on the importance and beneficial effects of creative control granted through high resolution. It's important to note that our conclusion was not our objective; in other words, we were not set on proving resolution is power through our conclusions, rather our independent research concluded there is power in resolution. However, I've done well over a hundred presentations in my life, and there was something odd about this one, which is actually what prompted me to document this entire story:
When it was over, there were no questions.
In a standing-room-only theater with overflowing hallways full of attentive artists, technologists, filmmakers, students, and even competitors, no one raised their hand. To buy some time, and in an attempt to cajole even a delayed reaction, I pretended I couldn't see the audience (even though I could, with the houselights slightly up). I lifted my hand to my brow, squinted my eyes, and lied like a pro, saying, "I can't really see you with your hands up, so you'll have to just speak up."

In the awkward silence, my first reaction on stage was fear. Who wouldn't be scared of presenting a mountain of work to what was apparently a cemetery? There we were in a gigantic room, under the banner of Panavision, at one of the most prestigious festivals, presenting years of theory to the best in the world, and virtually no one had anything to say. No critiques? No comments? No connections? In this giant room below the shadow of our presentation Keynote slides, my fear turned to self-doubt. But through an extended moment of patience (which admittedly felt like minutes), eventually @RedSharknews made the comment, "Why has no one assembled these concepts together before? This is profound! Bravo!" The silence, however, continued. My instinct was to rationalize a motionless room, and my mind quickly filled with all the mistakes, the fumbles, the imperfect slides, and the misspellings that possibly contributed to accidentally murdering our audience. I then attempted to make a joke to lighten the mood: "Sooo, was it that easy??? You all just get it???" The room chuckled. Then, somewhat thankfully, one person (a competitor I have tremendous personal respect for) made a comment. That was it. We responded with a respectful disagreement and the presentation ended.

At first I thought we had failed to defend our conclusions. I thought my slides were too few. Then perhaps too many. We had not previously practiced the deck, so maybe the audience felt we were unprepared. But as it turns out, none of this was the case. And that's why this experience will never leave me. Over the next few hours, days, weeks, and now even months, more and more people have reached out to say this information was so profound, even revolutionary, that people simply needed time to react to it. One person explained to me the following day, "This information is so profound it cuts to the core of some of our beliefs. You challenged my beliefs so well that I am beginning to doubt what I thought was true. When I doubt myself instead of defending myself, my reaction is to remain silent. In that moment, I think the whole room was going through the same process."
That made me feel a lot better and I hope at least a few of these concepts resonate with you, too. 
I don't yet know if this event was a momentary thing, lightning in a bottle, or perhaps the beginning of a new awakening, but what I do know is that the images we are capturing in 8K are different in ways we could never have predicted when we started this journey. Leon Silverman once told me, "In the complex world of art and technology, the teacher is only one semester ahead of the student." That's exactly what encapsulates moments like this. And that's exactly what personally drives me to keep exploring what's around the next corner.
| m |
michaelcioni · 7 years
Nuclear Fusion: RED Hydrogen
My father once told me, “Sometimes those who lead get so far ahead, followers mistake them for the enemy.”
Entrepreneurs are used to facing ridicule and doubt. We have what's referred to as a "positivity bias," which allows us to focus on the success of incremental problem solving instead of being demoralized by predictable points of failure. I tell my colleagues when we brainstorm, "Skeptics are always welcome." I find that skepticism from educated and rational people can be a useful tool in triangulating the trajectory of a future that is still on the fringes. You just have to filter the logic from the Luddite. And while most people tend to stay in the safety zone where things are predictable, eventually fringe ideas mature and stabilize, and the biggest resistors can slowly evolve into paying customers.
RED knows this process better than most, only this time they have a lot more experience with the challenges of a blue ocean product like Hydrogen. Even though there are numerous people boldly sharing negative reactions to today's product announcement, the truth is that without companies like RED and visionaries like Jim Jannard, the world's technological trajectory becomes a default future.
The skeptics might not realize it, but their negative comments actually have two positive effects on entrepreneurs. First, skepticism is like jet fuel in a turbine engine: the more fuel you compress, the more powerful the thrust. Skeptics who make a lot of noise create echoes through their connection networks, which inadvertently validates the mission of the inventor. Oftentimes, the greater the resistance to an idea, the greater the potential impact the idea actually has. Entrepreneurs know this.
The second thing skepticism does is work as a product roadmap.  Many skeptics make good points through logical criticisms and when a good company is listening, those criticisms can lead to course corrections.  This is precisely why Hydrogen was announced before its release.  In the case of the Panavision DXL camera, as product manager I insisted on showing a concept camera to the market 7 months before delivering it.  This allowed us to capture valuable input and course-correct the product in the months leading up to anyone actually shooting with it.  During this time, we were paying close attention to the critics and now I travel the world on tour with DXL hearing in city after city, “You guys thought of everything!”  We actually didn’t think of everything, we just listened more to the skeptics than to our fans.
But fans are also part of the equation, and they are necessary as early adopters to help evolve the product, build infrastructure, and eventually bring the skeptics over.

On the surface, RED's announcement of the Hydrogen is not much more than an overpriced, underwhelming SolidWorks picture of a phone. As far as we've been told, that's what the product is (or at least appears to be). Because there are still many unanswered questions, what I encourage people to do is think outside the box and try to understand what the product means. From the perspective of meaning, new ideas begin to emerge as to what is possible, or even probable, and how meaning can change the market. Here are some concepts that I see when I examine the product's meaning, its makers, and its potential.
• Apple, LG, Samsung, and Sony are focused on the average consumer. Their products and features are based on pricing tolerances, which often means the products they release have known compromises in order to achieve a target price point. Because RED's core business is in the professional and prosumer markets, their phone can pick up where the incumbents are financially forced to leave off.
• RED is diversifying. There are a finite number of people RED can consistently sell cameras to. It's conceivable that the RED professional market is beginning to flatline, which is likely the motivation for the Raven being introduced at a specific price point to add new customers to RED's portfolio. In most cases, it is healthy for a company to diversify, especially when it desires to maintain relevance in the existing market (as Jim Jannard and Jarred Land stated). This is a sign of health, not of instability.
• Professionals are special. Apple has been under tremendous pressure from professionals (including myself) to expand its product roadmap to include the often unique needs that the small market of professionals requires to do our jobs. When it comes to the phone market, I have the same phone my mother has, which is to say one is no more "pro" than the other. While I may use additional tools, accessories, or apps in order to increase my phone's professional appeal, the products, at their core, are identical. I see Hydrogen as an attempt at identifying a market that has been overlooked: a specialized phone for professionals who work in multimedia.
• Users make the best inventors. When Reed Hastings was penalized for returning Apollo 13 late, Blockbuster had no idea they were the core inspiration behind Netflix, which eventually put them out of business. I have spent a fair amount of time with Jim and Jarred, and looking back, this Hydrogen idea has been with them for quite a long time. They are always comparing and complaining about smartphones. In the middle of a routine conversation, Jarred will get a text and complain about the phone, its inability to sync, its service provider, the quality of the battery, the OS, or how poorly it fits in his gigantic hands. If you've been keeping track of how smartphones let you down for 10 years, you'd probably be an excellent candidate for inventing a better one.
• Focus on the knowns, not the unknowns. With over 6 months before expected delivery dates, information is, at this moment, scarce. But from the people we know and the market we can observe, there are some really excellent ingredients that are likely to be incorporated into this new product in numerous ways:
• RED is a modular company, and modularity means more choices
• RED's core business is advanced compression, and it will incorporate that into the product
• RED has virtually unlimited possibilities with large sensors, and this will be one of the crown jewels of Hydrogen
• RED's claims about the holographic screen are no more outlandish than a 4K camera that shoots to CF cards for $17,500
• Helium is made out of Hydrogen, and that metaphor means integration with cameras is guaranteed
• There are numerous crossover points between RED cameras and Hydrogen, which means R&D is in a powerful feedback loop that could accelerate developments of both product lines
In the end, few of us (especially creatives) want a default future. The path to reformation requires trailblazing, which means there will be blood. Unfortunately, as evidenced by many of today's top critics, psychological changes are probably more difficult to navigate than technological ones. But thanks to the ingenuity and vision of RED, I believe when you examine the claims, the market, and their history, they possess what it will take to pull it off.
But it doesn't end there: we still play an important role in this process as well. RED can't get this right without our input, and that's why the entrepreneur in me believes our role is necessary to make this product even better:
• Some of us are the skeptics: share your criticisms in logical and constructive ways
• Some of us are haters: unnecessary malice can often mask your valid input
• Some of us are fans: don't let your love for RED prevent you from having a wide perspective
If there is a healthy balance among all these things, I'm fairly confident we're all going to experience a very positive chain reaction when RED Hydrogen is released.
Michael
Twitter: @michaelcioni
Instagram: michaelcioni
michaelcioni · 8 years
Arming Yourself with 8K Weapons
Back in June of 2015 I was able to do some preliminary tests with the first 8K Weapon camera. Some of you may have seen the "8K Aerials" Light Iron photographed and presented at CineGear Expo on a 98" BOE 8K television. For people in attendance, CineGear probably marked the first time they saw an 8K image on an 8K display, and I believe it was our first peek into what 8K means for the future...
Since then, RED has been refining the Weapon 8K, and after a new camera test I just completed, I am certain Weapon 8K is absolutely RED's best work to date. If you are a RED enthusiast, you are going to love what this camera can do for you. But moreover, if you are RED-cautious, you are going to love what this camera can do for you.
There are already a number of people chatting about how unnecessary 8K is and that the "race for resolution" is a pointless contest.  Let me explain why this way of thinking is flawed:
1. COMPONENTS of UHD ADOPTION I believe UHD (4K) will be the mezzanine standard for home entertainment displays until around 2025. By this I mean we are entering a decade (minimum) of consumer UHD adoption, which means you can get comfortable with investing in it today. By 2018, it will be impossible to purchase an HDTV, and by 2020, broadcast technology will deliver content to less than 50% of consumers, with OTT groups becoming the leading majority of content distribution (and, for some of them, content creation). Once OTT serves more than 50% of the population, they will make all the rules, and they're already telling us what those rules are going to look like.
2. BROADCAST VS BROADBAND I have had to get comfortable with the notion that the internet plays the most important role in the road to 4K exhibition, which is to say that the world of motion picture cinema is unfortunately not going to rise to the occasion like I had hoped. Already guided by brave leaders like Netflix and Amazon, 4K capture is the minimum standard of acquisition for their original content. Unlike broadcasters (many of which are stuck with 720p 59.94i broadcast infrastructures), OTT groups have a major technological advantage: they can instantly upgrade their exhibition systems with relatively inexpensive and simple software upgrades. They know that in just a few short years, every home entertainment system will not only be smart, it will be bi-directional, enabling UHD resolution, high dynamic range (HDR) pictures, and a wider color gamut (WCG) as adoption of these technologies widens. I call UHD, HDR, and WCG the "Tripod of Fidelity." When I was at NAB 2015, I asked a leading major network engineer what it would take to implement this tripod of fidelity at his network like the OTT companies. His reply was, "At this point, it would be impossible."
3. AVOIDING THE UPSCALE If you've ever run the video tap of a modern digital cinema camera to a Sony OLED monitor on set, you can look at that picture and say with confidence, "Wow! HD looks really good!" The fact is, HD does look good when super-sampled from a higher quality source. But after those same images are conformed, colored, compressed, and consumed, they are a long way from where they started. By 2020, every image not mastered and delivered in UHD, HDR, and WCG will become mangled in a calamity of unpredictable scale, contrast, and color conversions. All of us have experienced the tragedy of SD>HD broadcast upconversions. To master in HD or 2K today is to relinquish your image to any number of factors you cannot control in the near future. UHD/HDR/WCG mastering is the only way to ensure present-proofing.
4. EMBRACING THE OVERSAMPLE Avoiding the upscale is a major part of the equation, but the next critical step in superior images is ensuring we have more pixels than we actually need. When the RED ONE hit the market in 2007, the images were so fantastic because, even at Bayer-pattern 4K, they were super-samples of 2K and performed wonderfully in HD. Now that OTT UHD is becoming the new normal, we need to apply that same logic to today's content, which is where the introduction of Weapon 8K becomes a powerful tool. Don't just use resolution as a pixel meter for bragging rights; rather, use it as a Swiss Army knife multi-tool that can be leveraged in a number of creatively constructive ways.
5. RESOLUTION IS NOT SHARPNESS When you look at your Facebook wall, you instantly know the difference between an iPhone photo and a DSLR photo. Yet on Facebook, if the compression and size of the photos are identical, how can you tell? Certainly the iPhone is not blurry. In fact, I find iPhone photos to be remarkably sharp, with surprisingly excellent dynamic range and color. So what is the intangible difference between a low-res iPhone photo and a low-res DSLR photo? The digitally educated understand that you cannot make a perfect circle using pixels because pixels are merely a series of polygons. Film, ironically, can actually photograph a perfect circle because it's not limited to polygon arrangements. By this logic, the more pixels you can apply towards a perfect circle, the more perfect the circle appears to be. And there is the #1 reason for lovers and haters alike to test Weapon 8K: 8K is not about sharpness, it's about smoothness.
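The circle argument is easy to test yourself. This toy sketch rasterizes a circle onto progressively denser pixel grids and measures how far the pixelated area drifts from the true analytic area; watch the error fall as resolution climbs. It is a geometric illustration of "smoothness," not a model of any particular sensor:

```python
import math

def circle_area_error(grid: int) -> float:
    """Rasterize a circle filling a grid x grid canvas; return the relative
    error between the pixel-counted area and the true analytic area."""
    radius = grid / 2.0
    center = grid / 2.0
    inside = sum(
        1
        for y in range(grid)
        for x in range(grid)
        # Count a pixel as "inside" if its center falls within the circle.
        if (x + 0.5 - center) ** 2 + (y + 0.5 - center) ** 2 <= radius ** 2
    )
    true_area = math.pi * radius ** 2
    return abs(inside - true_area) / true_area

for grid in (16, 64, 256, 1024):
    print(f"{grid:>4} x {grid:<4} grid: {circle_area_error(grid):.4%} area error")
```

Each jump in grid density shrinks the gap between the polygonal approximation and the true circle, which is the same reason round objects render more smoothly as sensor pixel counts climb.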
When the stills world migrated to digital cameras, one thing they weren't necessarily looking for was film-like resolution. What they were actually looking for was a more film-like smoothness. And the smoothness they were after became better and better as sensor resolutions increased. And that is what 8K is all about. The same way a 32 megapixel DSLR photo can look good after being rendered down for Facebook, Weapon 8K footage will look good when rendered down to UHD.
6. WEAPON IS NOT EPIC I first noticed "the smoothing effect" when the 6K Dragon came out in the Epic. I remember seeing the first set of dailies come in from Don Burgess, ASC, on the 2017 release Monster Trucks and saying to my team, "This doesn't look sharper...but it does look smoother." Over time it was clear that the super-sample of Dragon 6K was producing smoother transitions on round objects, which made them appear more realistic (the opposite of the hidden/blurred aliasing or approximating that lower-resolution Bayer-pattern cameras must do to hide imperfect edges). Now, with 32 megapixels in a Weapon 8K, people will get the best representation of what resolution feels like without sharpness.
7. THE SUPER SENSOR So as I said at the beginning: "There are already a number of people chatting about how unnecessary 8K is and that the 'race for resolution' is a pointless contest."
What I encourage everyone to consider is that 8K is not the new 4K. Instead, 8K is about to open up an entirely new era of cameras which I now call "the Super Sensors." Super Sensors are camera systems like the Alexa 65 or Weapon 8K that capture with so much resolution that (like a DSLR) they are able to create a new level of smoothness that makes things look more like a photograph and less like a digital representation of film. Ansel Adams shot large format and no one has ever said, "His images look too sharp!" On the contrary, Adams' images look smoother, cleaner, and multi-dimensional because they were super samples. These are the creative words I think people will begin to use when describing what they see while shooting Weapon 8K.
But creativity aside, there are also technical elements in my test that I was excited to evaluate. I used the new Panavision Primo 70 lenses, which cover the 8K sensor beautifully up through my widest lens, a 24mm. They are a perfect blend of clarity, size, weight, and texture. If ultra-high quality glass is important to you, you need to get to the nearest Panavision and do some tests with the Primo 70s. You will fall in love with how perfectly they harmonize with a super sensor.
I also tested our first Weapon Module called the Panavision Hot Swap Module.
[Images: the Panavision Hot Swap Module]
This module seamlessly ties into the Weapon body and provides:
• D-tap on top, powered from either input
• audio in & out
• 5V USB
• 00 LEMO control connector
• 5-pin timecode in with genlock
• 3 HD-SDI outputs
• 3 12V outputs
• 2 24V outputs
• hot swap between studio power and Anton Bauer gold mount
• built and 3D-printed out of carbon polymer
• total weight of 8K Weapon camera & module is 5.4 lbs
This is the camera exercise I did with the help of my friend, Erin Gales. Everything was shot on Weapon 8K at ISO 1280, except for a second camera angle during interviews, which was a Dragon 6K. Special thanks to Phil Newman, Megan Swanson, and Keenan Mock for their help on set.
michaelcioni · 11 years
ENDER'S GAME
For most of my life my mind has wrestled with an ongoing question: what do you do when brawn seems to overpower brain? I have always been small in stature, always the shortest in my class, and, therefore, always somewhat limited in what I could physically accomplish. But don't we all have mechanical limitations to our abilities? Don't we all share some sort of inability based on what makes us who we are?
I think so.
For me, limitations of a physical nature simply revealed that "there must be another way." Whether it was something high up on a shelf or something too heavy to lift, the act of creative problem solving became both the source of fuel and the satisfaction of success. Little did I know that this type of arrangement and experience, as both a child and an adult, was absolutely the best preparation for taking on the life of an entrepreneur in an industry on the brink of change.
Author Malcolm Gladwell's latest book, "David and Goliath," is the story of the complex relationship between strength and weakness. Based on the old Bible story, Gladwell is quick to point out that giants are not always as strong or powerful as they may seem. In fact, David wasn't the lowly sacrificial lamb the Philistines thought was about to commit suicide; rather, David knew he was going to win before anything ever happened. So how could a shepherd boy armed with five stones and a sling walk up to an eight-foot warrior and know, without a doubt, that he was going to win? The answer is simple: David was not going to fight on Goliath's terms.
All too often, we find ourselves losing battles because we are fighting them on the terms of our adversaries. The story of David and Goliath is not one of luck, but one of control. David knew that the source of Goliath's greatest strength was simultaneously the source of his biggest weakness: size. And that's exactly what the information age is proving over and over and over again: the size, age, and design of giants can be overtaken by newer, smaller, and more innovative ideas that are not playing by the same rules.
Recently, the Light Iron team had the pleasure of collaborating with one of the best cinematographers of all time, Don McAlpine, on the film, Ender's Game.  Like us, Don recognizes Goliath when he sees him and has no intention of falling into a trap.
In Don's words, "I chose RED because the Hollywood establishment had produced so much anti-propaganda that I knew it must be equal or superior to the more established equipment. While I will leave the future equipment requirements to younger and more agile minds than mine, I will, however, be very pleased to consider any device that can extend our vision." Fittingly, Ender's Game is a film about strength and weakness and an unlikely character who succeeds by holding strong to a vision that others lack.
To my friends and colleagues: whenever I am engaged in situations I do not control, I get confused, I become fearful, and I am prone to make mistakes in a battle I most likely cannot win. It's overwhelming. I know many of you know what that feels like. But when I take control, make predictions, and adequately prepare based on known weaknesses, I can face the biggest giants with the certainty that I will succeed. In the business of digital cinema, when you arm yourself with workflow and innovation, you move the odds in your favor. This is the story of how just a few self-driven people armed with workflow and innovation were able to manage one of the biggest films of the year. And it's a story that, if you "get it," demonstrates how you can accomplish exactly the same thing.
https://vimeo.com/78581143
michaelcioni · 11 years
"Advancing the Mind at the Same Speed as the Processor"
Think about it: Processor upgrading compared to mental upgrading.
While I'm not attempting to make an argument for the theory of the technological singularity (not yet, at least), I would like to point out that many technological developments for cinema are ahead of the average user. Looking back, the transformation of the motion picture industry from a fully linear/analog group to a fully file-based/digital group is now almost two-thirds complete. What started en masse in 1999 I estimate should be complete by 2020, and we're now past the halfway point. But as the d-cinema finish line approaches over the next 5 years, the widening gap between the processor and the mind is not just easier to see, it's becoming more unguarded and, in some cases, worrisome.
My observation comes from the discrepancy between the spoken desire for progressive techniques and the ultimate choice to simply maintain the status quo. I know I’m not the only one who notices this…
While some of us are motivated to examine and explore the pros and cons of emerging technologies through real experience, some of us draw conclusions indirectly based on the opinions of others. I believe that an evolving education may be the most valuable tool we have in the d-cinema transformation, and that those who elect to lead for the right reasons will have a memorable place in history while the luddites will be forgotten.
To prove my point, I’ve tried something a little different: an innovative tools case study.
In my spare time, I compose and record music on about 6 different instruments that I play on a regular basis. Sometimes I get to collaborate with other musicians, and earlier this year I was introduced to the up-and-coming artist Kayte Grace (www.kaytegracemusic.com). After collaborating with Kayte on her latest EP, we began talks about a music video, which I was interested in using as a vehicle for a case study of some innovative tools that most people haven't had direct exposure to. So the following case study was designed and documented in an effort to demonstrate the validity of the tools I chose to experiment with on this project. In addition, I wanted to demonstrate that taking calculated risks is a behavior all of us (myself included) need the courage to consistently practice.
*NOTE*
I have been consistently asked to consult for companies over the years, and I enjoy evaluating tools and their impact on the creative process as part of my role as a filmmaker and facility owner. I can see my fingerprints, and those of my partners, in many of the tools professionals use. But when companies want endorsements, I never really know how to do that well. This case study is my attempt at un-marketed information. These are simply examples of technology I particularly enjoy working with. This video was done without the consent or request of any of the associated companies mentioned.
• No company saw an edit before I uploaded it
• No company contracted me to do this in any way
• I was not paid for any of this information
| m |
https://vimeo.com/73797466
michaelcioni · 11 years
The DIT Dilemma
This entry was based on a reduser.net thread about building DIT carts.  The thread asked a lot of challenging questions about the process, the tools, the players, the evolution and the future.  For reference, the original thread can be viewed here:
http://www.reduser.net/forum/showthread.php?88935-Building-a-DIT-cart
The official role of a DIT is one that continues to require discussion. The DIT is hard for anyone to "officially" define, though there are rules and regulations that suggest a textbook definition. But the more one talks to DITs, the more you find they are not consistent in their feelings about the job. This is not normal. If you ask pilots what they do, they tend to agree on their role. The same goes for directors, and even script supervisors and DPs. This suggests that the future is not yet written for DITs, which means they have the unique opportunity to either shape the industry going forward in their favor, or let another entity define their role and even make the DIT an unnecessary expense. Going into 2013, there is significant evidence for both arguments.
[Response to the reduser.net thread]
I've enjoyed reading this thread since it started, and other threads like it.  There are some good points made by all contributors and I take the time to read each one and "feel out" where the writer might be coming from.  As they say, try to imagine yourself in another man's shoes before you jump to conclusions.
That said, I want to make perfectly clear that what I'm about to write is based not only on extensive on-set experience, but on relationships with the production supervisors, unit production managers, producers, production companies, and studios who run digitally captured features and television shows. Some DITs recognize that the "Light Iron way" of managing the DIT and data management roles differs from other entities. Some prefer our arrangement, others do not. But the reason Light Iron manages the DIT process the way we do is because we largely designed our system based on the feedback we receive from the people who do the hiring and firing. So take a read of my comments if you like, and understand that I oversee the on-set dailies process of close to 100 projects per year - which accounts for more than 1,000 shoot days annually. I don't have every answer, but I've come across a lot in a short time, and I deal with DITs and the people who pay them.
Consider this: if you are interested in knowing what a DIT does, ask a DIT. They have plenty of ways to answer that question. But I am more interested in knowing what a paying production thinks a DIT should do, so I asked the people who hire them.
From there, it should come as no surprise that the role of the DIT carries some controversy with it. Some readers may feel the word controversy is a bit extreme, but I can easily say that the role of the DIT is one that is consistently debated (partially proven by this post). But the debate over the DIT doesn't stem from the DITs themselves - why would a DIT ever want to have to defend their own relevance? No creature on earth wants to have to defend its very right to exist, yet DITs constantly fall into this trap. No, the debate over the DIT comes from the top down and from the growing number of productions that choose *not* to have a DIT as part of their arrangement. While I often feel the anti-DIT position is definitely short-sighted and (in many cases) flat-out wrong, productions that regularly operate without DITs do not seem to demonstrate an inability to make their shoot days or create high quality content. So how is that possible?
Well, in my experience, the DIT dilemma is a lot more serious than some realize. No other role on digital productions is equally "in question." No one is questioning the relevance of DPs, script supervisors, actors, or camera operators. But whether or not a DIT is needed is a common discussion in the production office and in above-the-line meetings. I have personally been in dozens of production meetings on large and small productions in which *not* having a DIT has been fervently discussed. Since I am ultimately in favor of the DIT, my vote counts in that direction, but there are plenty of cases in which DITs are not hired on a show or are demoted to another ancillary position. If anything, that should be the initial reason DITs should be thinking about this issue: they have been named. And once you are "named" and defending one's relevance comes into question, it becomes more and more difficult to maintain favor as time goes on and trends inevitably shift.
Ironically, it was during this observation that my partners and I continued to examine the role of the post house as well. While DITs have been named, they're not the only ones: post houses are also being challenged. And then we thought: these two related areas represent endangered species in our changing media ecosystem. Consider this: if a measurable portion of the industry feels like it no longer wants to rely on post houses, and another measurable portion feels like it no longer wants to rely on DITs, then the beginnings of a predictable outcome start to be revealed. Clearly these two issues are related in some way! We can even assume that some portion of these two camps believes the same thing, overlapping their no-post and no-DIT notions together. Those became people worth targeting and asking "why?" The results are typically plain and simple: economics. Everyone wants more for less. I don't care if it's a producer on a film or your mother at the grocery store. There is a perceived notion in the eyes of many producers that the DIT and the post house are more expensive than the return service they provide. And for many, that's not a new perspective, but until now, an alternative simply didn't exist.
The data that I've collected and successfully applied to my business suggests that the post house is less desirable than ever before and, in terms of dailies provisions, will not exist by 2017. Mark my words. Likewise, the sophistication of cameras continues to increase, and for the DITs who have been in the game for 10 years, you know that many of the tools you used to require to "normalize" images on your cart have been absorbed into the camera itself. In fact, I predict that by 2021, all the capture, transcodes (there won't be transcodes, but the equivalent of the transcode), sync, color, windowburn, watermarking, versioning, color space conversions, and even lined-script notes based on totalcode-timecode during capture will *ALL* be recorded and managed by the camera, saved to an online cloud server, and instantly distributed worldwide. In other words, a significant portion of what Light Iron does today to make its money will not exist in 10 years (which is the same for thousands of people around the globe). Again, mark my words.
These predictions are based on following data that has been compiling for 10 years, analyzing Moore's Law, talking with manufacturers, evaluating the market's evolution, and making a few educated guesses. The result: in 2021, we will not have DITs or dailies post houses. Sure, I'm scared, too, but I know enough of my own ability to predict the market that I intend to evolve along with it - as opposed to devolving in spite of it (as some foolishly attempt to do). If you are a DIT today, I can assure you that you won't be a DIT in 2021. Maybe that's a relief :-) But it means that one needs to find ways to A) build a career that leads to professional satisfaction in the future and B) extend your relevance today as far into the future as appropriate.
For those who are interested in what exactly that looks like, I can tell you what Light Iron does. Again, while some people disagree with the Li model, I can tell you that it has changed the lives of the producers who take advantage of the system and of the people who are our regular operators. We call it "OUTPOST." OUTPOST is not the only system out there, but it's the largest fleet of mobile systems and has cumulatively done more shoot days than just about everyone else combined. To illustrate the significance of that point: Light Iron is a company of fewer than 30 people at the time this was written. Currently, there are 16 carts out in the field servicing shows internationally. Concurrently, we're in the middle of 8 DIs. That's a combined active slate of 24 projects. No post company has ever had a business model that can service nearly a 1:1 ratio of feature film projects to employees. That has never been possible before, and that is why my findings should be taken very seriously. When the implications of a statement that significant are fully comprehended, the future and the past will have finally and fully divorced.
On Light Iron systems, we have been tailoring our tools so they can provide these popular processes on a set (a minimal sketch of the first two items follows the list):
• Checksum
• Triplication
• Visual QC
• Automated Scene and Take naming
• CDL color
• Advanced Live Grading (based on a new system we are announcing publicly in 2013)
• Sound Syncing with 1/4 frame accuracy
• Transcoding at over 90 frames per second
• Parallel rendering
• Advanced reporting (PDF, CSV, XML and TD, ALE)
• Advanced iPad integration
• LTO [automated robot support]
• (Virtually) unlimited camera support
• ACES or 709 color space management
• Transcoding to MXF & Quicktime with custom naming/automated Scene and Take metadata
• CDL/ALE/ArriXML/XML/FLEx/EMD/RMD sidecars
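Here is the promised sketch of the first two items, checksum and triplication: hash the source, copy it to several destinations, and confirm that every copy hashes identically. This is a generic illustration of the technique in Python, not Light Iron's actual software, and the file paths in the usage comment are hypothetical:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-gigabyte mags never sit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def triplicate(source: Path, destinations: list[Path]) -> None:
    """Copy `source` to each destination, verifying every copy by checksum."""
    reference = sha256(source)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy = Path(shutil.copy2(source, dest_dir))
        if sha256(copy) != reference:
            raise IOError(f"checksum mismatch on {copy}")
        print(f"verified {copy}: {reference[:12]}...")

# Hypothetical paths, for illustration only:
# triplicate(Path("A001_C001.R3D"),
#            [Path("/raid/master"), Path("/shuttle_a"), Path("/shuttle_b")])
```

The design point is that a mag is never considered "backed up" until each copy independently reproduces the source hash; a copy that merely finished without error proves nothing.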
Admittedly, there are a lot of very smart people who can build systems capable of all of the above. But just as there are differences between a Dodge Neon, a Dodge Ram, and a Dodge Viper, so there are differences between tools that claim to provide the same thing. On Li OUTPOST carts, we have operators who handle up to 4 hours of ArriRAW and provide everything listed above on the set. Even on our smallest mobile system, the Lily Pad Case, we had an operator in Africa average 6 hours of dailies per day, which included backup, color, syncing, rendering to Avid MXF and H.264, and uploading to the web for dailies viewing. But on the average show that shoots 2.5-3.5 hours of footage per day, the operators are having fun doing all this so quickly and efficiently. Creative liberation is what speed delivers, and the creatives you serve sense that right after the first mag.
Sound people play along when we manage expectations correctly. Sound will be perfectly synced - in fact, I teach a syncing training course that gets people to learn how to sync takes in under 8 seconds per take (which means a 20-take magazine can be synced in about 2 minutes and 30 seconds, even with TC drifts). We build custom drives that eliminate slow downloads and transfers - our systems take that into account with 500MB/s read/write shuttles (i.e., once off the mag, we move 1 hour of Epic 5:1 to the Li shuttle drive in 7.5 minutes). Smart people make it clear that doing this correctly is expensive - which is the truth. Be prepared to invest if you want to do this without compromises. But when Light Iron is able to improve the horsepower with more significant investments, realize that is an answer to the call of the production, not a threat against weaker, less expensive owner-operator tools. I don't set the pricing, I simply install what is required to solve the above list. That requires a good DIT to have access to the latest and greatest, and even to develop improvements in software and hardware.
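For the curious, those shuttle numbers are internally consistent. A two-line sanity check, using only the figures quoted above:

```python
# 500 MB/s sustained for 7.5 minutes moves roughly 225 GB,
# which is the implied size of one hour of Epic 5:1 footage.
drive_speed_mb_per_s = 500
transfer_minutes = 7.5
footage_gb = drive_speed_mb_per_s * transfer_minutes * 60 / 1000
print(f"~{footage_gb:.0f} GB per camera-hour")  # ~225 GB
```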
The post house is at the end of its life, and unless DITs realize this is their last chance, DITs will share in the slowly fading demise of post houses. The DITs who quickly point out that they don't have time to do certain tasks, or that it's not their job, are the prey of the post house; big post houses absolutely love the DITs who don't deliver things complete, correct, or compliant, because it enables them to charge production for the same services all over again - often at a premium! But the enabler DITs - the ones who have the desire and the skill set to provide the services that post does - are the threat to the post house and the white knight to the producer. Remember: in the changing media ecosystem, most post houses use equipment similar to what DITs use - only they are willing and able to finish the job. If you share similar tools, then it becomes more of a mental game of completing the task, not a physical one.
CONSIDER THIS:
In a matter of survival: focus on what you can bring to the production, not what you can't.
Focus on how you can protect the filmmaker, not protect yourself.
Focus on the technologies that elevate potential and possibility, not just copying what everyone else uses (there are lots of bad DIT habits spread across the world).
Focus on practicing and improving in the areas where you struggle, not ignoring them or passing them off to another person.
I have personally operated numerous jobs in order to learn the way and the truth, and from that experience, I've applied what I learned to some of the best DIT carts in the world. I've also done some of the hardest jobs in the world, and probably the largest jobs with the smallest footprint. And our systems are getting better, with 2 new massive upgrades in 2013. I love working with DITs - helping them make more money, provide more services, and satisfy the creatives they serve in a completely new way.
If you are providing all of the above, then you know what I'm talking about. Your job is a lot more protected when you can demonstrate you are more relevant than a dailies post house.
If you are doubting all of the above, then I wish you the best. It truly is a dilemma, and we can all agree it's a complex issue. But know that the post house is dying, and that it's hoping DITs make mistakes and refuse to provide services. The less DITs do, the more air in the post house's lungs. And while that may seem good for both the DIT and the post house, it's bad for the creative. The truth is getting out, and there are creatives who are experiencing life without a post house thanks to the talent of good DITs. By this, a new standard is being born, and I have watched it explode over the past 3 years - which is why this is all worth debating. But one thing is for sure: 3 years ago, on-set dailies were but a fraction of a fraction of services on set. Today, they are 100% of my dailies business and growing, with companies all over the world. If you think the DIT as a post provider is a phase that will phase itself out, then you are already less valuable than your employers think...but they'll figure it out soon. They always do.
| m |
michaelcioni · 12 years
The New Breed of Communicators
Think back to a significant or monumental moment in your life that you experienced with many others around you. Try to find a moment you can easily recall, perhaps one you talk about often, and remember where you were when you heard about it.
Are you thinking of a moment of monumental progress, like the Apollo 11 landing? Or are you thinking of something more destructive, like the September 11th attacks in New York and Washington, D.C.? As it turns out, when asked this question, statistics show most people tend to recall traumatic events rather than encouraging ones. In other words, when asked about profound moments in life they will never forget, people tend to recall disruptive events, such as the death of a rock star, over something encouraging, like their wedding day.
But the question I'm proposing is not about which end of the spectrum you choose to recall significant events from; rather, it is whether or not you even realize an event is significant while it's happening.
When the event was happening, did you have the inclination to stop and document it while you were experiencing it? Did you get out a camera and take pictures of the event or the TV screen? Did you record the reactions of others in the area or in your home? Did you write in a journal what happened that day and who was there? In other words, when a significant event is happening, are we typically able to realize that it is, in fact, profound at all?
For most people, the significance or impact of an event only tends to become clearer as the passage of time distances you from the initial event. Psychologists sometimes describe a version of this as traumatic dissociation. For most people, the greater the impact on your life, the greater the traumatic dissociation effect. Admittedly, contrasting circumstances produce varying effects on individuals, but one thing is fairly consistent: the average person is not necessarily "aware" of a moment of significance until well after the moment has taken place.
It's easier to quantify moments of massive trauma when they affect a nation or a generation, especially when heavily covered by the media. But what about circumstances of dramatic change that are not covered by the media? What about episodes that large groups experience over a long time period? What about instances that are happening "under the surface" of mainstream culture? I believe there is a connection between what is happening in digital cinema and the notion that most people are unaware of its pending effects. Without the clear realization that something significant is occurring, people, on average, are unable to prepare for or even react to a moment of significance...until either a lot of time passes or a larger group of people collectively agrees on its significance.
Imagine being in a large room with a lightbulb that was dimmed 1% every 60 seconds. It might be hours before you noticed that the room was slowly getting darker. A few people might notice when the ambient light level reached 50%. Others might not notice until the light was nearly gone. But no one would be able to detect the change after only a few minutes.

I believe the most dangerous element of our profoundly changing media ecosystem is that it is so subtle, so effortless, and so orthodox that for most professionals, its effects are virtually undetectable. I don't feel the word dangerous is an exaggeration in the least. Motion picture communicators are at the beginning of an era of change that will never be undone, and will never be outdone. To the most experienced professionals, the digital cinema transformation is, in fact, so traumatic an event that most people have yet to comprehend its significance. There is a divide in our professional culture, spanning every international territory, in which only a few are aware that the lights are dimming. This divide is creating more mechanical controversy over the creation of motion pictures than we will likely ever see. For those of us who are aware that the lights are dimming, the conviction behind technological resistance is becoming increasingly frustrating.

But I want to clarify that this is not necessarily a generational thing. My research shows that while technologically fluent people tend to be younger, that is not always the case. While there are many masters of the craft who are acutely aware of and in support of this transformation, there are actually very few industry newcomers who make arguments against the practices and protocols of progressive technical development. In other words, being older doesn't mean you lack the ability to grasp imminent change; it means younger counterparts will adapt with less resistance, if any at all.
Man’s ability to create technology is one of the most fundamental components of consistent progress.  Without routine technology development, talents, tools and techniques quickly become stale and obsolete.  While that sounds fairly obvious, the last ten years working in the motion picture business have shown me that rapid advancements in technology are not necessarily welcomed by all professionals.  For many, the technological side of creativity is liberating.  For others, it can represent a serious threat.  Because of this, I’ve tried to make education a normal part of my life.  In a given year, I give in the neighborhood of 150 lectures focused on creative technological progression to all sorts of demographics.  I have also done everything I can to be transparent in what I believe, why I believe it and how I do it.  -So transparent that in some cases, I’ve been criticized for “sharing too much with competitors.”  But I believe that the risk of sharing trade secrets is worth the gain in global education.  That being said, in some of these discussions I have unfortunately encountered people who make strong arguments against many aspects of technological development - particularly information technology (IT) components as they pertain to both production and post production.
But fear and resistance to technological change in creative outlets is not new.  It’s not rare and it’s not unique to the motion picture business by any means.  -It just seems new to generations of artists who are frustrated with (what many claim is) an unpredictable, unstable, unstandardized, undeveloped, unproven and un-led movement.
By the early 19th century, mechanized looms had been brought into England’s growing textile industry, where they quickly became an asset.  For centuries up until that time, skilled artists would hand-weave threads together to make anything from clothing to blankets and scarves.  But faced with the notion that a seemingly less-skilled worker was now adding value to manufacturing through the use of modern technology, creative artists felt they were being cheated out of their jobs and began to revolt.  Known today as “Luddites,” this group of artists began breaking into mills, destroying looms, burning materials and stealing from their employers in protest of producing textiles with machines instead of artists.
The Luddites were among the first groups in recorded history to experience what economists now call “Skill-Biased Technological Change” (SBTC).  Personally, I have witnessed a measurable increase in the Luddite mentality over the past 5 years, along with an increase in SBTC conditions.  While at first these conditions seemed to manifest themselves in what I would consider appropriate amounts, today I believe they are reaching a level at which (for some people) political agendas and (more significantly) personal legacies are at stake.  This means opponents of new technologies are becoming less comfortable because some of the introductory technology is being accepted faster, and thus the balance of power is starting to list.
Assembling all of this together, I believe it boils down to two key ingredients which are somewhat linked:
Traumatic disassociation affects the people in today’s professional industry who fail to recognize the significance of the changing media ecosystem
Skill-Biased Technological Change represents the people who resist and attack the change they are being forced to recognize
Referencing the past as an aid to predict the future, I submit people fall into one of three categories of professionals:
Those of us who believe democratization and technological evolution for creatives are part of designing the best future possible = (I’ll get to this title later)
Those of us who fail to recognize the benefits will still reap all the benefits once the progressive movement has completely overcome the traditional = followers
Those of us who choose to resist the benefits of change delay the potential of expedited progress and create a controversial environment for all parties = Luddites 
It is because of these groups that I feel today is perhaps the best time to be a working professional in the motion picture industry since its inception.  Teaching, learning, challenging, preparing and evolving are paramount to what we must do to succeed in this - and yes, especially to help those who are resistant to it.
One of the most common arguments I run across is the criticism that digital cinema and the subsequent file-based transformation is without rules, without standards and without best practices.  It lacks defined leadership, which forces professionals to make misinformed choices in directions that I believe will have detrimental effects in the future.  It’s even been likened to the extreme analogy of 19th century lawlessness in the form of “the digital wild west.”  Actually, I feel this is a fair criticism in many ways.  On the whole, the changing media ecosystem does lack the rigidity of more traditional approaches, and many people can cite bona fide examples in which progressive techniques fail more often than they succeed.  But for those of us who are a part of refining our craft, we are left with two choices: we can either yell louder than those who we feel are in error, or address the criticisms head-on in order to remedy them.  It’s a classic case of aimlessly criticizing the powers that be, or doing something positive to change their minds.
By far the worst thing we can do for the cause is to argue amongst ourselves.  Fruitless arguments over the technical betterment of creatives have been camouflaging the real issues for years, holding both parties back from necessary development.  Below are some examples of rampant arguments in the market that are not only counterproductive, but act as a distraction from the real issues at hand:
AVID vs. Final Cut
The actual issue here is not which software is better for editors, rather that editors use nonlinear editing technology over linear tape or film cutting systems.
RED vs. Alexa
The actual issue here is not which camera is better for cinematographers, rather that images are being captured at high resolution and high fidelity instead of on film or tape.
Tape Masters vs. Disk Masters
The actual issue here is not which format is better for archiving, rather that we are fixed on a digital direction instead of an analog one.
Compression vs. Non-compression
The actual issue here is not to suggest that non-compression is the best way for professionals to work, rather that advanced compression technology has yielded ongoing developments in lossless compression.
Resolve vs. Pablo
The actual issue here isn’t that expensive or cheap solutions essentially do the same thing, rather that color correction is all being done through computer-based tools instead of film negative cutting.
3K vs. 4K vs. 8K
The issue here isn’t that Arri or RED or Canon or SONY have different positions on sensor resolution, rather that all camera manufacturers are building single-sensor solutions that produce higher fidelity images than film.
Satellites vs. Terrestrial Distribution
The issue here isn’t which technology is better and for whom, rather that we are getting away from film prints and tape cassettes for exhibiting movies and television programs.
RAW vs. RGB
The issue here isn’t which one offers more color, contrast or control, rather that we are all in agreement to move away from tape and film for image acquisition and rely exclusively on file-based capture.
On-set vs. The Lab
The issue here isn’t to figure out which one is better for you, rather to show that some legacy infrastructure can be replaced by progressive tools, reducing reliance on large, expensive corporations for support.
Indie vs. Studio
The issue here isn’t that good movies are high quality and bad movies are low quality, rather that democratization is enabling people with less money to partake in shared technologies that generate images which compete with bigger budgets.
These are just a few examples of what people designate as battlegrounds worth fighting over when, in reality, there is a much bigger picture worth fighting for.  If we could harness all of these discussions and focus that energy on the bigger picture, I know we could build a new breed of communicators that could change the way the entire world tells stories.
Generals and politicians alike know that you need a plan before you engage in a conflict.  Perhaps the lack of planning has been the problem all along.  Perhaps it stems from the inability of many to recognize that this change is happening.  Or perhaps it’s because much of the opposition holds high office and people tend to “look up” for direction.  Whatever the case may be, I propose the beginning of a solution that starts with the most basic element.  -An element that has been missing from the beginning.  -An element that can be used to define a movement.  -An element that people of all ages can relate to.  -An element that people want to get behind.  -An element that represents those of us who know the lights are dimming.  I propose we start by giving a NAME to who we are: We are Technatives.
We are technological creatives.  Equally able to serve both disciplines.  We know we’re different from our mentors, we just didn’t know why.  We are fueled by advancing technology and creatively applying it not just to our work, but to our entire lives.  We embrace change, often design it ourselves, and weave it into the creative outlets that are ultimately used to tell our stories.  We reject the limits of being labeled either left-brained or right-brained.  We are able to shoot, edit, configure, troubleshoot, write, direct and develop.  We teach our children and our parents how technology can be used to improve the quality of life.  We track changes in the market and are eager to set new trends.  We seek out leaders in the industry who embrace technological creativity.  We celebrate updates, upgrades and new releases instead of delaying them.  We are comfortable with being uncomfortable.  We are early adopters.  We take risks.  We’re not afraid to be attacked.  We’re not afraid to be labeled.  When we discover something, we make our ideas a matter of public record.  We are inventing new business models.  We are most happy when we’re the first to do something.
So welcome, technatives.  The artists of tomorrow need us today to be the architects of the future.  The lights are dimming, but our acute senses are already crafting ways in which we can capitalize on change for the betterment of the art.  Steve Jobs taught us that.  Probably because he was the very first technative.  Steve made nerds cool and made artists nerds, all at the same time.  That’s where I want to be.  That’s where I feel normal.
| m |
michaelcioni · 12 years
Text
iPad going invisible
On March 14th, Apple made good on a number of rumors with the release of its 3rd installment of iPad.  While they don't officially call it the iPad 3, people in my circle are quickly defining it as such so as to distinguish it from previous iterations.
About 2 weeks ago I had the pleasure of hosting a really cool group of people who belong to the Independent Filmmakers of the Inland Empire (IFIE), led by Eric Harnden.  This was just before the latest iPad came out.  It's not uncommon for my company to host different groups and give tours of the facility.  I always think that sharing knowledge with others trumps keeping it to yourself.  That is why we pride ourselves as educators and do our best to maintain a transparent identity with the community.  After the tours, a few of my partners and I sat down to talk about tools, technology and techniques with the group.  One of the members of IFIE asked a great question: "What is it that bothers you most about the current state of tools and technology?"  A few of us gave answers because it's a rare opportunity to answer such a bold question.  I think my colleagues and close friends commonly agree that what bothers us the most is the way in which technology can be an obstruction.  It's becoming clearer to us that there are people in different parts of the industry who are using technology in ways to influence or even validate their position.  In some cases I feel these people are opportunistic; others are just part of a transition.  But the equation remains: 
As time goes on > Moore's Law applies > new technology develops > people's expectations go up > masters of the technology emerge > technology becomes innate > all people master it > need for masters goes away > and the cycle repeats
In other words, we are a society with an immense capability to acclimate.  I think every sociologist would agree with that.  And in that acclimation there are times when we need to be led, and times when we take the lead.  And in that transition to becoming a user/master/leader of a technique or technology, the need for help dissipates.  That is where some people use technology to create an overt level of complexity (in some cases unnecessarily), where the technology is literally designed to be a wall between the filmmaker and the end goal in order to justify existence.  Almost no one does this maliciously, but this is a growing problem that few are able to detect until well after a technology has matured.  After technological maturity and adoption pass the two-thirds mark, the need for help becomes harder to justify.  But it's not all the fault of the masters; in fact some people ask me routinely, "Where do you think my job went?"  I tell them the same metaphor: "Some hosts eventually accept the transplant."  Once a transplant is accepted, the host no longer needs a routine doctor or even medicine.  But this isn't all bad.  I actually think for many it's a sign of massive opportunity, because as the cycle repeats, the same early masters of the technology can re-emerge and become new masters and lead people through the next wave!  The mistake is when you ride the wave too long and wonder how you got so far out to sea.
And so comes quite possibly the world's best example of this: the modern Apple company.  This is a group of people who have found ways to repeat the cycle of architecting a new technology, then relying on masters to adopt it, then enabling everyone to adopt it, then making it so "normal" that the next iteration is accepted with less hesitation than the preceding cycle.  They build and build upon this until they are not just leading the market, they (in many cases) control the market.  An example of this is Meg Whitman, CEO of HP, who recently spoke about a new tablet device HP is designing.  She said it was going to be a great addition to and impact on the tablet market.  She was then corrected by someone who suggested, "It's not the tablet market, it's the iPad market."  Apple's iPad has a greater than 90% share of the tablet market...so if you want to get into the world of tablets, there is only one object to design against.  And Apple is not selling people short; they start out their latest iPad introductory video with a statement that I identify with, I believe in, I focus on, and I want to live by example:
"We believe that technology is at its very best when it's invisible."
Man, I love that.  I love realizing that the most complex item I own is the easiest to use.  Could there be a better resolution to the evolution of the computer, a machine we initially measured by its size in square feet?  Occam's Razor suggests that the simplest solution to a given problem is likely the best one.  When technology becomes invisible, everyone can leverage its benefits.  For some, that's an invasion of space.  For others, it's purely liberating.  And when I opened and turned on my new iPad, the cycle reset, fresh ideas poured in, and the technology continued to disappear.
3 years ago when the first iPad was announced, I was in a meeting with a post production supervisor for a movie we were about to start shooting.  I told her that 2 weeks after production started, the iPad 1 would come out and we would send a few to production for dailies viewing.  She looked at me and said, "Why would anyone ever want to use an iPad for viewing dailies?"  3 years later, iPad dailies review has become the preferred method of image delivery (at least amongst my circle).  The physically-limited endangered species known as "DVD" has been falling further and further behind the technological developments of file-based tools such as iPads.  And when rumors of a Retina-display iPad were in the works, I knew that was the technological breakthrough we needed in order to retire DVDs and tapes forever.
The A5X chip enables a massive number of pixels to be driven.  With a screen of 2048x1536, the iPad is able to display nearly the same resolution as a 2K film scan.  With 1 million pixels more than an HDTV, the new iPad is putting HDTV on notice (the pixel arithmetic is sketched after the list below).  Consider this:
In 2002 Apple effectively told the music industry, "This is how you're going to operate and why."
-No one believed them.
In 2007 Apple effectively told the telecommunications industry, "This is how you're going to operate and why."
-Few believed them.
I predict in 2013 Apple will effectively tell the broadcast industry, "This is how you're going to operate and why."
-Who will believe them?
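As promised above, here is a quick sketch of the pixel arithmetic behind those Retina claims. (The 2048x1556 figure is my assumption for a typical full-aperture 2K film scan; the post itself only cites the iPad and HDTV rasters.)

```python
# Pixel arithmetic behind the Retina iPad claims above.
ipad_retina  = 2048 * 1536   # new iPad display
hdtv_1080p   = 1920 * 1080   # standard HDTV raster
film_scan_2k = 2048 * 1556   # assumed full-aperture 2K film scan

print(f"iPad Retina:  {ipad_retina:,} px")               # 3,145,728
print(f"HDTV 1080p:   {hdtv_1080p:,} px")                # 2,073,600
print(f"Difference:   {ipad_retina - hdtv_1080p:,} px")  # ~1.07 million more than HDTV
print(f"2K film scan: {film_scan_2k:,} px")              # nearly the same pixel count
```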
If I were laying the groundwork for taking on broadcasting and migrating it to broadbanding, I would need three major things: 
1. I would start with releasing tools that enhance the viewing experience, not inhibit it (Retina).  Every person I show the new iPad to is literally floored by the pictures.  And I realize this is the first step in getting consumers acclimated to images that make high definition seem mediocre.
2. I would need to find a way to integrate that tool into the existing monitoring system (televisions).  Last year, Apple made a big push for AirPlay which allows you to easily and wirelessly push content from your computer or iPad to your television.  
3. I would need a single place in which to control and distribute all content without the limitations of terrestrial broadcast, satellite or cable (Apple Television).  While this hasn't even been announced yet, I predict that Apple is building a device that extends the power of interconnected device control (iPhone, iPad, MacBook, etc.) to a machine that is a hybrid of a television and a computer in one.  This cloud-centric device means you will control it with your phone (goodbye TV remote) and view what you want, when you want, through an Apple Television application similar to iTunes (maybe iViews).
Clearly Apple has a plan, and I think we're seeing some of the major foundational components required for its release.  But that doesn't mean we can't enjoy the isolated power of this latest chipset on the new iPad.
For me and my team, we took a moment to do some experiments with the display, the resolution and some high fidelity files that we had.  First, we took some resolution charts and loaded them on the iPad to see how well the display handled them.  This picture is not the file itself, rather a 600dpi, 6K (6299x4725) resolution file loaded onto the iPad, then screen-captured on the iPad and off-loaded.  The result is amazing.  I can see nearly twice as deep into the zone as on my 15" MacBook Pro.  
***Note: the images on the Tumblr blog are 1/4 the resolution of the source.  These frame grabs unfortunately do not exhibit the actual resolution of the iPad.  For the exact files themselves, I encourage you to download and inspect them using my FTP site.  You can find the images at:
webftp.lightiron.com
user: ipad
pass: resolution
[Image: resolution chart frame grabs from the iPad]
This file does the same, only it shows 2K all the way to the edge without any banding or aliasing.
[Image: additional resolution chart frame grabs]
But the more important question is: how does this affect dailies?  I decided to start from some really good source material.  I went to some high quality DI sources of recent projects (The Muppets, Dragon Tattoo, Shanghai Calling, The Social Network) and created some mock-dailies from this material.  2048-wide files are currently rejected by the iPad, so while 2K dailies are physically possible on the Retina display, Apple will have to enable Quicktime decoding of that resolution in the future.  Today we are limited to 1920x1080.  So I made some H.264 files at 6Mbps, 8Mbps and 12Mbps.  8 and 12Mbps looked identical, so I'm considering a 1920 8Mbps H.264 the "butter zone" for iPad dailies.  When I did this, I showed some of the content to a few of the DPs who shot these films (who were conveniently in-house) and the expressions on their faces were amazing.  Not one, but two people actually said "it looks better than the DCP!"  I agreed.  With the brightness, contrast, detail, vibrance, texture and pure resolution of this display, it literally takes on a characteristic all its own.  While H.264 is not without its limits, what the latest iPad does with a good source outweighs the limits of the codec.
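For anyone who wants to experiment with the same recipe, here is a rough sketch of how one might generate a comparable file with the open-source tool ffmpeg. (ffmpeg is my stand-in encoder - this post doesn't name the tool we actually used - and the filenames are hypothetical.)

```python
# Sketch: produce a 1920x1080 H.264 "butter zone" dailies file at ~8Mbps.
# ffmpeg is a stand-in for whatever encoder you prefer; filenames are examples.
import subprocess

def encode_ipad_dailies(src: str, dst: str, mbps: int = 8) -> None:
    """Scale to 1920x1080 and encode H.264 at the target average bitrate."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=1920:1080",     # conform to the iPad-friendly raster
        "-c:v", "libx264",
        "-b:v", f"{mbps}M",           # ~8Mbps was the sweet spot in our tests
        "-maxrate", f"{mbps}M",
        "-bufsize", f"{2 * mbps}M",
        "-c:a", "aac", "-b:a", "192k",
        dst,
    ], check=True)

encode_ipad_dailies("reel1_source.mov", "reel1_ipad_dailies.mp4")
```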
Here are some screen captures taken from the iPad itself.  These were taken from a Muppets trailer at 8Mbps.
[Image: seven frame grabs from the Muppets trailer]
In frame one, notice how banding artifacts are very low.
In frame two, notice how Amy's skin is soft and multi-colored, yet her fly-away hairs remain alias-free.
In frame three, notice the smallest of details and the gradients of the color separation.
In frame four, notice the deep red, the sharp blue and the controlled blacks that do not clip.
In frame five, notice how well low-light is rendered and displayed; even smoke is not breaking up, and details don't go too mushy on the sides.
By the above examples, I believe the latest Apple iPad will have a major effect on the film industry and, ultimately, the consumer exhibition industry.   This is the first portable consumer motion-display device that out-resolves your home theater system.  By a lot!  A mere 4 years ago, roughly 40% of the panel makers for HDTVs were still building 720p displays.  And if you have not seen this device in person, I assure you: people will not need to be told the difference, they will instantly experience it.  And what changes?  That's the best part: the technology in this device is invisible.  The migration to the latest iPad changes nothing about the ergonomics, the space, the touch and operation, and even the price change is minimal.  This device is as much a portal into how clear the future will physically look as it is a nail in the coffin of DVDs.  I am excited to begin offering dailies on this iPad immediately, and it has nearly zero effect on the process in which we produce dailies today.  And a device this advanced that automatically improves the quality of work and doesn't require one to change anything is, perhaps, what makes it, in fact, the most advanced.   How transparent is that?
|  m   |
michaelcioni · 12 years
Text
major league independents: Sundance 2012
It is no secret that independent film is the greenhouse for digital cinema trend development.  Anyone arguing otherwise has not experienced the petri dish of digital development that the independent community is responsible for.  And as much as filmmakers communicate with each other, it is in the independent community (a place where film-to-film in-breeding can often occur) that the story of "what is your movie?" is just as important as "how did you make your movie?"
My first Sundance Film Festival was in 2001, where I began to explore the roots of digital cinema through the stories of independent filmmakers who managed to successfully create a feature without the use of film.  Even at that time, less than one year out of college, my close friends and I were as interested in the tools that storytellers use as in the stories themselves.  It was at this time that Sundance took a unique position on the concept of digital cinema, which was in its absolute infancy 12 years ago, by opening up what they called the “Technology Center.”  Many patrons of the festival have no doubt visited the Technology Center, but in the early 2000’s it was the most concentrated place to experience the most up-to-date developments of digital cinema.  This place was so inspiring at the time that I visited it every day of the festival - sometimes with my friends, and sometimes by myself.  I caught myself talking to Sony about the F900 they showed off and to Apple, who showed Final Cut Pro 2.  XPri systems, DVD printers and duplicators, Zeiss DigiPrimes and Rorke Data RAIDs capable of playing back uncompressed media were all over the room, and they all had a role to play in the cultivation of a dramatic digital future.  But the most profound impact it had on me was when I came to the most liberating conclusion: the tools being developed for digital cinema were merely upgrades of the tools that my friends were already familiar with…
Could that be true?  Could the gap between the independent and studio worlds really be closed by digital technology?  Could it be that simple?
In an attempt to define the word “gap,” I would say it’s not the digital tools themselves that made independent movies any less independent.  The word “independent” to us didn’t mean it looked better or was better; it didn’t mean it was more or less likely to be bought, or more likely to be profitable.  The term “independent” to us and many other filmmakers simply meant “freedom.”  Freedom from the typical boundaries that studio projects were essentially forced to impose.  For us, the digitally-driven technological advantages simply made the cost of acquiring a project attainable: the money required for shooting and finishing on film was avoided and therefore put to better use.  All the while, the gap between independent films and studio films narrowed without reducing any of a filmmaker’s aspiration for freedom.
Sundance seemed to recognize digital cinema’s sizable potential. 10 years ago, festival director Jonathan Wells said:
“Sundance’s increasing attention to digital filmmaking is really a stamp of approval.  It gives recognition to this movement as viable and real. People from all of the [Hollywood] studios are in Park City, and they can suddenly see that digital filmmaking is something more than an amateur movement, something more than people running around with DV cameras.”
Wells was right.  And 10 years later it took people like him, early adopting companies like SONY and Apple and festivals like Sundance that supported both independent freedom and digital tools to prove it could be done.
Even though Sundance looks a lot different in 2012 than it did in 2001, I think a decade of developing attainable tools and educating the community has paid off, but the bulk of that work was built for, and in many cases by, the independent community.  It was this community that believed they could, in fact, improve exponentially and unearth ways to ameliorate the look and sound of their projects so they could syphon dollars away from the expense of high fidelity image acquisition and relocate them to more valuable areas of the budget.  After moving to Los Angeles in 2001, a group of my close friends and I dedicated our full-time employment to the cultivation of a company that delivered a professional independent resource to the community.  Over 150 independents later, our “digital version” of Sundance truly moved from an amateur movement to center stage.
Through this objective, 2012 marks the year in which Sundance has become a nearly digitally-exclusive film festival, with not only the near-elimination of all exhibition prints but the near-elimination of all acquisition celluloid.  While this doesn’t change the potential for films to be all that different from decades past, it does improve the possibility to showcase works that, without digital, may never have made it to center stage in the first place.  This is true of 2 films we had a hand in making that are being showcased at Sundance 2012: ”Goats,” directed by Chris Neil and photographed on the Arri Alexa by Wyatt Troll, and “I Am Not A Hipster,” directed by Destin Cretton, photographed on RED MX by Brett Pawlak and produced by Ron Najor (one of my 2001 Sundance Tech Center friends).  These films are the modern-day beneficiaries of a decade of digital cinema maturity.  
That cinema maturity can partially be measured by how these films are made, which is where the notion of becoming a “master of the craft” is evolving.  In the past, it was typical for post production infrastructures to offer a unique and often specialized “independent” pipeline designed to give independent films shared access to talent and tools that were previously reserved for studio-driven projects.  As the years went on, I find it has become the studio-driven projects that now share access to the tools and talent initially designed for independents.  This “technological serendipity” is the basis of one of my more popular theories of market development, which I call “Evolving Creative Democratization.”  In an ECD model, a bottom-up culmination of technological development takes advantage of disruptive technologies and matures over 3-5 years until they are eventually adopted by studio-driven projects and ultimately become the workflows, pipelines and even the standards for making any project - never mind the budget, the project origins or the intended destination.  After this bottom-up approach reaches the apex of the motion picture community, new advancements from the top then work their way back down to the independents, further fueling the cycle.
Major league independents like Goats and Hipster are perfect examples of the ECD model in that they represent a process of filmmaking that is no longer unique (or limited) to an independent film.  For example, Goats was shot on the Arri Alexa using the ProRes capture mode on SONY SxS cards.  ProRes was never initially designed to be used for acquisition; rather, it was a simpler alternative to uncompressed mastering, moving data at the same quality as an uncompressed signal at a fraction of the size.  Initially controlled by Apple and for use on Final Cut Pro machines, ProRes was a massive leap for independents, who could move and use data with the look of uncompressed HD and the performance of DV25 video.  With the release of the Arri Alexa, this model was essentially turned upside-down and ProRes became the point of capture for an estimated 90% of digitally acquired dramatic television shows in 2011.  The result for a film like Goats is that the workflows pioneered by independents for nearly a decade paved the way for compressed high fidelity RGB capture and led to its use as a new capture medium.  The workflow for a small independent film like Goats was proven by professional network television, enabling an independent to take advantage of higher-end professional development dollars and, thus, capitalize on them.  When people watch Goats on the big screen at Sundance, they are, in effect, simply watching a Quicktime movie.  On paper, for people who use the most common independent creative editorial tool, Final Cut Pro, this workflow is something everyone on the production can not only get behind, but more importantly, can understand.
Similarly, Hipster is a project that represents an even more powerful ECD model, in which a production with few resources was able to shoot future-proof 4K files that almost certainly out-perform the resolution potential the movie would have had on film.  The workflow of Hipster used ProRes files only as offline media, then relinked back to the original 4K source, and the film was color corrected to look like it was filtered using the iPhone application “Hipstamatic.”  This process used the same color science, resolution and even toolsets that we regularly use on Hollywood’s biggest films.  The workflow of Hipster is basically a mirror image of the workflow we used on 2010’s “The Social Network.”  From camera to creative and technical processes, there is essentially no technical difference between TSN, Muppets or Haywire and Hipster - and Hipster achieved the same fidelity and mobility as one of Sony’s largest films last year while sparing every expense.  For the smartest independent filmmakers, we find that mastery of the technology simply enables them to get the technology out of the way.  Independents who seek or are enticed to uncover a unique or specialized workflow often leave something on the table.  So I suggest to independents who understand these tools: avoid the temptation of implementing an alternative pipeline when “the alternative” might not be “the ideal.”  (A toy sketch of the relink logic follows below.)
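To make the offline/online concept concrete, here is a toy sketch of the core relink logic: match each event in the offline cut back to its original camera file by reel name and source timecode. (The data structures, paths and numbers are hypothetical simplifications for illustration - this is not any vendor's actual conform engine.)

```python
# Toy sketch of offline-to-online relinking: match edit events back to the
# original 4K camera files by reel name and source timecode range.
from dataclasses import dataclass

@dataclass
class Event:            # one cut in the offline edit (e.g., from an EDL)
    reel: str
    src_in: int         # source timecode in, expressed in frames
    src_out: int        # source timecode out, expressed in frames

@dataclass
class CameraClip:       # one original 4K camera file on storage
    reel: str
    start: int          # timecode of the clip's first frame, in frames
    end: int            # timecode of the clip's last frame, in frames
    path: str

def relink(events: list[Event], clips: list[CameraClip]) -> dict[int, str]:
    """Map each event index to the full-resolution source file containing it."""
    linked = {}
    for i, ev in enumerate(events):
        for clip in clips:
            if clip.reel == ev.reel and clip.start <= ev.src_in and ev.src_out <= clip.end:
                linked[i] = clip.path
                break
    return linked

# Hypothetical example: one camera roll, one edit event inside it.
clips = [CameraClip("A001", start=86400, end=90000, path="/san/show/A001_C001.R3D")]
events = [Event("A001", src_in=86500, src_out=86620)]
print(relink(events, clips))   # {0: '/san/show/A001_C001.R3D'}
```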
For people attending Sundance, Goats is a fantastic film.  It appears just about as professional as an independent movie can get (if there is such a thing).  If it came off any more professional, I’m convinced people would assume it was studio-backed.  With a leader like Chris and a film as well shot and told as this, Goats is the ideal representation of using the technology to work for you instead of letting the technology get ahold of you.  In contrast, I Am Not A Hipster is literally as independent as it gets.  It is clear that this film was made by a group of truly creative friends who simply put together a story and photographed it with a camera on their shoulder.  Examining these two particular titles is a testament to the difference in creative style and divergent stories.  But under the hood, these films represent the brightest of futures for what digital cinema offers.  For some films, it’s not about shooting something better, faster or cheaper.  For some films, it’s about being able to do it at all.
| m |
michaelcioni · 12 years
Text
The DI Difference of HAYWIRE
Last year, Light Iron had the opportunity to collaborate with director Steven Soderbergh and his top-notch post crew on his action-thriller, Haywire.  This is a director whose outstanding credits and generous personality make him an absolute pleasure to deal with, but what many people forget is how avant-garde he really is.  Back in 2007, Steven was the first true champion of 4K digital cinema as he shot 2 features back-to-back on the first RED cameras.  As I've come to understand it, these were cameras that literally ran on bare-bones builds.  We're talking the most minimal of functionality, and he managed to shoot two movies ("Che Pt1" and "Che Pt2") that still hold up to today's digital cinematography advancements...and he did it in the jungle.  In 2009 we collaborated with him and his team on "The Informant," and altogether Steven has shot 8 feature films on various RED cameras, even providing cameras to other noteworthy directors and projects when he's not shooting.  It's easy to forget the early days and these early leaders of the digital cinema transformation.  In 2007, shooting major motion pictures digitally was still new, and shooting them on files instead of tape was virtually non-existent.  I think it's critical to continue recognizing the people who slowly but surely helped pave the roads we so swiftly drive on.  If digital cinema had its own country, Steven would probably be President.
Steven's latest release, Haywire, brings to the screen some concepts that are still relatively "new" to digital cinema.  Haywire is a film shot anamorphic, and on a 16:9 native sensor, anamorphic is a bit trickier to deal with.  Using Hawk lenses, this film has an amazing texture that differentiates it from other digital images.  These 2x Hawks are actually designed for digital cameras so they can cover a 1.33 area of a 1.77 (in this case 1.89) sensor.  The de-squeeze on the image takes advantage of the higher pixel count of the RED MX and simultaneously allows for a reduction in overall resolution by not sampling the entire sensor.  When you see the pictures and evaluate what makes them look the way they do, it really is coming largely from the lenses themselves.  But there are two results happening here:
First, you are seeing a high resolution camera only sampling about 75% of its pixels.  That has an effect on the overall "precision" of the picture.  Secondly, the anamorphic glass means we have to de-squeeze the picture and stretch it from 1.33 to (essentially) 2.40.  Once we do that, we end up with a scope picture that literally takes square pixels from the sensor and makes them oblong rectangles.  The combination of less source resolution (4K down to 3K) and stretching to fill a 2.40:1 aperture makes for a beautiful texture that goes well beyond the typical lens flares.  I think this makes the images more accepting of natural and enhanced vignettes and concentrates focus to the center of the frame, as the edges tend to fall off fairly quickly.  A simple summation of the look of Haywire is this: optically driven texture.  People have been using optics and filtration to get looks for over a hundred years.  I always find it silly when people criticize an unfiltered digital image on modern spherical glass and say "it doesn't look like film."  -No kidding.
Altogether, for people that want to explore a more gentle, classic texture, Haywire is a great example and shows off a technique that is sadly not yet very popular on the big screen with digital cameras - partially due to sensor design and anamorphic lens availability.  But all that is starting to change as people see better looking digital cinema and the sensors increase in resolution and are better optimized for more textured glass like Hawks.
In the below stills, notice the vignetting, which is sometimes enhanced in DI, but it's sourced in the lenses themselves.  
Notice the falloff of focus from the center, even on the same focus plane.  This is a nice touch for achieving a classic look with a digital camera.
The Hawks, in combination with the smaller sensor area (3K) and filtration, make for a very smooth, textured image that looks organic, appropriately "imperfect" and colorful.
[Image: stills from Haywire]
But there is another component of Haywire that I found interesting on this project and it has to do with the way in which the post workflow was evolving at the time of the DI.
In my company, we tend to favor a workflow that utilizes DPX frames as a means for optimized image manipulation.  This goes for RED RAW files as well as P2 or XDCAM media.  It is common for people to discuss the pros and cons of RAW image accessibility in DI vs. pre-debayered image accessibility, and I've been vocal that native support isn't always a choice we have.  Sometimes it's the tool that controls the workflow, other times it's the client, and other times it's the nature of the project (a feature with 1000+ visual effects, for example).  But on Haywire, because the film was anamorphic, the director wanted to work at full resolution and still have access not only to the RAW files but to some color manipulation he could do himself.  It was at this time that the idea of exploring Resolve for the Mac came to our attention, and the decision was made to grade Haywire on a Macintosh.  There were two major components that made this idea very interesting:
1. We were excited about the prospect of pushing Resolve on a Mac in a large feature setting
2. We were interested in exploring the pros and cons of using native 3K anamorphic R3D files throughout the process
On Haywire, colorist Corinne Bogdanowicz used the latest release build of DaVinci Resolve for the Mac.  At the time it was still version 7, which was the first major version since BlackMagic Design acquired DaVinci in early 2010.  First and foremost, Resolve has come a long way since version 7 and that's important to this story, but on the other hand, I was excited and impressed that BlackMagic had decided to unleash this technology at a price point that made it accessible for anyone who has a computer ($1,000).  Our system included a 12-core MacPro, RED Rocket card, DeckLink HD card, NVIDIA Quadro FX 4000 and the Resolve panel surface.  Because of the way BlackMagic does business, most people can get these components fairly quickly and fairly easily, which is a massive pro in my opinion, as I am an advocate for accessibility of technology.  But the challenge of Haywire was to work at full resolution in an anamorphic setting on a 6 reel feature film.  This is where the real challenge came about that pushed Resolve and the Mac to the limits:
***Remember: EPIC and RED MX in anamorphic mode shoot an aspect ratio of 1.22:1***
1. We have a 3K anamorphic source R3D picture (2816x2304), which is roughly 6.5 megapixels
(this is roughly 3x larger than standard DCI 2K and HDTV 1080p)
2. We have to debayer every frame using the RedRocket card at 24 frames per second
3. We have to desqueeze the X (east and west) every frame using the graphics card to 1.5x = 4224
4. We have to desqueeze the Y (north and south) every frame using the graphics card to .75 = 1728
("Y" desqueezes are always 50% of the X)
This leaves us with the desqueezed resolution of 4224x1728.  That's the equivalent aspect ratio of 2.44:1 in 4K.  So even though Haywire starts in 3K anamorphic, it ends up turning into 4K, and the rendered grades, shapes, results, titles and visual effects all needed to be calculated in 4K.
5. Because the picture is 2.44, it is a little too big for cinema, so we have to crop about 1% off every frame and do a center extraction of 1% in order to make it fit the standard 2.40:1
6. Then we take every debayered, desqueezed, cropped and center-extracted frame and scale it down to 2048x1080 (2K projection on a Christie CP2000)
Each frame as it is played must go through every one of these steps prior to color correction.  Only after these calculations are performed can we start to color correct...
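For those who like to see the arithmetic, here is a minimal sketch of that per-frame geometry, assuming the 2.40:1 trim comes entirely out of the width (the post doesn't specify exactly how the crop is distributed):

```python
# Worked version of the per-frame geometry described in the steps above.
src_w, src_h = 2816, 2304                                 # 3K anamorphic R3D source (1.22:1)
print(f"Source: {src_w * src_h / 1e6:.2f} megapixels")    # ~6.49 MP

# Desqueeze: 1.5x on X, 0.75x on Y (the Y factor is always 50% of the X)
desq_w, desq_h = int(src_w * 1.5), int(src_h * 0.75)
print(f"Desqueezed: {desq_w}x{desq_h} ({desq_w / desq_h:.2f}:1)")  # 4224x1728, ~2.44:1

# Trim to 2.40:1 via a small center extraction (~2% total off the width)
crop_w = int(desq_h * 2.40)
print(f"2.40:1 extraction: {crop_w}x{desq_h}")            # 4147x1728

# Final scale to the 2K projection raster
print("Scaled for projection: 2048x1080")
```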
For a Mac, a medium-quality graphics card, a RedRocket and a Decklink, we pushed this machine to the edge.  Literally.  The amount of calculations over the course of 9-hour coloring days was so substantial that we had to remove the side panel of the computer and blow cool air into the machine while Corinne worked in order to keep it from overheating.  There was a point at which CTO Chris Peariso actually burned himself on the NVIDIA card when inspecting it close up!  If you look at the above list, this is a significant amount of cycles being taken up by a tremendous amount of on-the-fly geometry and playback, thus leaving fewer cycles available for color and image manipulation.  This is something that owners need to consider if they're exploring Resolve on a 4K anamorphic level with substantial VFX and plan on running it 60+ hours a week.  Granted, Rev8 has included massive improvements which were unavailable to Haywire at the time, and this was the first time we used Resolve on the Mac with so many pre-grading calculations (which are not common for every project).  But given these circumstances, were we disappointed?  Of course not!  This was somewhat of a controlled experiment that Steven himself wanted to explore.  But I can say that at the time we were fairly confident Haywire was pushing a MacPro-based Resolve system to its absolute limit.  And we were collectively in favor of seeing where it could go.
Since then, Resolve version 8 has been released and further enhanced the power that can be harnessed inside a MacPro.  BlackMagic has released a free version (Resolve Lite) and opened up control surface support to multiple systems starting as low as $1,000.  We use Resolve systems on our mobile post lab systems, OUTPOST, and I see Resolves used on set all over the world.  Resolve is a tool that is changing the face of image manipulation, and it's clearly picking up where Apple left off.  But what impact will Resolve have on the industry as time goes on?  Clearly there is friction amongst companies that paid $500,000 36 months ago for a system that I now run on my laptop.  Clearly there is controversy stemming from a company like BlackMagic transitioning into selling products that traditionally required support contracts.  And clearly there is optimism amongst a rapidly growing set of users who finally have an opportunity to practice and develop skills on the most popular coloring system in the professional world.  I believe all of these concepts ultimately point to positive things, but the industry will have to sift through a few more growing pains until this topic finally stabilizes.
Working with Resolve on Haywire represents a great style of DI pipeline and clearly demonstrates the level of evolving creative democratization the industry is experiencing.  I want to encourage people who are exploring Resolve on all levels to keep at it, because the evidence put out by BlackMagic over the past 3 years suggests they will surely keep at it, too.  And while there are major benefits to the power of working with R3Ds natively, there is a downside when faced with a tremendous amount of source footage in the conform (which can have an effect on GPU performance), as well as the cycles required to manage the complex arithmetic brought on by projects like Haywire.  It is for this reason that we still favor the Quantel Pablo system, which is capable of working with native files such as R3Ds, but we choose to pre-debayer them into uncompressed files so that the load is lessened and the computer spreads the tasks of geometry, color, scaling, debayer, decode, etc., across more processors (pic-stores), which improves overall performance, speed and reliability.  Unlike GPU-based systems like Resolve, pic-stores enable tasks to be farmed out to many dedicated parts of the machine, instead of a few.  It is also why, even on Haywire, we used the Pablo for the final mastering, titles and versioning outputs to different color spaces, resolutions and aspect ratios.  It goes to show there will never be one way or "the best way" to do any task.  As the tools evolve, the best users are the ones who learn and experience the pros and cons of every system and use the right tools on the right jobs, regardless (sometimes) of how many it takes.
But in the case of this subject, the most important thing to note is that Haywire was mastered in 4K (anamorphic) and was conformed and graded on a DaVinci Resolve using not much more than standard and available tools that fit inside your off-the-shelf MacPro.  That, on every level, is significant.
| m |
michaelcioni · 12 years
Text
4K+ Digital Intermediate
If you really wanted to make a 4K end-to-end movie, what would that entail?  
While my team and I have been working with 4K material for close to 4 years, we recently got the chance to perform the mastering of our first 4K internationally delivered feature film: The Girl with the Dragon Tattoo. 
In a way, it was really the summation of 4 and 1/2 years of planning that led to the 4K execution of this film and I am confident that enough audiences will feel the benefits of 4K to warrant a rapid international expansion.  But as people all over the world begin to plan for 4K, this post is meant to discuss some of the technical and creative challenges we faced that may help people cultivate sound workflows so that 4K technology itself doesn't get in the way of the creative process.  
First and foremost, I must thank the director, David Fincher, director of photography, Jeff Cronenweth, post supervisor, Peter Mavromates and assistant editor, Tyler Nelson for allowing my company and my team to collaborate with them on The Girl with the Dragon Tattoo. These individuals are truly masters of their craft and I believe they are sincerely pushing art and technology in a direction that needs leadership as well as fearless augmentation.  Regardless of how you feel about the past few movies this team has made, if you are going to make a 4K movie going forward, simply doing what David does is a great place to start.
OVERVIEW
The Girl with the Dragon Tattoo (GDT) is an end-to-end file-based feature film that represents much of the greatest technology available to us at this time.  This includes cameras, codecs, color science, software, hardware, projection and distribution techniques that I am confident have never been used altogether at this level and at this speed.  A good example of this is that when we were finishing The Social Network in September of 2010, GDT began shooting overseas in Sweden.  At that time, the newest RED camera, EPIC, had not been fully completed and was not ready for use as principal photography began on GDT.  By December, we started using the first EPICs on SONY's THE AMAZING SPIDERMAN (3D), which planned to shoot nearly 100% in Los Angeles.  With EPIC being "battle-tested" on SPIDERMAN, development on the camera and its stability took place mostly in Los Angeles while GDT shot on the RED One MX camera.  Due to this scheduling, approximately 2/3rds of GDT was photographed using the RED ONE MX camera and 1/3rd captured on EPIC after camera builds matured.
There are scores of incredible moments in GDT that I believe audiences as well as filmmakers are going to be talking about for some time.  David's talent has a way of rubbing off on people who admire his work, and this film is full of these moments.  However, I wish to highlight some noteworthy components which fashioned a technical and creative blend that I believe all filmmakers should consider...or at least be aware of.
4K DATA
GDT is approximately 230,000 frames long.  Due to the amount of visual effects in this film and the timeline for editorial, VFX, conform and DI, the film was debayered to 10bit DPX files in 4K and 5K respectively, in a 2:1 aspect ratio.  RED MX files came in at 4352x2176 and EPIC files came in at 5120x2560.  These files averaged out to approximately 45MB per frame.  For those of you doing the math, this comes to a little over 1GB per second of data.  It also means that much of the DI was done at 5K, not 4K - roughly a third larger than 4K.  I was recently asked in an interview, "What are 3 things people should be concerned about when preparing for a 4K future?"  The answer is simple:
1. Playback
2. Playback
3. Playback
Most feature releases with heavy visual effects pipelines are going to need to do everything uncompressed.   This is not the only choice people have, but it is the best choice when dealing with a 50%+ VFX ratio.  It's also an ideal way to work when there are numerous parts of the process being worked on simultaneously.  But many people get concerned that they will need a lot of space for 4K, which isn't necessarily the case.  With files that exceed 1GB per second, it's not all about capacity.  Today's market price for a single gigabyte of storage space is around $0.20 USD.  So it is likely that many people have enough storage to easily hold a 4K movie in its uncompressed state.  In the case of GDT, the uncompressed elements came to approximately 55 terabytes of total storage.  On the whole, that's not that much storage - probably only around $25,000 worth of actual drives.  But what many will need to consider is the speed at which these drives will need to play back reliably.  At 1GB per second, drives need to be configured in two ways:
1. PLAYBACK DRIVES need to be optimized for a minimum of 1.5 gigabytes per second sustained playback per stream.  Drives need to be RAID protected (which slows them down) and need to be large enough that no more than 60% of them are full at any one time (or they slow down again).  Plus, when dealing with the scale and schedule of a 4K DI, drives need to be configured to play more than one stream or version of the film simultaneously.
2. SHUTTLE DRIVES need to be optimized for a minimum of 500 megabytes per second sustained transfer rates.  After a DI is complete, there are many agencies that need copies of the finished files, which have to be delivered in a timely manner.  FireWire or eSATA are not viable for transferring because their bandwidth limit is far slower than considered acceptable (eSATA tops out around 300MB/s, roughly 3x slower than real time).  Most of our transfer times on projects are scheduled to meet REAL TIME requirements.  However, with today's drive technology, a series of small portable disks cannot currently achieve 1GB per second, so we have to settle for 500+ megabytes per second, which is as good as we can do right now.  This means transfer times are approximately 2x real time, which is slow, but manageable.  Li and the GDT team worked with MAXX Digital, who helped optimize small shuttle SAS drives we call "shoeboxes" that enable us to move data at around 600 megabytes per second.  At nearly 2/3rds real time, these small shoeboxes were used to move data to and from Light Iron as well as various other vendors dealing with the film, on a 1-reel-per-shoebox configuration.  (The math behind these figures is sketched below.)
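For readers doing the math along with these specs, here is a quick sketch of the figures quoted above (pure arithmetic on the numbers already given; the note about what fills out the 55TB is my assumption):

```python
# Back-of-the-envelope math for the GDT numbers quoted above.
frames, mb_per_frame, fps = 230_000, 45, 24

rate_gb_s = mb_per_frame * fps / 1000
print(f"Playback rate: ~{rate_gb_s:.2f} GB/s")        # "a little over 1GB per second"

one_pass_tb = frames * mb_per_frame / 1_000_000
print(f"One uncompressed pass: ~{one_pass_tb:.1f} TB")
# Presumably versions, VFX pulls and intermediates push the total toward ~55TB.

print(f"Runtime: ~{frames / fps / 60:.0f} minutes")   # ~160 minutes

shuttle_mb_s = 600                                    # SAS "shoebox" speed
print(f"Shuttle transfer: ~{rate_gb_s * 1000 / shuttle_mb_s:.1f}x slower than real time")
```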
[Image: Light Iron 12TB SAS shoebox drive]
Putting this into practice, as we were getting down to the final push, it was common to have 2 reels of the film actively playing back, 1 reel of an output being QC'd, and another reel being transferred.  This means our collective network was peaking around 4 gigabytes per second.  Li post producer Katie Fellion prepared the facility ahead of time by implementing techniques with CTO Chris Peariso so that 4 gigabytes per second would be achievable.  My advice to the community is to perform benchmark tests well ahead of time so that grading, QC and transferring are not caught in a network "tug-of-war," and each of these steps can be executed without inhibiting the work in the room next door.  
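A rough sanity check of that peak, using per-stream rates assumed from the numbers in this post:

```python
# Rough check of the ~4GB/s network peak described above.
# Per-stream rates are assumptions drawn from the figures in this post.
playback_gb_s = 1.08    # ~45MB/frame x 24fps, per active stream
transfer_gb_s = 0.60    # SAS shoebox shuttle speed

peak = 2 * playback_gb_s            # two reels playing back
peak += playback_gb_s               # one reel being QC'd at playback speed
peak += transfer_gb_s               # one reel transferring to a shoebox
print(f"Concurrent demand: ~{peak:.1f} GB/s")   # ~3.8 GB/s, peaking around 4
```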
When most films are in DI, it is common to complete each stage and move the film from point A to B to C.  With GDT, we were aware ahead of time that the delivery might not allow for such time.  So as the film was being finished, we created an assembly line - much like a car assembly line - in which finished DCI P3 reels would be output, converted through 32-vertex cubes for film record, QC'd and transferred, all at the same time.  With a 9 reel film, a common step-by-step example of this data assembly line looked something like this, all happening at once:
• Reel 5 in the color assist bay with Monique Eissing being prepared for the final color pass (Theater 3 using Quantel Pablo #2)
• Reel 4 undergoing the final color pass in the premiere color bay with the client by Ian Vertovec (Theater 1 using Quantel Pablo #1)
• Reel 3 being converted from DCI P3 to film log (using a 12-core MacPro)
• Reel 2 being transferred to a shuttle shoebox drive (using a 12-core MacPro)
• Reel 1 being QC'd for the conversion from P3 to film log (Theater 2 using DVS Clipster)
As the world continues to become more comfortable with 4K, post production teams will need to increase not only the capacity of their networks but, more importantly, the bandwidth they share amongst users.  BlackMagic makes a great free speed test tool that helped us evaluate our system performance and address potential bottlenecks in the process.  As 4K becomes more and more routine, I recommend building an assembly-line plan and benchmarking all of your I/O speeds in each phase of the process.  This will help you find cracks in the system and address them instead of investing in drives that you don't need.
http://itunes.apple.com/us/app/blackmagic-disk-speed-test/id425264550?mt=12
[Image: Blackmagic Design Disk Speed Test]
4K FRAMING
On The Social Network, David utilized a technique that allowed him ample choices for reframing and stabilization by capturing 4K and 5K images with a 10% look-around pad that was pre-framed in camera.  In the case of GDT, because EPIC was used, the look-around image could be increased to roughly a 20% pad.  In the past when shooting film, it was common for people to frame differently from what the viewfinder or gate was photographing on the original negative (hence one of the needs for shooting framing charts).  With HD video cameras, there is not enough resolution to accomplish this, and cameras typically displayed what they were recording with no look-around area or padding.  The result was more of a "WYSIWYG" (what you see is what you get) in terms of limited framing and limited resolution.  With EPIC, I believe David's technique of a 20% look-around is something filmmakers should consider on all projects.  The ability to take advantage of ample look-around space becomes a key component in reframing and stabilization (techniques being adopted by more filmmakers and more departments), but this padding also allows for a much better transition to varying aspect ratios in different deliveries.  For example, GDT was photographed in 5K 2:1 and the theatrical release aperture is 2.40:1.  But with the 20% padding, the same plate could be used without the 2.40:1 matte, which made it tall enough for the 1.78:1 versions required for different broadcast deliverables.  This means the film did not have to go through the typical 1.78:1 blow-up in order to fit 16:9 correctly.  The result is a visible improvement in image quality for broadcast deliverables, which is increased further by shooting 5K for 4K.
On top of this, the center extraction turned out to have (what I consider) an accidental benefit: by shooting 5K with a 4K center extraction (or "5K for 4K"), images shot on the EPIC undergo a subtle texture change.  The texture becomes a bit smoother because the Bayer-pattern pixels are not scaled down from 5K to 4K; they are instead cropped to 4K.  While the image still clearly has a 4K feel, the subtle difference between a scale and a crop presented what I consider an aesthetic benefit that came somewhat unexpectedly.
Below is the framing chart for the EPIC on GDT, created by AE Tyler Nelson.  Because EPIC's native resolution is so significant, one can create a custom extraction that suits the project.  There are numerous technical benefits: enhanced look-around, lens focal lengths re-line up, older or wider lenses do not vignette, and so on.  But the slight difference in image texture due to the crop is the one I think many people will like.  There are many ways to change the image texture of a camera through optics, but when that option isn't available or ideal, consider this technique, which is responsible for the pixel texture you will see in 4K and 2K projections of the film.
GDT EPIC CENTER-EXTRACTION FRAMING CHART
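To make the framing math concrete, here is a small Python sketch that computes the look-around pad of a center extraction.  The numbers below (a 4352x2176 plate with a 3600x1500 working frame) are the GDT figures quoted in the interview later in this post; the function itself is purely illustrative, not anyone's production tool.

```python
def center_extraction(full_w, full_h, crop_w, crop_h):
    """Report the reframing room left around a centered crop."""
    return {
        "offset": ((full_w - crop_w) // 2, (full_h - crop_h) // 2),
        "h_pad_total_px": full_w - crop_w,   # total horizontal slack
        "v_pad_total_px": full_h - crop_h,   # total vertical slack
        "width_used_pct": round(100.0 * crop_w / full_w, 1),
    }

# GDT: 4.5K plate with a roughly 80% center extraction.
print(center_extraction(4352, 2176, 3600, 1500))
# -> offset (376, 338), 752 px of horizontal slack, 82.7% of width used
```

Any reposition, stabilization or aspect-ratio change has to fit inside that slack, which is why the size of the pad is a creative decision made before the first frame is shot.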
4K TOOLS
Sometime during the long nights of delivering the film, I was passing by my server room on my way to deliver a shoebox drive to the vault.  At one point I stopped, and something hit me that I hadn't realized before.  I was staring at all the blinking lights in the doorway of the server room (blinking lights represent drive read and write access; the more blinking, the more disk activity), and it struck me how small and powerful the overall arsenal of tools has become in order to produce the type of content we're producing these days.  In other words, what struck me as remarkable is that the hardware infrastructure required to move, manage and manipulate all that 4K data was manifested right in front of me in those tiny blinking lights.  Like a bulldog, this small array of technology was all it took to push movie after movie out: 2D, 3D and 4K, version after version after version.  I was recently at a post production facility auction and watched tens of millions of dollars of 5-10 year old infrastructure go for a fraction of its original price.  After seeing powerful and once-popular equipment practically given away at the auction, it was clear that the tools that lost all of their value were tools that performed a single task.  This small array of computers is exactly where the name "Light Iron" comes from: the blending of light and big-iron systems together to stay nimble, remain efficient and manage the simplest and most complex tasks respectively.
With little exaggeration, the GDT DI required just a few main components, pictured below: 2x Quantel Pablos, 2x DVS Clipsters, 2x 12-core Mac Pros and a few dozen terabytes of storage optimized for multi-stream 4K playback.  This is not the only way to do a 4K DI, but my advice to people exploring 4K DI is to invest in systems that perform dozens of tasks and lower your reliance on tools that are powerful but specific to a single job.  Most super-computer systems people can buy today are capable of numerous tasks and cost less than single-task systems did 5-10 years ago.  Our infrastructure is a good example of one way to get the job done, which is why we started with a single set of this gear combination and continued to duplicate the tools as the company grew.
LIGHT IRON SERVER ROOM DOORWAY
4K LOOK
About midway through the DI of GDT, I went into the theater to talk to Ian and he told me to sit down because he wanted to show me something.  He pulled up some sections of the film which had only recently received their first color pass, told me to watch a scene play, and said to pay close attention to the skin tones.  "There's that term again," I thought, "skin tones..."  Skin tone is a phrase I hear thrown around all over the place (sort of like workflow, which I'm also sick of hearing), and I find it has become the latest flagship criticism of what makes a poor digital camera image.  In the past there have been numerous (what I call) "flagship criticisms" of digital cameras, such as incorrect frame rates, low resolution, shutter type, deep depth of field, weak dynamic range, limited sensor technology, and so on.  Today's flavor of digital criticism just happens to be skin tone, and tomorrow it will be something else.  Watch the goal posts move...
Anyway, Ian has always been able to get good skin tones out of numerous cameras, but this was something different.  Much of what goes into good skin tone does, in fact, start with the camera.  Greater bit depth and more resolution certainly help, but it also comes down to the exact range in which the skin is initially exposed.  This critical range, perhaps just at or over key, enables an image with massive bit depth to undergo significant and more precise separation.  This pushing and pulling of the image at the perfect exposure in the exact area of skin allows a DI artist to really find a way to reveal what is in the skin.  Ian said to me, "Magazines have convinced a lot of people that good skin tone is about concealing detail...sometimes to the point of a blatant blur.  But beauty in faces shouldn't be about concealing, rather revealing."  Ian went on, "When I work on a film, I challenge concealment (in a sense) by attempting to reveal everything in detail."  Ian doesn't mean he wants to show off wrinkles or scars, but the more you see of someone's true face, the more their face can be read, and thus the more realistic, and perhaps better, they look.
So Ian played a few scenes that demonstrate this very well.  In reel 8 (about 2 hours into GDT) there are some good examples of this technique in a few extreme closeups.  These shots are truly full-screen faces on their sides, and Ian spent a lot of time massaging the sequence to pull out as much color and separation as he could.  Ian said to me, "When you look at your hand you will see the real nuances of what makes up true skin tone.  Human skin has yellow, red, green, blue, brown and subtle colors in-between.  I worked hard to isolate this outer beauty, brought out as many of these subtleties as I could, and let the millions of colors in their faces reveal exactly who they are."
Doing this isn't as simple as an actor having good skin.  In DI, we need a color pipeline, a color tool and color talent designed to identify, manage and preserve this level of color separation.  For more than a decade, people have been using film emulations and lookup tables as transforms from digital images to a film look.  But what we have been able to observe is that if you look at things through a film LUT, you are blending some colors together.  Several distinct digital subtleties may go into a film LUT and come out the other end as a single color.  This doesn't mean you couldn't get good skin tones until now, but we believe it does mean today's level of precision has improved over film and that the bar, once again, is raised.  This is one of the best reasons to let files behave natively instead of filtering them through a film LUT transform.  GDT is one example of Ian's work that followed this design, and complex, controlled and revealing skin tones are a direct result.
4K DISTRIBUTION
GDT is a film that truly takes advantage of the times we live in.  While GDT is not at all the first 4K film, it is the first 4K film to be seen by mass audiences, thanks to very recent developments from a number of agencies and technologies worldwide.  The exact numbers are difficult to quantify, partially because they are changing so quickly and partially because they are managed by numerous companies.  But the second half of 2011 showed a tremendous leap in the long-promised 4K digital cinema rollout, and GDT happens to be perfectly timed to take advantage of the progress.  Some estimates suggest 60% of the screens showing GDT worldwide are digital.  That means audiences may, on average, be more likely to see it digitally than as a print, even in smaller cities.  Some publications predict the rollout may reach its target of 75% digital conversion in North America by mid-2012.  This is some of the best news for cinema in general and is going to do for theaters what HD did for television in 2004.  SONY produced GDT and, as a technology company, made the right choice in preserving this film for the future.  When a project is green-lit, not everyone is thinking about 4K and the way digital films will look in the future; that's why shooting 4K is still a relatively new concept.  And it is clear from watching the industry that some people are realizing the impact of 4K in capture and others clearly are not.  Once people commit to shooting in 4K, the next phase is to convince them to do the VFX in 4K, which is difficult and often the main barrier to 4K finishing.  The phase after that is to master everything in 4K, which is rare but happening more and more.  The last and most difficult phase is to take the entire package and distribute it in 4K on both DCPs and 4K-sourced prints.  Alongside the filmmakers, SONY clearly recognized the importance of this need and has been pushing 4K on a lot of its Columbia films slated for 2012 release, including The Amazing Spider-Man.  SONY Professional has reportedly sold approximately 17,000 4K projectors, which are being installed this year and next.  Of 45,000 screens in North America, it is possible that over a third will be 4K by the time SONY is done with the installations in 2012.
What all of that means to me is that the creativity behind the crafting of these masterpieces can finally be seen by audiences without excuses.  Each of us as filmmakers, whatever our department, needs to consider the impact that 4K digital cinema has on what we do for a living.  For many, 4K has become a criticism because of how it impacts the process.  But I believe a serious impact like 4K gives us the opportunity to measure ourselves, and therefore find motivation in how we change instead of asking whether we should change.  4K should change how we use makeup.  4K should change how we dress a set.  4K should change how we perform, direct, shoot, edit, affect and manipulate, because we are no longer able to hide behind the imperfections of an exhibition format long overdue for extinction.  If the criticism is that "digital shows all," then I am totally for it.  If the next iteration of your craft reveals more, then learn how to use it.  I am a firm believer that it is unwise to change something for the sake of change.  But I strongly believe in radical change when the change is unquestionably superior.  Digital 4K capture, 4K effecting and 4K distribution is the 1-2-3 punch that movies have desperately needed ever since digital tools stepped into the ring.  Fifteen years after the early experiments in digital intermediate, we finally have the tools in place to do something that motion picture film was never able to do before:
Until now, audiences were given copies of copies of copies in order to see a movie.  A four-point variation in color balance per channel, per reel was considered acceptable.  Color balance variation, softening, distortion over time, high-speed printing and lower-cost release stocks all contributed to making the mass distribution of great films a second-rate version of the source at best.
Today, for the first time ever on this scale, thanks to more than 5 years of infrastructure and development of end-to-end 4K, mass audiences will see pictures from The Girl with the Dragon Tattoo looking as good as they did to the filmmakers who created them.
It's about time.
| m |
michaelcioni · 12 years
Text
DRAGON TATTOO 4K DI WORKFLOW OVERVIEW
Thank you to Debra Kaufman, who conducted this interview about our recent 4K DI workflow on "The Girl with the Dragon Tattoo."  On this film, the director and editorial team presented a series of new and interesting challenges, which enabled all of us to explore territory we had not been in before.
That last sentence, all by itself, is exactly why we do this.
Debra Kaufman, Santa Monica, California, USA.  ©2011 CreativeCOW.net. All rights reserved.
Article Focus: Re-teaming with director David Fincher after their successful collaboration on The Social Network, Michael Cioni and the team at Light Iron built 5K workflows for real time, full resolution post for Fincher's The Girl With The Dragon Tattoo. The running time of the 4K print is 2:38, creating a data size larger than six 2K features combined. Cioni and Light Iron co-founder Ian Vertovec spoke to Creative COW's Debra Kaufman about how working that way in real time is even possible, working with David Fincher, and what frame sizes larger than 4K mean for all of us.
Michael Cioni, CEO of Light Iron, is a champion of 4K data-based workflows.  Prior to starting Light Iron, he co-founded and built PlasterCITY Digital Post, a desktop-based post production facility, in 2003.  Michael has served as a digital intermediate supervisor on hundreds of feature films, and provided 2D and 3D data-centric post services and support for many film and TV projects.  As a founding member and instructor at REDucation, RED Digital Cinema's training program, Michael is a strong proponent of empowering clients through education.  He sits on the Board of Directors of the Hollywood Post Alliance and Filmmakers Alliance and was an adjunct faculty member at USC's Annenberg School of Journalism.  Ian Vertovec, co-founder of Light Iron and PlasterCITY Digital Post, is a supervising colorist.  In addition to many music video and commercial credits, Ian has been colorist on numerous feature films including The Social Network, Goats, and Street Kings 2.  Cioni and Vertovec spoke to Creative COW's Debra Kaufman about what it takes to work with full-res files larger than 4K in real time (on multiple workstations, no less), working with David Fincher, and what frame sizes larger than 4K mean for all of us.
MICHAEL CIONI: To describe what Light Iron did on The Girl with the Dragon Tattoo, I think the best place to start is our new facility in Hollywood.  When we mastered The Social Network last year, we didn't have a facility, so we worked at RED Studios.  When planning the layout of our new facility, we wanted to build it so that individuals like David Fincher would feel creatively and technically comfortable.  Looking toward the technological future as David does, and embracing the demands he places on facilities, we had to get down to the tiniest details to build this facility to the highest fidelity.  We wanted to build a future-proof facility with, for example, a network capable of moving multiple gigabytes of data per second, 4K projection, and non-perf projection screens, which look a lot better than the more typical perforated screens.  Most importantly, our facility offers a fully viable 4K pipeline and the ability to master in 4K and beyond.

Dragon Tattoo was shot on location in Sweden over 167 days, using the RED MX and EPIC, which shot 4.5K and 5K resolutions respectively.  The shoot produced 483 hours of footage; they printed 443 hours, which translates to over 1.9 million feet of film in 3-perf.  This is among the largest 4K movies ever delivered, if not the largest.  At 2 hours and 38 minutes, it consists of almost a quarter of a million frames at 45 megabytes each.  The post team from Fincher's The Social Network reunited on this picture: editors Angus Wall and Kirk Baxter, assistant editor Tyler Nelson and colorist Ian Vertovec.

IAN VERTOVEC: The Girl with the Dragon Tattoo is the type of film we built the facility for.  It's a purely data-centric movie with a very progressive workflow.  A lot of people who design 4K equipment benchmark it at 4096; once it gets larger -- Dragon Tattoo was done at 4352x2176 -- things tend to slow down.  This is why we initially designed Light Iron for a larger-than-4K pipeline.

The facility was also designed to be totally data-centric.  We're working very traditionally in the DI theatre with the colorist, the cinematographer and the director all looking at the output of our 2K Christie DLP projector.  We did all the color correction off that 2K Christie and then viewed the DCP on a 4K Barco, which is in the same projection booth.

CIONI: A common question I hear is, are we a 2K or 4K industry?  Most people say we're 2K but we're going towards 4K.  That is to say, the technology is in transition.  To do a 4K movie now we temporarily need both projectors.  The 4K technology hasn't matured enough to use it exclusively.

VERTOVEC: There are subtle differences between how each projector renders the image, like the subtle differences between plasma and LCD displays.  Even when the content has been captured at 2K resolution, it looks sharper in the 4K projection.  We find the Christie 2K projector still has the deepest blacks and the best contrast.  I haven't seen any digital projection that can beat the Christie 2K projector in that category.  But the 4K projector is the only way we can look at every pixel.  We also want to know, if there is some noisy shot where we're pushing the limits of the exposure, how it'll look at 4K.  It's diversifying how we view the material and better informing us overall.
CIONI: It's also becoming very popular for audiences to see 4K projectors even though they're not seeing 4K content.  Sony has sold 17,000 4K projectors, and several theatre chains have stated their intent to switch to them.  2K content looks better on a 4K projector because the distance between pixels is reduced, so the perception of higher resolution goes up because there's less negative space.  Dragon Tattoo was shot one-third with the RED EPIC and the rest with the RED MX; these are extremely quiet cameras with very high signal-to-noise ratios, so they scale really well.  So, although Dragon Tattoo will be released in 4K, it's worth noting that 4K-sourced projects that master in 2K scale up well to 4K.

We're also doing tests for another film we're starting soon, which was shot 3K RAW with the ARRI Alexa.  We did the blow-ups to 4K for a 4K DCP output and it looks amazing.  New content can handle the blow-up better because today's cameras start at greater pixel counts and are much quieter.

VERTOVEC: David Fincher is a very post production-conscious director, so he has a very strong post production team that manages the dailies internally.  Because they're so post conscious, they don't rely on us for front-end services.  They only relied on us for color correction and finishing.  I think one of the most powerful techniques was an intentional center extraction from the RED footage.  The actual frame was probably 75 to 80 percent cut out of the center of the whole image they captured.  We color corrected the full 4.5K plate, but only 3600x1500 made up the actual frame.  We have almost 1,000 pixels horizontally to do repositions, stabilizations and blow-ups.  David was able to come into the DI suite, look at a shot and then say, "...zoom in a little bit" or "pan left" without any resolution penalty.

This is the image extraction chart that Michael and Ian used to show how they did a center extraction from the RED file.

CIONI: This is a good way for people to think about shooting with high resolution data.  With tape it was typical to shoot the full aperture, or almost the full aperture, and go to post from there.  With high resolution cameras, people capture the full resolution.  From David Fincher's point of view, he had enough resolution to spare and used it as a creative tool to adjust the framing with more precision in post rather than on set.  I think there'll be a trend people want to follow: you shoot high resolution full aperture, but only intend to use 75 or 80 percent for finishing.  Some people think that makes sense for 3D, to compensate for convergence.  But David is saying, why doesn't that make sense for 2D as well?  There was no resolution penalty, and we didn't scale down as much as you normally would, so you won't feel like the film is blown up.

VERTOVEC: There is also another benefit.  Editorial did a large number of split screens; Angus and Kirk pick the takes they want for the best performance and Tyler builds a split screen.  Sometimes there would be three- or four-way splits.  One of the reasons you need that center extraction is to match all the plates together.  The bird's-eye view of the workflow is that I talked to Tyler and we planned to use what they shot on set, unless we needed to go back and re-bake it at different ISO settings.
We get full reels at this 4.5K resolution from Tyler and sub-clip them out into shorter DPX sequences, so the conform in our Quantel Pablo refers to the original camera source timecode.

There really are no more standards in terms of frame rates or frame sizes.  For the longest time, the DI was only 2048x1556 and people designed tools specifically for that, but with data you can have any frame size and any frame rate.  I think our non-standard resolution/extraction combination is the wave of the future.  Post people and manufacturers have to be thinking in those terms.
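The bookkeeping behind that sub-clipping idea comes down to giving each extracted DPX sequence a start timecode matching the original camera source.  Here is a toy Python sketch of the frame-to-timecode math at 24 fps; the clip name and start timecode are hypothetical, and the real Sub-Clipper application described below was an in-house Light Iron tool whose internals are not public.

```python
def frames_to_tc(frame, fps=24):
    """Convert an absolute frame count to non-drop SMPTE timecode."""
    f = frame % fps
    s = (frame // fps) % 60
    m = (frame // (fps * 60)) % 60
    h = frame // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def tc_to_frames(tc, fps=24):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return (h * 3600 + m * 60 + s) * fps + f

# Re-stripe a sub-clipped DPX sequence so its first frame carries
# the source clip's original camera timecode.
source_start = "14:23:10:08"  # hypothetical camera start TC
base = tc_to_frames(source_start)
for i in range(3):
    print(f"shot_042.{i:07d}.dpx -> {frames_to_tc(base + i)}")
```

With that mapping in place, a conform that asks for source timecode can be pointed at the DPX sequence as if it were the camera original.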
CIONI: All those split screens created an issue, however, that we had to deal with.  The connection to the original information for each shot gets lost because, when you do all those split screens and then conform and render them out, they don't have original file names.  What's the timecode of a shot if it comes from four different takes?  That level of metadata all goes away.

To solve this problem, Light Iron's Stevo Brock built an in-house custom app we call Sub-Clipper that allows the Pablo to treat the 4K DPX files as "camera original" footage.  Because of the amount of visual effects, RED RAW files cannot be used in the DI.  However, the cut may change well after VFX are already processed.  With Sub-Clipper, all editing changes and VFX shots can smoothly ripple through to the composited 4K DPX files on the Pablo, even though the offline editorial list refers to an R3D source.  It allows the final conform to automatically be in perfect sync with editorial, up to and including the latest revision.

Without Sub-Clipper, it's like having a picture of North America but not knowing where all the state lines are.  With Sub-Clipper, we see where all the edits are and the names of the visual effects shots.  Now when we load it into the Pablo, it looks like the original files.  We see it organized as if in an offline.

VERTOVEC: It's analogous to a standard tape-to-tape correction suite where you have the timecode of the long-play tape but not the source timecode or original footage.  Sub-Clipper reverts the whole DI to a simpler form of itself.  If you do a standard DI, the conform system looks for the camera rolls and loads the shots in.  The problem is that we're destroying the relationship of the clips to the original camera reels.  If you move a shot, you're changing its position in the reel and negating any relationship it originally had in the sequence.  Sub-Clipper re-establishes that, so we can leapfrog the metadata over from the assistant editors.  We can spit out all those frames into smaller sequences and then re-stripe the timecode of every single one of those sequences, so we have the actual VFX shot names and the original camera timecode.  The advantage is that if there is a re-conform or editorial changes -- which there always are -- we reload the new edit and it'll just move the shot to the new place in the timeline.

Color correcting The Girl with the Dragon Tattoo in such high resolution was a big deal.  Being at 4.5K is about five times the file size of 2K and five times the processing power needed to calculate the color correction.  Even if we had tripled our infrastructure and tripled our throughput, it would still be twice as slow as a regular 2K DI.  We're working at full resolution in the Pablo at all times.

The way I work with David, which we did before and was very successful, is that he'll come in and set key frames with me.  So we won't work through the entire scene.  He'll show me the shots, we work on them together, and then he'll ask me to match the whole scene to one shot.  This method saves David's time and allows me to finish a reel unsupervised.  Then when he comes in to review the reel that I've worked on, I'll record our thoughts on a Flip camera and use this as my director's commentary.  That gives me a day or two's worth of notes to address without monopolizing David's time.

CIONI: The Girl with the Dragon Tattoo is 9 reels long, which is long for a movie.  The fact that it's 4K means the files are 45 megabytes per frame.
If you think about that, the original source files from the RED camera are about 40 megabytes per second.  So this is more than 25 times larger than the original source footage, plus it's 4K, plus it's nine reels long.  The data footprint of this DI, when you add it all together, is the equivalent of about six 2K 120-minute movies.

It's really important to understand that when you're engaging in 4K at this level, the data footprint is huge.  It's not twice or even quadruple 2K.  On a linear scale, it's five times the render, transfer, drives, waiting...there are so many levels that can bite you.  There were areas where it nearly did bite us -- and areas where we were totally prepared.

VERTOVEC: We had a traditional structure, with an assist station loading files, out-loading files and doing conform operations while I was coloring in another room.  Monique Eissing was responsible for loading and prepping all the material, utilizing the Sub-Clipper application to carry over color corrections to newly revised VFX or stabilized sequences.  One pleasant surprise was how well the Pablo with Gene Pool, Quantel's shared storage solution, worked.  Gene Pool allows our two Pablos, sharing the same media, to have 4K or greater playback at all times.  We never reviewed anything at less than 24 fps.  Most systems struggle to play back even a single stream of uncompressed 4K at 24 fps.

CIONI: With Gene Pool and Monique, we could multi-task.  It's like having two colorists working at the same time.  We also enlisted the help of two other components.  We have multiple DVS Clipsters equipped with 4K acceleration boards that allowed us to encode different types of files in 4K in virtually real time, which helped us tremendously.  And we had a 10-gigabit Ethernet link between the Clipsters and Pablos.

We also used Shoeboxes, which are Light Iron's version of a shuttle drive, but on steroids.  We could push files around via "sneaker-net" or we could move them at greater than 500 megabytes per second.  When we delivered the digital master -- all 230,000 frames, properly organized -- to Deluxe for film-out, the only way to move that number of terabytes and check it was to use Shoeboxes and a very fat pipe, a SAS connection.  The solution is never just one component -- it's a series of steps that need to be planned.

On a purely technical level, the color correction was the easiest part of working on Dragon Tattoo.  Delivering this film -- which was invisible to Fincher -- was the most difficult thing we've ever done as a facility: getting the footage into the facility and delivering it out of the facility.  When both Pablos and both Clipsters were working non-stop, 24 hours a day, the facility was playing back 4 gigabytes per second.  Our network operated at that level for days and days, and we're impressed with that.  It's all due to SAS, fiber and 10-gigabit Ethernet in unique configurations for each step, harmoniously working together.

SAS (Serial Attached SCSI) is a protocol based off of E-SATA, but it moves data 12 times faster than FireWire.  It's like taking four E-SATA cables and threading them together.  It's a very small connector that can push data up to almost 1.5 gigabytes per second on its own (provided the attached storage can support the bandwidth).  That's important for us with the Sony F65 because its files are enormous.  SAS is a connection that a lot of people need to take a serious look at.  We're still surprised that most facilities use antiquated protocols to move data around.
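The bandwidth arithmetic in this interview is worth running yourself.  Here is a short Python sketch using only the figures quoted above (45 megabytes per 4K frame, 24 fps, a 2:38 running time, and a roughly 1.5 GB/s SAS link); these are the article's approximations, not measured values.

```python
MB_PER_FRAME = 45      # quoted size of one 4.5K DPX frame
FPS = 24
RUNTIME_MIN = 158      # 2 hours 38 minutes

frames = RUNTIME_MIN * 60 * FPS
total_tb = frames * MB_PER_FRAME / 1_000_000
stream_mb_s = MB_PER_FRAME * FPS

print(f"frames:           {frames:,}")             # 227,520 frames
print(f"DI footprint:     {total_tb:.1f} TB")      # ~10.2 TB per full pass
print(f"one 4K stream:    {stream_mb_s} MB/s")     # 1,080 MB/s sustained
print(f"streams per SAS:  {1500 // stream_mb_s}")  # a single saturated link
```

That single-stream figure is why the 4-gigabyte-per-second facility load and the 10-gigabit links quoted above make sense: one uncompressed 4K stream alone saturates most conventional connections.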
VERTOVEC: People are used to film and videotape, which only run at 24 fps.  If you had to transfer footage from one place to another, it always transferred in real time.  With 4K data and files over 40 megabytes a frame, that's nearly a gigabyte a second, and that's the challenge.  Few people have technology that runs at that speed.

CIONI: In a data-centric world, clients have looked into our machine room, seen a couple of Mac towers and thought they could do it on their own.  But the complexity of the technology has increased.  In actuality, their ability to do it themselves is as out of touch as when we printed film.  Working in 4K is like 2K was several years ago, only four times bigger and six times more dynamic.

Over the course of working on The Girl with the Dragon Tattoo, I re-learned that every time technology comes out that advances something, an artist will use it.  It's our job as a facility to make that new technology as transparent as possible to the client.  Directors like David Fincher will never stop pushing the boundaries, and companies like ours should always operate outside of our comfort zone.  We need to keep inventing ways to make this technology as transparent -- and available -- and empowering as possible.

The Girl With The Dragon Tattoo images ©2011 Columbia TriStar Marketing Group Inc.  Images with Rooney Mara by Giles Keyte.  Daniel Craig in the snow photo by Baldur Bragason.  Title image background photo by Merrick Morton.
michaelcioni · 13 years
Text
THE MUPPETS MEET 4K
35 years ago, Jim Henson and his team began broadcasting their one-of-a-kind half-hour television comedy program, "The Muppet Show."  Over the last three decades, 10 additional Muppet movies were made for the big screen and home video, enabling this unique cast of characters to transcend at least two generations.  Though the Walt Disney Company purchased the Muppets in 2004, the last major Muppet theatrical release was 12 years ago with 1999's "Muppets from Space."  But the latest Muppet movie is taking the franchise to a new level, and I believe Disney and the filmmakers have taken advantage of a powerful technological edge: file-based digital cinema.  The result may seem small, simple or even typical, but a lot had to happen to make Muppets what I consider a digital cinema milestone.  And the story isn't all about technology; it's about how the technology breathed a new dimension of creative potential into an entirely new and exciting chapter of the Muppet history books and, thus, an entirely new generation of Muppet fans.  Being a part of it is not only a privilege, but one of the most rewarding projects I've ever worked on.
So why is digital cinema significant to a seemingly timeless act like the Muppets?
Like most stories, it starts at the beginning.  In the late 1970s, baby boomers loved The Muppet Show during its 5-year, 120-episode run, to the point where it's hard to find people who weren't fans of the original series.  By the late 1980s, way before I was old enough to have a Blockbuster card, my brother Peter and I would rent movies from our church library (I guess they thought renting movies to 8-year-old children through a church was okay, even though it wasn't cool with Blockbuster).  Encouraged by our parents, who loved the Muppets, I recall renting the same Muppet movies several times a year ("The Muppets Take Manhattan" and "The Great Muppet Caper" being our favorites).
Nearly 10 years later, by the time I was in high school, the Muppets were still making movies and keeping the attention of my generation.  During high school, "Muppet Treasure Island" came out and my friends and I couldn't get enough.  I remember realizing I was finally old enough to understand all of the jokes I couldn't get as a kid; until that time, I didn't catch all the jokes most of the Muppet movies packed in for an older audience.  The effects and the modern themes hidden in the story behind R.L. Stevenson's "Treasure Island" seemed to make a perfect fit for a contemporary Muppet rendition.  I even remember a strangely familiar Muppet in "Treasure Island" that looked a little too much like Kurt Cobain.  Because Kurt had died only a few years earlier, old photos were all that circulated, and his trademark striped sweater and long blonde hair seemed to be the inspiration for this Muppet, which always made me smile.
In August of 2010, collaboration on a new Muppet movie under the Disney umbrella ended up blowing in my direction.  A lot had to happen in order for this to work out, both creatively and technically, but the makings of the first digital Muppet movie seemed comfortable to everyone involved.
At this time, we were about two-thirds through the shooting of Disney's Pirates of the Caribbean 4, which was the largest RED (MX) movie ever shot at that time, one of the largest 3D movies ever produced, and Disney's first tent-pole film shot on a file-based camera.  While the Alexa had entered the market a few months earlier, ARRIRAW was not yet available, so my discussions with Disney prior to meeting the filmmakers centered on RED being the right tool for the job.  That notion, plus our collaboration with Freehill Productions on Pirates, removed any concern Disney had about getting solid pictures and a solid workflow from RED files.
I think DP Don Burgess had the same idea as Disney.  Don had shot an early theatrical release on RED (non MX) called "The Book of Eli."  This highly stylized movie played an important role in pushing digital cinema forward, especially since 75% of that film is shot in direct sunlight and the whole film looked great.  On September 9th, 2010, Jim Jannard, Jarred Land, Deanan DaSilva and I met Don at RED Studios to talk about RED MX, the latest color science, early EPIC cameras and my proposed fully on-set Muppet workflow.  I was working at RED on The Social Network that summer, which was the first MX film, so I remember talking a lot about the sensor as it was still fairly new to the community.  About a month earlier, Graeme Nattress had finished development on REDLogFilm and REDGamma (v1) and it was just coming out for experimentation.  RLF was really offering a significant benefit in terms of additional dynamic range which we hadn't yet been able to use on features at that time, including The Social Network.  We knew this would be an added advantage for Muppets.
Don struck me as an instant master of the craft.  I had always been a fan of his work, partly because one of my favorite movies of all time is Robert Zemeckis' "Contact," and I love how so many of Don's movies incorporate visual effects in ways that are nearly impossible to notice.  After getting to know him better, I believe it is largely because of Don's experience with visual effects, digital cinema capture and digital intermediate that "The Muppets" looks so amazing.  He truly is a master of the craft.
On September 28th, we shot the first Muppets test at Disney Studios with the puppeteers.  My partner Chris Peariso and I provided on-set data support with our OUTPOST cart and started to carve out a workflow with all the specific Disney departments that needed unique versions of the content.  The test we shot was hysterical.  I was told it might make it as an "easter egg" on the Blu-ray, but this was no ordinary camera test.  I wasn't the only person behind the camera laughing at the content; in fact, the test was shot in the executive offices on the Disney lot!  Imagine Disney execs like Jeff Zacha and Leon Silverman working in their offices and, when they walk into the hall, being greeted by Muppets on skateboards towed by a RED camera.  Since Zacha and Silverman are big advocates of this technology, the whole day was probably like any other; part of me wants to think that on the Disney lot, Muppets in the bathroom is "normal."
The test was as funny as it was educational and practical, and Don himself even made an appearance in the test when Fozzie asked, "Why are we even here doing this?"  Kermit answered, "It's a camera test! (points at Don) We need to know how we look on a digital camera and figure out how to do perspective cheats!"
On October 28th, production began shooting for 3 months.  Working again with Freehill Productions, Cory Schulthies operated one of our OUTPOST carts, and the data for this job was 100% done on set.  This was a big deal for Disney because it was the first time they were going to get everything done on set by (essentially) one person.  On some previous work with Disney and ABC, the task of downstream data was shared between our on-set OUTPOST systems and post houses.  But for Muppets, the studio and filmmakers agreed to move forward with all of the data management taking place on set.
Even the dailies for the filmmakers were screened on set, which is never an easy thing to accomplish.  Thanks to the help of Jeroen Hendriks' mobile trailer, the filmmakers could review their work from ProRes 422 transcodes with a Panasonic projector right on location and on the same day.
Cory and the OUTPOST cart on set:
Don and James reviewing takes an hour after photographing right on set:
While there were projects in 2010 that did similar workflows, I'm fairly confident in saying that Muppets was the first movie of this magnitude, for a major studio, that literally did not have a post house on the show at all until the DI.  Archiving, LTO, dailies color, syncing, web deliverables, visual effects pulls and temp conforms were all done by the OUTPOST operator, Disney's on-site DEPOT backup and the Muppet editorial team.  Muppets followed our recommended workflow exactly, creating a completely self-sufficient machine that was independent of outside, third-party post production support.  At the scale Muppets operated, with the talent and politics involved, this was a huge task, and it couldn't have been done without the blessing of Don and director James Bobin, Jeff and Leon, the talented and forward-thinking DIT Carissa Ridgeway, and one of the sharpest and most up-to-date post supervisors out there, Jill Breitzman.
A good workflow on paper should always be simple.  I'm shocked when I see workflows that look more like the electrical plans of an office building than a flowchart.  If a workflow cannot pass the Occam's razor test, its overt complexity on paper is likely to manifest itself in practice, making it hard to execute in reality.  The Muppets workflow was simple, streamlined and proximal.  Cory was on set and worked between the production party (led by the DP and DIT) and the post production party (led by the post supervisor and assistant editor).  Putting the heavy horsepower of OUTPOST with Cory on set allowed him to satisfy the needs of both entities while simultaneously eliminating unnecessary third-party involvement such as a post laboratory...including Light Iron!  And that's the way I like it.
The Muppets workflow as it was finalized:
As an example, while on set each day, Cory created the following elements as they happened.  No entity outside of Cory with OUTPOST and Disney's offline cutting rooms was involved in this process:
Triple backups of R3Ds: 2x RAID 5 and 1x RAID 0 (see the verification sketch after this list)
REDCode 36 @ 16:9 (4096x2304)
Color: looks set by Don or Carissa, applied by Cory
Synced sound: audio fed and synced in REDCine-X (v. 400+)
Avid files: DNx115 (MXF) were chosen because of expected test screenings in 2011
ProRes files: 422 (MOV) used for screening in a portable RV screening room made by Jeroen Hendriks
H264 #1: for Disney intra-net (1080 @ 8Mbs)
H264 #2: for Web & iPad distribution (720 @ 2Mbs)
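Since everything downstream depends on those triple backups being trustworthy, here is a minimal Python sketch of the copy-and-verify idea: clone every R3D on a camera card to multiple destinations and compare checksums before anything is erased.  The mount points and hash choice are assumptions for illustration; OUTPOST's actual tooling is proprietary and almost certainly more sophisticated.

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path, algo="md5", chunk=8 * 1024 * 1024):
    """Checksum a file in chunks so multi-gigabyte R3Ds don't fill RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def backup_and_verify(card, destinations):
    """Copy each R3D on a camera card to every destination, then verify."""
    for src in sorted(Path(card).rglob("*.R3D")):
        src_sum = file_hash(src)
        for dest_root in destinations:
            dst = Path(dest_root) / src.name
            shutil.copy2(src, dst)
            if file_hash(dst) != src_sum:
                raise IOError(f"checksum mismatch: {dst}")
            print(f"ok  {dst}  {src_sum}")

# Hypothetical mounts: two RAID 5 volumes plus one RAID 0 shuttle.
backup_and_verify("/Volumes/RED_MAG_A001",
                  ["/Volumes/RAID5_A", "/Volumes/RAID5_B", "/Volumes/RAID0_SHUTTLE"])
```

Only after all copies verify does it make sense to clear the camera media, which is the entire point of doing this on set rather than back at a lab.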
It was actually during the camera and makeup tests that I started noticing something different about the Muppets.  There was a small joke floating around the set based on an inaccurate rumor people repeat: "RED can't get good skin tones."  While I personally have never had trouble with RED skin tones, the joke we made on Muppets was the very real concern over Muppet skin tones!  It sounds strange, but it's actually very important.  A major actor might have a slightly different look in tone from movie to movie, based on the time of year, the type of lighting or filtration, or the color correction at the end.  So long as the look is "natural" or "appropriate," no one would probably notice.  But unlike people, the Muppets have a very distinct set of colors.  Kermit green is bright, but not electric.  Fozzie orange is semi-saturated, but not too brown.  Gonzo is blue with purple, not purple with blue.  It was critical that the RED capture the colors of these characters so they look, essentially, "perfect" based on 35 years of memory.  Until now, most memories you have of the Muppets were photographed on film, and almost none of them went through any precise digital color correction whatsoever.  But it was clear after a couple of days that Muppet skin tones were looking perfect, and people were immediately talking about the content, not the technology (which is more important).
But as soon as people started looking closer at the iPads and the projected dailies, everyone realized there was a new dimension of the Muppets we had not seen before: texture.
You know what Kermit looks like, but when you see The Muppets, you will know what he feels like.  You know what Rowlf looks like, but when you see The Muppets, you will know what he feels like.  This additional level of dimensionality brings these characters to life in an entirely new way.  I didn't see it coming, but it was clear to me that what Don was lighting and what the RED MX was capturing was a combination the Muppets have never experienced before.  This merging of elements (Don and Book of Eli, Disney and Pirates, RED MX and EPIC, REDLogFilm and REDGamma; the list goes on and on) was a fantastic intersection of talent and trust that enabled this film to look the way audiences will experience it upon its release tomorrow night.
A common criticism of digital cinema in general is that the texture (or lack of grain) doesn't produce a look as good as film.  But with Muppets, I believe it was film grain itself that robbed the Muppets of the unique multi-dimensional textures that were always there.  When I first saw the Muppet puppets at an early camera test, I asked a puppeteer how old they were (since they looked exactly as I remembered them).  He told me that some Muppets have multiple puppets, which is to be expected.  For example, Kermit with legs is different than the Kermit you wear on your arm.  But to my surprise, he also mentioned that some of the Muppets are still the originals!  That made me feel good, because it tells me the characters I and millions have come to love are some of the same physical characters we've seen in the past.  But it also told me that the same Muppets I've laughed with in decades past were never fully translated to film.
As I mentioned before, the only portion of "The Muppets" that was done at a post house was the final conform, DI and film record.  We did the conform on a Quantel Pablo 4K, using the RED Rocket for debayering into P3 with REDLogFilm and REDColor2.  Other than small adjustments to some metadata fields, the preparation that makes these files look as good as they do relies on elements that, I'm pleased to reiterate, everyone has access to.  Corinne Bogdanowicz pre-colored the film as the reels were locked one by one.  Soon after, Don, Michael Burgess and James came in for some of the most efficient collaboration we've had in DI, hands down.  And because of the forward-thinking leadership of Disney folks like Jeff and Leon, "The Muppets" is a 100% end-to-end file-based success.  Not a single tape was made to create the content people are about to see or have seen.  From the trailers, the DCP and the Fuji film record to the home entertainment deliverables such as Blu-ray and iTunes, only files were created from the 1:1 master (DSM), all sourced directly from the Pablo.  By significantly minimizing the translations created by various tapes, "The Muppets" is among the most "pure" of films, literally the closest an exhibition format can be to the original source.  Props to Disney for taking a much-needed anti-compromise stand in file-based acquisition, exhibition and distribution.
And so the latest chapter of the Muppet story brings with it an entirely new level of audience interaction.  I am convinced that every person who sees "The Muppets" will have a strong, positive reaction to how it looks and feels, even those who are not savvy enough to fully comprehend what 4K digital cinema and file-based delivery bring to the film.  Jason Segel, James Bobin, Don Burgess and many others deliver a film that will not disappoint on a creative and story level.  But for people who are savvy enough to go beyond the great story and peer closer at the images as they unfold this Thanksgiving season, I encourage you to examine a level of textural detail rarely experienced on the big screen.  Part of what makes the Muppets such believable and lovable characters is their very being, and thanks to the MX, Don's lighting and Corinne's skills as a colorist, the textures and nuances of these beings will hopefully enhance the experience of new and old audiences so they can better suspend disbelief all over again.
|m|
michaelcioni · 13 years
Text
SCARLET's WEB
In typical RED fashion, the latest announcement of SCARLET-X has as many people excited as concerned.  There are a lot of opinions on the matter and I have enjoyed hearing the multiple perspectives from many corners of the market.  But some of the discussions came from what looked like a "reactionary point of view."  Granted, many people have been making serious assumptions about what SCARLET was going to be, largely derived from RED's own literature and demonstrations, but RED has been clear that everything changes.  Disappointment is hard to avoid when you count on something that changes, but that has been a struggle bleeding-edge professionals have all gotten used to.  SCARLET is affecting a slightly different set of consumers and those consumers are new to the "fringes" territory, and have yet to learn how to fully manage technological unpredictability.  Satisfied and safe participation on this level means you must learn how to:
1. Predict
2. Prepare
3. Adjust
4. Troubleshoot

Without considering each of these steps, disappointment will be a familiar feeling.  This can be seen in many of the reactions posted online, which often come from people who are not used to this type of bleeding-edge development and implementation cycle.  The "bleeding edge" was coined just that because you routinely get cut.  But in an effort to help deconstruct some of the main issues, I wanted to spend some time examining the situation, the market and the reactions, and use that to draw some conclusions.  To those concerned about SCARLET market disruption, pricing models or unnecessary specs, consider this:

SPECS

It is true that a Venn diagram of EPIC and SCARLET demonstrates a considerable amount of overlap.  To some, this may constitute a disadvantage to EPIC:

• EPIC is perceived to now be priced too high
• EPIC does not offer enough features
• EPIC owners do not want to compete with SCARLET rentals

Ironically, to others, the same overlap constitutes a disadvantage to SCARLET:

• SCARLET is perceived to now be priced too high
• SCARLET offers too many EPIC features
• SCARLET owners do not want to compete with EPIC rentals

Can they both be right?  I agree that, from varying perspectives, all of the above statements hold some water.  Most of them sound like they come from a filmmaker's perspective.  But a filmmaking perspective is, in itself, incomplete.  We also need to examine the Venn diagram from a business perspective, from which I draw the following conclusion: all SCARLET feature-driven disadvantages are offset by its low barrier to entry.

This means the price of getting a SCARLET is low enough that its feature set is justified without cannibalizing EPIC rentals or owners.  This is possible because SCARLET and EPIC (ironically) are both stand-alone products for which competitors do not offer similar systems.  Four years ago, the RED ONE body alone cost more than a fully-functioning SCARLET, yet SCARLET's feature set borrows more from EPIC than it does from the RED ONE.  The price of entry for a RED brain has dropped 45%, from $17,500 (RED ONE) to $9,750, in four years.  That massive shift toward a significantly lower barrier to entry, coupled with the favoring of features from the EPIC, makes the SCARLET the best price-per-feature purchase, not the most over-priced.

SATURATION

The RED ONE is the first mass-produced cinema camera ever.  The number of RED ONE cameras outnumbers the Sony F900, Arri D20 & D21 and Panavision Genesis combined.  You could probably add in the total number of Arri 435 cameras (around 2,000 built) and the RED ONE and EPIC cameras would still outnumber the entire fleet of cinema systems combined.  The side effect is that Jim gave birth to a significant number of owner-operators of cinema cameras.  For the most part, this was a market that did not previously exist, as there was no supply of high fidelity cinema systems in mass quantities from a single manufacturer.  The market effects can be likened to a series of earthquake aftershocks as the earth slowly settles after a large tectonic shift.  Admittedly, because the market had never experienced mass production of cinema equipment before, it is still carving out trends that many people have been capitalizing on while others are missing out.  Adding SCARLET to the mix will be somewhat difficult for all parties because we haven't yet fully recovered from the original "RED ONE quake."
RENTAL

Every rental market is about the shortest achievable amortization of the equipment that is purchased, especially cameras, because rental agents, production companies and owner-operators all know that when it comes to cameras, "...there is another..."  There are different formulas for different pieces of equipment; some require years of amortization, some require months and others only weeks, depending on the product.

A large Hollywood rental agent once told me when they purchased 8 SONY F35 cameras: "We buy F35 cameras at a total package price of about $240,000 each.  We are forced to spend that money in order to keep our clients from going elsewhere, knowing we will likely never make a profit on the cameras themselves.  The amortization period of the F35 is not necessarily the camera's own lifespan.  The profit is found in accessories and glass.  For the most part, HD tape cameras are largely an unfortunate cost of doing business for many prominent rental houses."

HISTORY

In 1999, the two rivaling low-cost top dogs were the Sony PD150 and the Canon XL-1, both modestly priced around $5,000 USD.  Adjusted for inflation, the same cameras today would cost $6,650 USD.  A decade ago, the specs these cameras provided were being used just about everywhere, including a bit of narrative motion picture filmmaking.  Cable, reality, independent, documentary, web and even a few specialty shots in films like "28 Days Later" were common places to see these two 25Mb/s, 525-line cameras.  But these cameras were not designed as cinema solutions.  Most of us recall how important these cameras, and the first native data transport into a computer, were to progressive filmmaking.  I believe these systems were a very real part of the birth of digital cinema, even though they themselves were not designed for cinema use.  The success of this story is that 12 years later, the specs of the SCARLET outperform the lineage of where Canon and SONY cameras have ended up in 2012.

CONCLUSION

So after examining my thoughts on the categories above, this is the conclusion I come to with SCARLET-X:

If you are a cinema system owner-operator, you are in a market that Jim Jannard almost single-handedly created himself.  While criticism is fair, and I know Jim gives pause to all criticism, you must realize that criticizing how Jim is affecting cinema system owner-operators is actually criticizing the very person who created that market.  Similar to the criticisms of the move from OS 9 to OS X, Apple has the sole authority to evolve platforms it created.  I personally believe it is well within Jim's jurisdiction to evolve a market in which he is the majority.

If you are a professional of any degree, meaning that you are paid in some form for your work, whatever that is, you will have the ability to write off your SCARLET or, even more likely, rent it out.  A $15,000 SCARLET package is comparable to a fairly cheap car, like a used Honda Civic.  Most of those cars have a 24-36 month payback with very low interest these days.  If you are in a market where you do not use your SCARLET every week, I would choose a 36-month payback.  If you use it a lot, you might want to finish the amortization in 25 weeks.  Everyone is different.  A 36-month payback with 0% interest is $416 per month.  A 25-week payout is $600 per week.  Anyone who considers themselves a filmmaker on any level can fit within one of these groups.
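As a quick check on those payback numbers, here is a tiny Python sketch of the same amortization arithmetic.  The $15,000 package price and 0% interest come straight from the paragraph above; everything else is simple division.

```python
def payback(price, periods):
    """Evenly amortize a purchase price across a number of pay periods."""
    return price / periods

PACKAGE = 15_000  # SCARLET body plus basic accessories, as quoted above

print(f"36-month payback: ${payback(PACKAGE, 36):,.2f}/month")  # ~$416.67
print(f"25-week payback:  ${payback(PACKAGE, 25):,.2f}/week")   # $600.00
```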
If you cannot fit within this, you also do not own a car, do not have cable and cannot afford to eat.

What this comes down to is a skewed comparative perception: what is the right price for a cinema camera?  Looking at the numbers without consideration of features:

• Arri Alexa = 75K
• Sony F65 = 70K
• EPIC = 40K
• Canon C300 = 20K
• SCARLET = 10K

So SCARLET only seems expensive because some people are currently comparing it to the 5D and 7D.  But that's an inaccurate comparison, and it can be proven by last week's Canon announcement.  Canon has released the C300 as a digital cinema solution for 20K, roughly twice the price of a SCARLET.  So, features aside, the comparison of a SCARLET to a 5D is not a correct comparison at all, because the 5D and 7D are not built for digital cinema.  All this is to say that SCARLET is in a class all its own, giving the benefits of digital cinema at a price that no other manufacturer is able to offer.

SCARLET's value is worth more than what many people are currently perceiving, and that's partially why they feel duped.  If you look at the market and evaluate the other cameras, you will not find a cheaper version across town because there simply is none.  In other words, it's impossible to compare SCARLET's pricing to anything because there is literally nothing to compare it with.  The closest camera with SCARLET-like features is the EPIC, and (I revert back to a previous point) all SCARLET feature-driven disadvantages are offset by its low barrier to entry.

My guess is that manufacturing different camera systems (such as a 2/3-inch or fixed-lens SCARLET) would cost RED more than $3,000 per unit.  That sounds backwards, but RED is doing consumers a favor by keeping the housing the same.  It means the price per unit of each accessory can be amortized across two camera lines, which drives down prices across the board and increases saturation.  The trade-off is consumers get more features for a better price, and they get them sooner.

FINAL THOUGHT

The SCARLET is 47% more expensive than a PD150 or XL-1 was when they were released.  But far more importantly, the SCARLET is 47% cheaper than the nearest current models from those camera companies today.  Consumers need both the past and the present lineage to determine whether they are being ripped off, and this data shows that SCARLET is priced right in the middle.  Don't think about the price of a SCARLET; think about the price per unit of features.  SCARLET features are in a league all their own.  It means the cost of renting the camera can be as little as a 25-week amortization, which means the cameras will be available for the price of the accessories.  That's great for renters because they can purchase brains very cheaply and rent the glass.  It's also great for rentees because they can get access to the brain and shoot pictures identical to most EPIC feature films.  Oversampling at 4K is still unfortunately misunderstood by so many people, but it's a critical component that most camera companies are finally giving credence.  When we shoot in 4K with REDCODE, there are almost no disadvantages.  This is critical for everyone releasing theatrically and on the web.  To think that the SCARLET is overkill is extremely short-sighted.  Just like anti-lock brakes eventually became standard on all cars, so will 4K on all cameras.  I've been saying that since 2006.  Now it looks like the majority is starting to feel that way.

| m |
michaelcioni · 13 years
Note
Michael - RED has listed on its website that REDCODE RAW is capable of 12 and 16-bit RAW, with compression choices of 18:1 to 3:1. Is there a way to select what bit depth is being captured? I couldn't find that anywhere in the menus. In my mind, the lower the compression, the higher the bit depth. If that's true, does that mean that from 3:1-10:1 the bit depth is 16-bit and subsequently 11:1-18:1 is 12-bit? Appreciate the opportunity to ask questions.
Great question!  Bit depth is not necessarily contingent on compression.  With many cameras like RED, bit depth is not specifically limited by the compression scheme, so it's possible to record at multiple compression ratios while still maintaining the same bit depth.  If you shoot 3:1 or 12:1 compression, you are still capturing 12-bit RAW data; it is simply compressed into a smaller package.  I will check with RED on the exact bit depth options with regard to their compression relationship and let you know the exact numbers.
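In the meantime, here is a minimal sketch of why the two are independent.  The 4K frame size, 24fps, and 12-bit depth are illustrative assumptions, and the math is a back-of-the-envelope estimate, not RED's actual encoder:

    # Bit depth stays fixed; the compression ratio only shrinks the package.
    def redcode_rate_mbps(width, height, bit_depth, fps, ratio):
        """Approximate recorded data rate in megabits per second."""
        raw_bits_per_second = width * height * bit_depth * fps  # one sample per photosite
        return raw_bits_per_second / ratio / 1e6

    for ratio in (3, 8, 12, 18):
        rate = redcode_rate_mbps(4096, 2160, 12, 24, ratio)
        print(f"{ratio:>2}:1 -> ~{rate:,.0f} Mb/s, still 12-bit RAW")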
m
michaelcioni · 13 years
Text
To Mr. Jobs
To Mr. Jobs,

You didn't simply inspire people to be creative; you invented outlets so creativity could thrive.  You paved a way for people like me to succeed on our own, but my career was only made possible because you possessed the foresight I didn't.  I didn't know how much you impacted my life until I was mature enough to start realizing I am not the only reason I am special.  -You are a legitimate part of why I am special.  I need you to know that I copied you in so many ways simply because I respected you and wanted to be you; because you are better than me.  I never copied you because I needed to make a product look like Apple made it (which I know has often been the case).  I copied you out of recognition of a near-perfect track record and perfect execution.  Imitating you makes me feel like your fingerprints are a part of my own ideas, and that has always been a safe place for us all.
No doubt, people will liken your life and tragic death to the death of a celebrity or rockstar.  Needless to say, I've cried over such deaths in my own past.  But your contribution to this world was not so elementary as to merely entertain it - it was something far more powerful and far more noble: your contribution to the world was to allow the world to empower itself.
More amazingly, your unparalleled ability to empower people to be creative has become such a normal way of life that it has actually made you invisible to the majority.  Your ideas were breathed into life because you knew they were for the benefit of people who would not celebrate you.  This is precisely why you will not be remembered like a celebrity or a rockstar entertainer.  You were not self-seeking in your greatness, and your ideas were not meant simply to "help us forget our problems" - the typical boundary line that so many entertainers are limited to.  You went to the edge of potential and dared to go beyond for the benefit of others, not just for the sport of it or to reap the rewards.  Your ability to share what I believe is among the greatest gifts God has blessed one of His children with was used, in fact, to better our culture.  Your hands paved a new runway for business, creativity, and communication, something no one had achieved alone before.  The ambition you most certainly possessed was, in fact, for the betterment of strangers whom you would never meet.  I am one of them.
In the summer of 1999, my roommate, Ian, and I embarked on a new journey that would truly change our lives forever.  We were in school (Carbondale, Illinois of all places), and Ian began to research a better way for us to grow as filmmakers.  The iMac was new, and there was a build you were releasing coined "The DV Special."  Coupled with the announcement of FireWire, the SONY DSR (DVCAM) series, and the G3 tower, you began to build the beginnings of the most monumental change in content creation's history.  We were 19 years old and had our tickets to ride with you every day for the next decade.  Thanks to the help of our families, Final Cut Pro and the SONY PD150 became our 3rd and 4th roommates.  Barely men, Ian and I began learning the trade of software manipulation through digital capture and motion graphics.  Within two years, we won 5 Emmys, went to Cannes, began distributing our self-authored DVDs, changed Southern Illinois University's media program forever, and moved to Los Angeles with nothing except our computers.  Since that day, your tools have taught me how to create, and I've taught others how to create with them.  And it is through these tools that my own personal stories have been told and shared with so many others.  Thank you.
In my opinion, the "deepest" of all your PIXAR films is "Ratatouille."  Though the undertones of this film are largely reminiscent of the struggles between Walt's original desires for the Disney company and his successors' failed attempts, the heart of this film is wonderfully grand and pleasantly simple: in the film, Anton Ego doesn't at first realize the impact of Gusteau's doctrine, "Anyone can cook."  But Gusteau was applying the humility of a great leader and mentor, trying to encourage as many people as he could that each of them possessed the potential for greatness.  Ego eventually realizes this is what makes us human, and best sums up his understanding: "Not everyone is a great cook, but a great cook can come from anywhere."
By this act, you have literally created millions of millionaires.  You have enabled millions of artists to create and compete.  You have connected millions of people who are far apart.  You have simplified millions of businesses and their transactions.  You have influenced the look of an art movement and redefined advertising and branding for every company on the globe.  You have inspired the development of other developers and made millions of us contributors to your brand.  And while all this happened to millions of common people, you still had the foresight in 1986 to build an animation studio and managed to entertain us as a hobby.
I believe the greatest invention of the last 100 years is the iPhone.  I also believe that was your greatest achievement.  The iPhone is the most complex device on the planet that has no manual and requires no training.  I'm convinced that if you were hired to build the space shuttle instead of Lockheed Martin and Boeing, it probably could be flown without much training.  On June 29th, 2007, I excitedly waited with my friend Steve and a few others for the iPhone.  We actually took off work and waited for some 12 hours so we could ensure we would leave with an iPhone that day and be a part of what we recognized as history.  The first photo I took with the iPhone was, appropriately, of an iPhone displaying a picture of an iPhone.
Fittingly, for a device that has become nearly a part of my being - never more than 24 inches away from me at any time, day or night - it was your own invention that serendipitously delivered the news of your death to me and millions of others.
What you did, Mr. Jobs, that so many others have not dared to do, is empower God's earth with an unparalleled level of potential for great cooks to come from everywhere.  I am but one of many who have taken advantage of your generosity, and I only wish I had the foresight to empower others in the way you have empowered me.
Thank you.  I pray for your salvation in Heaven.

michael
michaelcioni · 13 years
Note
Michael... it was a pleasure meeting you at RED Studios as well as getting an insider's look at Light Iron. Thanks for taking the time and letting us have a peek into your world. :-) I was wondering if you will be teaching the REDucation Post Production class on December 8th and 9th?
I will be :-)
REDucation will be very exciting this December, as we have some very neat plans and some special guests that will make it a REDucation you won't want to miss.  I am really looking forward to this one.
m