In many ways, Saurora could be a perfect example of how not to approach your first short film. From a questionable script to significantly changing the concept midway through post-production (while fixing the story, of course), those kinds of missteps cost me not only time and resources but also sound judgement over the coherence of my story. And even though I heavily reworked or completely omitted some already finished assets and shots to keep everything connected, I was still left in post-production with largely incompatible pieces that I simply didn’t know how to join. All of this was going on during long nights spent wearing every hat on the list, while still having to function in a daytime job to pay the bills and fund the project.
And as little as I knew back then, I never imagined that the “story of survival” I was dying to tell on screen would eventually happen to me as well. Not just in front of the camera, but behind it too.
Now that I’ve played the empathy card, I can begin to tell you what went right. Because unlike my fictional hero, I survived (oops, spoiler alert) and made all the pain worth it.
So even though the storyline got sidelined, I’m really proud of this film and of the fact that I never backed off from pursuing my vision. This wasn’t the common type of short film you write down one week and finish the next. It took almost two years of day-to-day hard work from first idea to final cut, presenting the biggest test of my commitment so far. Which, in the end, led to some pretty satisfying results, especially when you consider the self-funded, micro-budget nature of the project and the fact that only one person worked on it most of the time (the few great people on my team who stood by me through all of it are mentioned at the end; without their help I couldn’t have finished this film).
To me, however, the most valuable thing about this project is what I learned and discovered along the way, about filmmaking in general and about myself as a storyteller, and how it changed me. This short article is meant to map that journey. So if you’re interested and want to take a look behind the scenes of Saurora to see how this passion project of mine came to life, feel free to dive right in!
From the very beginning I envisioned this story taking place underwater, because it struck me as a visually and emotionally intense environment that hasn’t been explored as deeply as it deserves within the sci-fi pool of fictional worlds. The setting was in place from the start, but in parallel with finding the story I also had to figure out exactly how I would bring the concept to life. The danger of coming up with something unachievable is always very real. Obviously I couldn’t produce this concept 1:1 in real life, given how dangerous, expensive and time-consuming it would be. The second option was practical effects or miniatures, but those don’t work well once you throw in elements like water or fire. So CGI turned out to be my weapon of choice, basically the only option. It also really opened up the creative possibilities in terms of stunts and camera movement beyond what could be physically achieved.
But the first drafts of my script were centered around divers in neoprene suits fighting for survival directly in the water. A few quick tests captured on dry land showed, however, that integrating an actor directly into a CGI underwater medium, with nothing in between, simply doesn’t work, no matter how much the actor pretended to be underwater or how well I blended the footage in compositing software. Being underwater comes with a level of physicality that can’t be replicated within the boundaries of land conditions. I quickly realized I needed an interface between the actor and the water, something to bridge the transition, something easier to integrate into the CGI surroundings. A machine, for example, with the actor inside. And that’s how the High-Pressure Underwater Suit was born.
Original design of the High-Pressure Underwater Suit with a model of the diver inside
But the early prototypes, as you can see on the right, felt much more utilitarian and looked very different from the final version, which of course would have led to a very different kind of story. And not only did the early versions look a little uninteresting to me, they also wouldn’t have given me any wiggle room when placing the actor’s footage (shot against a blue screen) inside the suit after principal photography was over. The interior perspective lines created on set by the image of my actor (mostly static) would have had to match precisely the movement of the exterior lines defined later by the CGI submersible, because you would see the actor nearly from head to toe, demanding a perfect match with the CGI extension. Having only the actor’s head framed from the neck up was definitely liberating, and the new bipedal design gave me exactly that freedom.
However, that kind of freedom came with a hidden catch when two very different things, wiggle room and a destructive fix-it-in-post approach, got mixed up in my workflow. I was convinced that a detailed pre-visualization of my story was redundant because I had so much wiggle room I could fix basically any type of issue later in post-production. Moreover, the wheels had been set in motion before I had figured out the technical side in detail, meaning I had committed to a path that could easily turn out to be technically impossible. This entire project was more or less a series of things I had never done before. And as I was constantly challenging myself in ever-changing conditions, sometimes, in order to keep moving forward, I just had to jump out of the airplane without a parachute, with mere faith that it would somehow get solved later.
The design and model of the underwater suit, however, had to be sorted out as clearly and as soon as possible, because it was the central asset of my story. I had to pay very close attention to future motion requirements and to how exactly this thing would move underwater with a human inside. Layered accessories presented an extra challenge: things like arm chains and leg belts had to slide over the body surface or move around without intersecting the actor’s body or the suit. Some parts were even fully dynamic, which gave the suit a needed touch of realism.
Testing the suit
A great bonus was that all the visual effects were done in Blender, a free and open-source 3D tool (which can be downloaded here). Blender has a great community of users and developers around it, so lack of support was never a problem. And for all my Blender buddies out there, I have a little something for you: the source file of this machine. Feel free to grab it and do whatever you please with it. I hope you’ll have fun!
The last step in creating this asset was texturing and shading. The tricky part was developing the look mostly inside a low-visibility underwater environment while making sure the textures still stood out clearly. Consequently, I chose contrasted textures with an emphasis on edges, which helped the audience recognize the shape in dark and fast-paced shots. I also had to cast extra fake reflections onto the suit from invisible planes placed around it, because what I was getting from the actual conditions simply wasn’t enough to make things pop and shine.
Rendering in Cycles was incredibly flexible. The cockpit glass, for example, introduced an interesting situation. The small interior light was causing a lot of path-tracing noise and fireflies outside the glass, all around the shot. The cause was the glass surface itself: it was built from so many detailed layers (refracting condensed water, fogged-up parts, etc.) that it completely bewildered the light rays coming from inside. Thankfully, Cycles lets you cheat reality in a very clever way: you can keep the real light rays inside the glass and send out only modified fake ones. So for everything except camera rays, the glass emitted a fake uniform color that causes little or no noise and gives exactly the bounce-light effect outside the cockpit I was looking for, while to the camera the glass looked exactly the way it did before. I used a similar technique in several other places, for example replacing every shader with a simplified version (no detailed bump, limited glossiness, etc.) for non-camera rays, which gave faster and less noisy results with almost no visible difference.
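For the Blender buddies, here is a minimal sketch of that light-path cheat as a node setup built through Blender’s Python API (it runs inside Blender only). A plain Glass BSDF stands in for my original multi-layered glass group, and the emission strength is just an example value:

```python
import bpy

# Build a material that looks like real glass to camera rays but emits a
# cheap uniform color to all other (bounce) rays, killing the fireflies.
mat = bpy.data.materials.new("CockpitGlass")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

light_path = nodes.new("ShaderNodeLightPath")
glass = nodes.new("ShaderNodeBsdfGlass")       # stand-in for the layered glass group
emission = nodes.new("ShaderNodeEmission")
emission.inputs["Strength"].default_value = 0.5  # example value, tune per shot
mix = nodes.new("ShaderNodeMixShader")
out = nodes.new("ShaderNodeOutputMaterial")

# Fac = Is Camera Ray: camera rays (Fac = 1) get the real glass,
# everything else (Fac = 0) gets the cheap uniform emission.
links.new(light_path.outputs["Is Camera Ray"], mix.inputs["Fac"])
links.new(emission.outputs["Emission"], mix.inputs[1])
links.new(glass.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```

The same Light Path branching is what drove the simplified non-camera-ray shaders mentioned above.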
Final asset (drag and slide around it)
Looking back at past work usually comes with a certain amount of second-guessing. Now I wish I’d had more time to collect real-life references, draw detailed designs for individual body parts, try out different ideas and really nail the feel of a hyper-real diver’s accessory. But even though the final asset worked differently than I planned, it worked. Like a memento, I’m going to come back again and again throughout this article to the importance of true pre-production. Unfortunately, at the time I saw pre-production as a superfluous, mandatory, boring step before the cool stuff could begin.
It is undoubtedly the foundation of every project, and if it isn’t there at the start, it definitely won’t be there later, no matter how much labor or money you pile on top.
Anyway, having the machine ready for my shots was a very important moment. Something that had existed only in my head, roughly sketched on paper, suddenly stood and moved in front of me in full color. It was the most complex character I had ever created. But of course, things hadn’t even started to get interesting yet.
One of the biggest challenges was recreating a believable underwater environment in CGI. It was key to me that the picture have a certain quality I knew was essential for deep underwater shots. I was going for a very dark, claustrophobic atmosphere (which I probably pushed too far in some shots). And even though the evolution of the final look was quite unpredictable, it was very exciting to watch my vision gradually take shape. In the breakdown below you can see how every underwater shot in the film was created, step by step, from plain geometry to final frame.
Technologically speaking, an underwater volume is incredibly heavy in terms of computational load. I had only one workstation at my disposal, not the hundreds of machines professional productions usually have. Quick calculations showed that I had to get the time required to process every pixel of my film under control if I ever planned to deliver the entire short. The final resolution had to be cut down from full HD to half HD, and I also went for a cropped 2.35:1 widescreen format, which saved up to 25% of render time compared to a 16:9 frame of the same width.
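That saving is easy to sanity-check with pixel arithmetic, assuming render time scales roughly with pixel count (and taking “half HD” to mean 960×540, which is my assumption here):

```python
# Back-of-the-envelope check of the aspect-ratio saving, assuming render cost
# scales linearly with pixel count.
width = 960
h_169 = width * 9 // 16          # 16:9 frame height -> 540 px
h_scope = round(width / 2.35)    # 2.35:1 frame height -> ~409 px
saving = 1 - h_scope / h_169
print(f"Cropping to 2.35:1 drops ~{saving:.0%} of the pixels per frame")  # ~24%
```

The ratio is resolution-independent: any 2.35:1 crop keeps about 76% of the pixels of a 16:9 frame of the same width.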
Another thing I realized while watching real-life footage was how much the overall believability of an underwater environment relies on small details, especially those tiny dynamic particles, floating weeds, oxygen bubbles and dust blooms, all beautifully interacting with each other inside one joined dynamic container. I knew right away I wouldn’t be able to reach that degree of realism and maintain production momentum at the same time. So I had to push certain secondary aspects of underwater imagery a little further to sell the illusion: things like chromatic dispersion or light absorption (water absorbs red wavelengths first, causing the environment to appear blue and then completely dark within a very short distance). That probably didn’t make the image hyper-realistic in the end, but it communicated clearly: “Hey buddy, this is underwater. Got it? Underwater.”
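That red-first absorption can be faked with a simple per-channel Beer–Lambert falloff. A toy sketch (the coefficients are illustrative guesses, not measured values):

```python
import math

# Toy Beer-Lambert falloff for underwater light absorption: red dies first,
# blue survives longest. Coefficients (per metre) are illustrative only.
ABSORB = {"r": 0.45, "g": 0.12, "b": 0.05}

def attenuate(rgb, metres):
    """Attenuate an RGB colour travelling `metres` through water."""
    return tuple(c * math.exp(-ABSORB[k] * metres)
                 for k, c in zip("rgb", rgb))

white = (1.0, 1.0, 1.0)
for d in (1, 5, 15):
    r, g, b = attenuate(white, d)
    print(f"{d:>2} m: R={r:.2f} G={g:.2f} B={b:.2f}")
```

After a dozen metres the red channel is effectively gone while blue still carries, which is exactly the blue-then-black look described above.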
My biggest nemesis (one I never fully defeated) was omnipresent path-tracing noise, which is not surprising given the lighting conditions I was operating in. The best approach proved to be cleaning each layer and render pass individually and then putting everything back together nice and clean (more or less). The tricky part was defining and isolating the fireflies before performing common denoising. I figured that part out somehow, and the results were pleasing enough to proceed. At this point things were getting a little heavy on the technical side, and all I was really afraid of was losing momentum. Moving forward was all I had on my mind, so I was making compromises left and right.
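The firefly-isolation idea itself is simple: a pixel is an outlier when it is far brighter than its neighbourhood, and it gets clamped back before ordinary denoising runs. My actual version lived in the compositing tree; this is just the concept as a minimal plain-Python stand-in:

```python
from statistics import median

def despeckle(img, threshold=4.0):
    """Flag pixels far brighter than their 3x3 neighbourhood median and
    replace them with that median (a minimal firefly filter)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            m = median(neigh)
            if img[y][x] > threshold * max(m, 1e-6):
                out[y][x] = m       # clamp the outlier to its surroundings
    return out

frame = [[0.1] * 5 for _ in range(5)]
frame[2][2] = 50.0                  # a lone hot pixel (firefly)
clean = despeckle(frame)
print(clean[2][2])                  # back down to 0.1
```

Doing this per render pass, before the general denoise, keeps the denoiser from smearing a firefly across its neighbours.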
As for my toolset, Foundry’s Nuke proved to be the industry standard it’s taken for nowadays and played an essential role in my compositing workflow. However, the Non-Commercial license offered to artists with limited budgets like me turned out to be not so handy. Because of its non-commercial nature, I had to back out of a deal that could potentially have placed Saurora on several streaming giants a few weeks after release. I simply wasn’t aware such a deal could ever come out of a short film. The lesson: be careful about choosing the right tool, even on small projects like personal short films. Had I known this situation could arise, I would have kept the shots in Blender. Nuke is still surprisingly expensive compared to its competitors, and it’s too bad Foundry doesn’t offer some kind of clever indie license (like SideFX does with its incredible flagship Houdini) that would allow using Nuke on a smaller commercial scale and help the Nuke community grow even faster.
Re-creation of the underwater environment
There’s something magical about seeing your ideas realized on screen, in motion. And even though I now wish I’d broadened the color palette a little more, warmed up the tones slightly or made things more visible, the footage still has the vibe that initially got me hooked on this story. I’m proud of the energy those pictures have.
Moreover, the number of full-CGI shots (40+) was simply ridiculous for a one-man production, and something like two thirds of the shots I produced didn’t even make the final cut. So imagine the workload. My entire life at that point consisted of pretty much two things: sleep and shot management. Simply put, Saurora slowly got prioritized over everything else in my life.
The concept of having a real actor inside a mechanized suit underwater was very challenging and appealing to me. From the beginning I wanted an average-looking type to play my hero, someone who would ground the completely virtual environment around him in reality and make it feel like an ordinary world, and thus hopefully more realistic and relatable.
I shot all my footage inside a blue-painted garage with a special interior layer made of super-flat boards (the lights were very close, so any bumpiness would be immediately visible, and it had to be a 180° set to let me move around quickly with the camera). To keep the budget as low as possible, I tore those boards down and handed the garage back to its landlord right after I finished shooting, hoping the technical integration would somehow get figured out later down the line.
The integration process itself consisted of two phases. The first was locking the actor’s headshots in the proper place. That required the actor to stay completely still from the neck down on set while acting out a shot; the keyed-out footage was then projected onto a plane (one that always faced the camera) locked inside the suit’s 3D interior space. Once the head was in the right place, the second phase was inserting the actor into the actual suit. I had to peel off all the digital layers of my CGI image, including the glass, to get inside the suit (by rendering an exact copy of just that glass as a separate layer and literally subtracting it from the main image). The point was to decouple the actor’s footage from the main image so I could switch or adjust it without having to re-render the main image every time.
Process of integrating the actor inside CGI machine
The decoupling of the footage from the main image was further supported by the cubic design of the cockpit glass (basically four divided “screens”), which let me really mess around with every shot and even switch between different takes in the middle of one. That was when I discovered I could basically remix the entire story. Which, for better or worse, I really did.
As I mentioned before, what gave me the freedom to do anything at first backed me into a corner later down the line. With each new shot, the hunger for a completely new direction grew stronger. The right thing to do would have been to shoot new footage based on a finalized script, but that wasn’t my mindset back then. Still, I think it was the right decision to face the problems directly and redo most of the work in order to produce what I truly believed in at the time.
This was probably the most intimidating part for me. I basically had to pull off every technique I knew from the dynamic-simulation arena just to achieve acceptable results. Even so, most of the time I was simply messing with parameters to see what they did, without any deeper understanding. Every time something usable came out of my experimenting, trust me, it was a small miracle.
Moreover, the software I was using is great for many things, but simulating real-life physics unfortunately isn’t one of them, so that didn’t help. This was truly challenging. Old-school sprite particles (used mostly in games), practical effects (composited over CGI shots) and heavy dynamic fluid simulations all had to come together in a turbulent mixture to make the environment believable.
Dynamic effects in action
Bubble explosion breakdown
More complex bubbles generated in SideFX Houdini software
Test of sprite particle dust underwater
A small water drop of practical effects in the ocean of CGI visual effects
Again, I feel like I dodged a bullet here. The outcome fulfilled its purpose and helped the environment look more connected and realistic once all the layers were comped together. Countless times I went back and forth, powering through my inner and outer limitations, in pain and aversion. But it wasn’t for nothing.
Anyway, I was wondering what kind of all-embracing wisdom I should leave you with at the end of this particular section. But nothing useful comes to mind when dynamic simulation is the topic. Just do whatever it takes to get your project done, even if it’s way outside your expertise. Or better yet, hire someone who can do what you can’t.
But regardless of all the difficulties mentioned above, making Saurora came with a great deal of satisfaction. Watching something I had literally only dreamed of grow and take shape in front of my eyes was an absolutely priceless experience. Each step along the way was a step into the unknown, and I was never quite sure what was waiting for me around the next corner.
And in spite of all the odds, Saurora, the small project many thought would never even get off the ground, entered the last third of production. The light at the end of the tunnel was very close, and it felt like my two-year journey would soon be complete. But if this project taught me one lesson, it’s that I have to keep failure close and actively go after it in order to succeed. Because nothing was over yet.
However, before sharing the darkest hour of this production, I want to share something else in this last chapter. Something I’m not particularly proud of as an independent filmmaker aiming for the highest possible level of cinematic craftsmanship, but it’s a relevant part of the story. The list of things that weren’t executed by the book on this project is, as you can probably see, nearly infinite, and as I neared the end, things escalated to a whole new level. The filthy box of unrecommendable dirty post-production tricks (just good enough to get me through the day) got opened wide here. To pick the best example, one common issue was that the footage of the actor’s head didn’t fit into the CGI shot, either because of a difference in perspective (given how it was put together), or because it ended too early, or because it included something I wanted removed.
Footage correction using texture projection on the animated 3D model (The glitches on ears and back of the head were out of focus, hence not a priority)
In those cases I used a re-animated 3D model of the actor’s head, tracked onto his performance, and then used that model to bend the footage into whatever was needed. As you can see in the picture above, it worked mostly within the triangle defined by the eyes and mouth; the rest was more or less out of focus, and hence not a priority. And if the character was far enough away, I used an entirely CGI head instead (pictured below) to speed things up even more. Thinking about it now, it would be interesting to see how things would have turned out had I chosen to do this short completely in CGI.
Test of the digital double of actor's head used in extreme wide shots for faster integration
These shortcuts helped connect the pieces I was so desperate to connect. Instead of compromising like that, I should probably have insisted on as many reshoots as I needed, but unfortunately I was living in a completely different mindset back then.
Anyway, one might think that after all this it was finally over. But as mentioned above, there was still one moment that almost killed the entire project. And to me it was life-defining.
But before I go deeper into it: as you may have already noticed, the chapters of this article aren’t written chronologically; all of it was happening pretty much simultaneously, and the arc of every issue’s development was more or less the same. So imagine a moment where EVERYTHING got hard, and a moment where simply NONE of it worked. Simultaneously. And right around this moment of everything collapsing, another technological problem came along. This time, however, a fatal and seemingly unsolvable one.
How the hell could this happen?! I asked myself, shaken. To quickly introduce the issue: every shot needed approximately 250-300 textures (the image files that define the look of every object). I was also using a new and fast (yet still experimental) GPU rendering technology that was essential for processing every pixel of my short film. The connection between those two will become obvious soon; stay with me. To this day, GPU rendering is the only alternative to the standard, slower (yet proven) CPU rendering. I needed the GPU; without it I couldn’t finish this project, there was simply too much processing involved. I went to the bank and took out a loan for a rather expensive graphics card that was absolutely essential for the GPU rendering I was so dependent on. It seemed to work amazingly. But up to the critical moment I had been working and testing mostly on the CPU; the GPU was used only on smaller test setups, so I never noticed anything suspicious. When the time finally came to bring the shots together and use all the final assets at once, the shots naturally became that much more complex, and that was the first time I noticed something wasn’t working.
Most of the 250-300 image files I needed were simply NOT THERE. All the shots were unusable. And shortly after, I googled my way to the GPU’s hidden catch (!!!!!): an unbreakable hardware limitation (or at least it was at the time I was working on this in Blender, in 2016) of roughly 120 image files maximum per single shot, against the 250-300 I needed. I had put everything on one card, and it failed me. Simply put, I was screwed. A few sections above, I talked about keeping up production momentum; well, the momentum was nowhere in sight now. I remember that evening so vividly. When an image file is missing, Blender renders pink in its place to alert you immediately. So picture me, exhausted, sitting in silence in front of a completely pink shot. Clueless. Everything was dawning on me: Is this it? Nothing’s working anyway. Now this. Maybe I should really drop it, I thought. In that moment of pure despair, I iterated in my mind through all those all-embracing motivational quotes and speeches that promote some almighty life secret to help you overcome any type of crisis. Which was exactly what I needed.
You feel boosted and enlightened… for about five minutes. Soon enough you have to rely on the power you draw from your own motivations again, and not just for five minutes but for every day and every minute, through a continuous and meaningful series of focused steps, until you really change or finish what you set your mind to. Shortcuts, life hacks, magic pills, fix-it-all buttons, anything promising an instant solution that does it all for you, will probably never work, because they unwittingly shift your beliefs into thinking it’s not YOU who is pivotal to your own success; that you need some super-special puzzle piece to be complete, and that without it you’re no one. Which in the end only pushes you further away from yourself and, ultimately, from what you love.
To me, it’s pretty simple: I just try not to let complications poison my heart. Take this GPU problem, for instance. I just told myself: Whatever you do, don’t give up. It’s just another obstacle (even though I felt it could be the end). All I have to do now is give it another go and attack the problem from another angle. And then again. And again. And again, until I have a solution in my hands. No sleep, no food; this obstacle was the only thing on my mind.
The problem was that I was so focused on things I could do nothing about, and on my own frustration, that I completely forgot about the things I did have under control, like my attitude. The solution was staring me in the face the whole time. It seems obvious now, but it certainly wasn’t back then. In a moment of inexplicable brilliance I realized I could squeeze the information contained in those 300 original textures into just a few newly generated gigantic ones, bringing the texture count under 120. The entire process was handled automatically through custom scripts. In two days I cut my 300 regular-sized textures down to 50 gigantic ones without sacrificing anything, and it put the project, and my confidence, instantly BACK ON.
To merge my textures I used a handy little tool called ImageMagick. Together with a custom Linux bash script, I was able to stack all my textures into a few big ones. The script also generated a dynamic spreadsheet containing every texture’s name and its new position inside the gigantic merged ones. The data from that spreadsheet was then loaded into Blender using Python. Once the script had all the information, it could automatically adjust (scale and offset) every object’s UV coordinates to match the new UV sets, and with the new UVs in place, all that remained was to re-assign the file paths to the newly generated textures.
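The UV side of that pipeline boils down to a scale-and-offset per atlas cell. A simplified sketch, assuming the small textures were stacked on a regular grid (e.g. with ImageMagick along the lines of `montage tex_*.png -tile 8x -geometry +0+0 atlas.png`); the grid layout and v-axis orientation here are illustrative and depend on how the atlas is actually stacked:

```python
def remap_uv(uv, tile_index, cols, rows):
    """Map a 0-1 UV from an original texture into its cell of a cols x rows
    atlas: shrink the UV by the grid size and offset it into the right cell."""
    u, v = uv
    col, row = tile_index % cols, tile_index // cols
    return ((u + col) / cols, (v + row) / rows)

# Texture #5 in a 4x4 atlas lands in column 1, row 1:
print(remap_uv((0.5, 0.5), 5, 4, 4))   # (0.375, 0.375)
```

Run over every object’s UV loops (via Blender’s Python API, in my case), this is what let 300 textures masquerade as 50 without touching the final pixels.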
From that moment on I knew I would finish this film no matter what. The new energy got me back in the saddle, and I started finishing the pieces one by one. Suddenly every problem had a solution, and I didn’t care how dirty my hands would get, whether that meant scripting in Linux (totally my thing) or doing my own sound mix because the sound designer ditched me. Anything that had to be done to put this baby out there got done.
In 2016, Saurora was miraculously released online after two exhausting years of painful production. It got featured on several prestigious sites, such as CGBros, Blendernation, Alien Hive, 7th Matrix, SideFX and Film Shortage, whose admins I’d like to thank again for helping to get the short film out. I also received several notable awards and nominations for my work.
For many, this is just a little short film. To me, it’s much more. I’m extremely proud of Saurora, a project that will forever mark one incredible chapter of my life. It was the first step on my filmmaking journey and a hell of a leap into the unknown, a test of commitment that would break many. I survived, and proved to myself that I can be who I decided to be and that I’m ready for the next step of my journey. That has a value to me that outweighs all the film’s shortcomings. Because I know I’ll come back to it one day and reinvent this story of survival from the ground up. And when I do, it’s going to be like nothing you’ve ever seen before. Thank you for your time.
All content ©2018 Pavel Siska, unless stated otherwise.