Friday 24 December 2010

Renderfarm

As the number of elements increased over the course of this project, rendering on our laptops became increasingly slow and inefficient. Through our tutors we discovered that a renderfarm is available to students at our university. We sought assistance, and one of our tutors, Alex Hulse, kindly gave us a short demonstration of how to operate it. We immediately set up a render of all our seaforts with Jure's low-resolution textures, HDRI lighting, the ocean shader and the camera path.
By the time about 150 frames had rendered successfully on the renderfarm, Ladji had finished texturing the seaforts with higher-resolution textures, so we killed the ongoing render and submitted a new job with his textures. At this point, however, the jobs began to fail repeatedly even though everything appeared to be set up correctly.
We therefore decided to break the scene down and test the few computationally heavy elements that might have caused the failures: Global Illumination and Final Gathering with HDRI lighting, plus a separate test of the Ocean Shader itself just to be sure.


Ball Test Renderfarm from Reno Cicero on Vimeo.

This is a simple test of a ball and a plane with Blinn materials applied, lit by a spot light, to check that the renderfarm was working.


OceanShader_Renderfarm Test from Reno Cicero on Vimeo.

This is the test for the Ocean Shader just to see if the renderfarm is capable of rendering this element in a scene.


GI_FG_Renderfarm Test from Reno Cicero on Vimeo.

With the ocean in place, we lit the scene with HDRI lighting and added two reflective and refractive objects to test whether the renderfarm could handle a scene with Global Illumination and Final Gathering in it.



After a series of tests of the various rendering elements on the renderfarm, the 'Render Diagnostics' output shown above led us to suspect that the resolution of Ladji's textures was too high.


The image above shows the interface of 'Qube', the software through which we access the renderfarm.
The college does provide the Mac version of the software on its server 'Vanguard', but all our attempts to use it failed. This is why we usually worked in the prototyping room on the seventh floor, where a dozen PCs with the software installed are located. However, those computers are old and slow, and there were frequent problems connecting to the college network, which added a great deal of frustration and time to the rendering process.


As a result, we decided to render the scene with the low-resolution textures Jure had made for the seaforts earlier, importing all the other scene elements into his seafort scene file. The render completed successfully and helped us tremendously in improving the scene, especially the camera path. It finished within a day, which shows how efficient the renderfarm is when it is working correctly.


Second Test Render from Reno Cicero on Vimeo.

After the first render, Reno added fog and clouds to the scene and changed the colour of the ocean, aiming for an atmospheric look more appropriate to our theme of decay. We also improved the camera path with a few more interesting shots and angles, and I experimented further with the scale of 'Global Illumination' and 'Final Gathering' to improve the lighting.
When the render came out we felt the colour of the ocean was too red. The lighting was brighter than we intended, which in turn changed the colour of the clouds, since they respond to the lighting. The fog looked a little too flickery, but as that was caused by the camera movement we considered it unavoidable.
At this point we also realised the resolution of the textures we had used for the seaforts was too low, so the models looked rough and unrealistic whenever the camera approached them.

Although the second render was a success, it was at this point that the renderfarm's processors started to fail one by one. The render time increased enormously, and the job eventually took about four days to complete. Reno and I spent a lot of time trying to work out what was happening, attempting to connect to the renderfarm from our laptops as well as from the PCs on the seventh floor; both attempts failed because of our limited knowledge of the system. We then spoke to some of our tutors and realised the problems were on the technical side of the renderfarm itself, so they recommended we file a report through JIRA so the technicians could see the issues officially and deal with them.



We learned to set up render layers to give ourselves the maximum level of control over colour grading and compositing of the scene (a minimal layer-setup sketch follows below). When Simon from IT finally came back from holiday, he restarted all the processors and updated some firmware to keep the renderfarm up and running. We then happily submitted our scene. Having stayed in college all night making sure everything ran smoothly, we were convinced we were going to get our first scene rendered out nicely in layers. However, we were wrong.
When we came back the next morning, what awaited us was a failed render that had run for approximately seven hours. That was when we realised how sensitive the relationship between the settings inside Maya and the settings on the renderfarm is.
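As a rough illustration of the kind of layer setup we used, here is a minimal Maya Python sketch. The object and layer names are placeholders standing in for our actual scene elements, not the exact script we ran.

```python
import maya.cmds as cmds

# Placeholder object names standing in for our actual scene elements.
ocean = 'oceanPlane'
clouds_front = 'cloudsFront_grp'
clouds_back = 'cloudsBack_grp'
seaforts = ['seafort_01_grp', 'seafort_02_grp', 'seafort_03_grp']

# One render layer per element so each can be graded separately in compositing.
cmds.createRenderLayer(ocean, name='ocean_layer', makeCurrent=False)
cmds.createRenderLayer(clouds_front, name='cloudsFront_layer', makeCurrent=False)
cmds.createRenderLayer(clouds_back, name='cloudsBack_layer', makeCurrent=False)
for fort in seaforts:
    cmds.createRenderLayer(fort, name=fort + '_layer', makeCurrent=False)

# Extra objects can still be added to an existing layer afterwards if needed.
cmds.editRenderLayerMembers('ocean_layer', 'fogVolume_grp', noRecurse=False)
```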

Thursday 9 December 2010

Render Passes/Layers

Achieving Realism and Depth using Render Layers in Maya - CG Tuts


Depth pass rendered out from Maya


Depth pass brought into After Effects


Ambient Occlusion pass brought into After Effects


Colour/Beauty pass brought into After Effects

The images above are from CG Tuts, which describes in detail how to bring the separated render passes from Maya back together in After Effects. This gave me a clear understanding of how to assemble the passes efficiently, which avoids time-consuming re-renders from Maya and also increases the quality of the final output.



The image above is an example used by Alex Alvarez to demonstrate the meaning and use of various render passes.


The image above demonstrates the tremendous level of control gained over the scene at the compositing stage by rendering each scene element out separately as render passes.

This website provides a detailed explanation of several commonly used render passes, such as Diffuse and Specular, as well as how to separate the various elements in a scene so that each can be colour-graded and edited individually for maximum control.

The images below are the render layers for frame 290 of our submission piece, rendered on the renderfarm.

Clouds in the front

Clouds in the back

Ocean

Fog

Seafort 01

Seafort 02

Seafort 03

Seafort 04

Seafort 05

Seafort 06

Seafort 07

OcclusionPass

AlphaPass

ReflectionPass

ShadowPass

zDepthPass




The images above show a test I did in Photoshop to figure out how to use the Z-depth information generated from Maya to apply depth-of-field effects to 2D images.
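The Photoshop test itself was done by hand, but the same idea can be sketched in a few lines of Python (assuming numpy, Pillow and SciPy are available; the file names and the blur/focus values below are made up for illustration): the Z-depth pass drives a blend between the sharp beauty render and a blurred copy of it.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# Placeholder file names for the beauty render and its Z-depth pass.
beauty = np.asarray(Image.open('frame290_beauty.png'), dtype=np.float32) / 255.0
depth = np.asarray(Image.open('frame290_zdepth.png').convert('L'), dtype=np.float32) / 255.0

# Pre-blur a copy of the beauty render, one channel at a time.
blurred = np.dstack([gaussian_filter(beauty[..., c], sigma=6) for c in range(beauty.shape[2])])

focus = 0.3    # depth value that should stay sharp (white = near or far, depending on the pass)
spread = 0.25  # how quickly the blur ramps up away from the focal plane
blur_amount = np.clip(np.abs(depth - focus) / spread, 0.0, 1.0)[..., None]

# Blend sharp and blurred copies according to distance from the focal plane.
dof = beauty * (1.0 - blur_amount) + blurred * blur_amount
Image.fromarray((dof * 255).astype(np.uint8)).save('frame290_dof.png')
```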


I composited all the layers together in After Effects, as shown in the image above. Because the texture resolution was so high, we had to render each seafort out separately; this, however, created a problem of the seaforts overlapping incorrectly whenever the camera rotated around them, since each was rendered out only with its own alpha channel. A single alpha mask containing all the seaforts would not solve this, so I went back to Maya, created an alpha mask layer for each seafort and rendered them out on the renderfarm, which worked. The only remaining flaw occurs when the camera rotates to the exact opposite direction from its original path, where the masks work the wrong way round and the seaforts overlap. We only discovered this after rendering, and unfortunately there was no time for another render. Apart from that, everything else worked very well for the first scene.
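For reference, a matte layer of that kind can be set up roughly as in the following Maya Python sketch. The object names are placeholders, and it relies on the legacy render-layer behaviour where material assignments made while a layer is current apply only in that layer; it is a simplified illustration rather than the exact setup we used.

```python
import maya.cmds as cmds

seaforts = ['seafort_01_grp', 'seafort_02_grp', 'seafort_03_grp']  # placeholder names

def flat_shader(name, value):
    """A constant surfaceShader (white or black) used purely for mattes."""
    shader = cmds.shadingNode('surfaceShader', asShader=True, name=name)
    cmds.setAttr(shader + '.outColor', value, value, value, type='double3')
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=name + 'SG')
    cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader', force=True)
    return sg

white_sg = flat_shader('matteWhite', 1.0)
black_sg = flat_shader('matteBlack', 0.0)

for fort in seaforts:
    # Each matte layer contains every fort, so the others still hold out the current one.
    cmds.createRenderLayer(seaforts, name=fort + '_matte', makeCurrent=True)
    cmds.sets(fort, edit=True, forceElement=white_sg)  # this fort renders white
    for other in seaforts:
        if other != fort:
            cmds.sets(other, edit=True, forceElement=black_sg)  # all others render black
```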


For the intro, Jure modelled the TV and Ladji textured it. I then replaced the texture on the TV screen plane with a green Lambert shader so we could easily key out the screen in After Effects, place the footage of our first shot behind it, and use After Effects' 'Track Motion' feature to keep the main comp locked to the TV.
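A green screen material like that takes only a couple of lines to assign; the sketch below is a hypothetical Maya Python version (the geometry name is a placeholder), not the exact steps taken in the scene file.

```python
import maya.cmds as cmds

# Pure green lambert so the TV screen is trivial to key out in compositing.
green = cmds.shadingNode('lambert', asShader=True, name='tvScreen_green')
cmds.setAttr(green + '.color', 0.0, 1.0, 0.0, type='double3')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='tvScreen_greenSG')
cmds.connectAttr(green + '.outColor', sg + '.surfaceShader', force=True)

cmds.sets('tvScreen_geo', edit=True, forceElement=sg)  # 'tvScreen_geo' is a placeholder name
```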


When the camera passes through the main seafort window, we had planned to extend the camera path into the room and render it out separately. Again, we didn't have enough time because of all the problems that occurred earlier, so I rendered an alpha mask from the last frame of the shot and used it to composite the renders of the room behind it.
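Conceptually this composite is just the standard 'over' operation: wherever the seafort frame's alpha is empty, the room render shows through. A minimal Python/numpy sketch of that idea (file names are placeholders) looks like this:

```python
import numpy as np
from PIL import Image

# Last seafort frame (with alpha) over the separately rendered room.
fg = np.asarray(Image.open('lastframe_seafort.png').convert('RGBA'), dtype=np.float32) / 255.0
bg = np.asarray(Image.open('room_render.png').convert('RGBA'), dtype=np.float32) / 255.0

alpha = fg[..., 3:4]
out = fg[..., :3] * alpha + bg[..., :3] * (1.0 - alpha)  # standard "over" operation
Image.fromarray((out * 255).astype(np.uint8)).save('composite.png')
```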


After all the compositing in After Effects, I exported everything and edited it in Final Cut Pro together with the soundtrack and sound effects.

Saturday 27 November 2010

Camera and Cinematography

Camera Terminology

Camera work is crucial to filmmaking. The audience's reading of a scene varies tremendously with the composition, camera angles and movement adopted, because these directly affect how they see the characters and events taking place and hence how they perceive the story. On a subconscious level, this shapes the mood and heightens the emotion portrayed in the scene.
Below is a list of camera terms widely used in the film industry to refer to different kinds of shots and camera moves.


Long shot:
Often used as an establishing shot, possibly to convey isolation or epic scale. (From head to foot)

Mid shot:
Shows part of the subject in more detail whilst still giving an impression of the whole subject. (From roughly the waist up)

Medium close up:
Half way between a MS and a CU. (From the chest or shoulders up)

Close up:
A certain feature or part of the subject takes up the whole frame – typically the head and shoulders. A conversational shot, probably important to the story.

Extreme close up:
A small detail, such as the eyes or mouth, fills the frame, conveying extreme emotion – possibly anger, despair or joy. Sometimes used in action sequences to heighten intensity.


Two-Shot:
A comfortable shot of two people, framed similarly to a mid shot.

Wide shot:
The subject takes up the full frame, or at least as much as possible. The same as a long shot.

Over-the-shoulder shot (OSS):
Looking from behind a person at the subject.

Noddy shot:
Usually refers to a shot of the interviewer listening and reacting to the subject, although noddies can be used in drama and other situations.

Point-of-view shot (POV):
Shows a view from the subject's perspective.

Weather shot:
The subject is the weather, usually the sky. Can be used for other purposes.

Very wide shot:
The subject is visible (barely), but the emphasis is still on placing her in her environment.

Extreme wide shot:
The view is so far from the subject that she isn't even visible. This is often used as an establishing shot.

High Angle:
The camera is raised above the subject; the degree of the angle changes the emotional message of the shot.

Low Angle:
The camera is low to the floor looking slightly up. Again, the degree of the angle changes the emotional message of the shot. These shots tend to convey fear or make for dynamic action shots; the wider the lens, the more angular the shapes and therefore the more tense and dynamic the result.

Track and Pan:
An animation term. Used to convey the importance of an area or object in the shot, or to define a general direction the film-maker wants us to follow.

Pan (in live action this kind of move is done with a 'dolly'):
Another move that takes the audience where we want them to look. It can often reveal more of the environment and tell us more about the people in it, or simply suggest oncoming danger – or happiness!

Whip Pan:
A fast pan usually following the main subjects of the film. Will convey action and excitement.

Fade:
A gradual transition to (or from) black.

Flash Frame:
An image shown for just a few frames to convey urgency or fear.

Cross Cutting:
Cutting between two sequences to establish a relationship between them.

Cutaway (CA):
A shot of something other than the current action.

Cut-In:
Shows some part of the subject in detail.

Depth of Field:
The range of distances from the camera within which objects appear acceptably sharp; controlled largely by how far open the lens aperture is.

Large Depth of Field:
More objects in focus at one time (f/16)

Shallow depth of Field:
Limited area in focus – maybe just the foreground objects.

Deep Focus:
All objects are in focus – both foreground and background (f/8)

Racking Focus (Pull Focus):
Changing focus from one object to another to highlight the part of the shot that is important to the story.

Crossing the Line:
The convention that the camera can be moved anywhere in a shot as long as it stays on one side of the action.
If you don't follow this rule you often reverse the angle in consecutive shots and confuse the audience.

Dolly Zoom (Vertigo Shot):
Created by simultaneously zooming in and tracking backward (or the reverse); the subject stays the same size in frame while the background appears to expand or recede, producing a disorienting effect.
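For a simple pinhole-camera model the relationship behind this shot is easy to state: the subject's on-screen size is proportional to focal length divided by distance, so keeping it constant means the focal length must scale with the distance. A small Python sketch of that relation (the numbers are only an example):

```python
def dolly_zoom_focal(f0_mm, d0, d):
    """Focal length needed at distance d so the subject stays the same size
    on screen as it was at distance d0 with focal length f0_mm
    (pinhole model: on-screen size ~ focal_length / distance)."""
    return f0_mm * (d / d0)

# Starting at 35 mm and 5 m from the subject, tracking back to 10 m
# needs roughly a 70 mm focal length to keep the subject the same size.
print(dolly_zoom_focal(35.0, 5.0, 10.0))  # -> 70.0
```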



Using this tutorial, Reno and I created a motion path together and had our camera move along it while we key-framed the camera's aim to obtain different shots. Given our theme of slow decay and the post-war context, we felt it would be effective to have only one continuous shot throughout the entire scene.
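In Maya terms the setup can be sketched roughly as below (Python, with placeholder names for the curve, camera positions and frame range; the real scene was set up through the UI rather than scripted).

```python
import maya.cmds as cmds

camera, camera_shape = cmds.camera(name='shotCam')     # placeholder camera
aim = cmds.spaceLocator(name='shotCam_aim')[0]         # locator the camera looks at
cmds.aimConstraint(aim, camera, aimVector=(0, 0, -1), upVector=(0, 1, 0))

# Attach the camera to the path curve and travel it over the shot's frame range.
cmds.pathAnimation(camera, curve='cameraPath_crv', fractionMode=True,
                   follow=False, startTimeU=1, endTimeU=900)

# Key the aim locator at a few frames to change what the camera looks at.
for frame, position in [(1, (0, 5, 0)), (450, (30, 8, -20)), (900, (60, 10, -40))]:
    cmds.currentTime(frame)
    cmds.xform(aim, worldSpace=True, translation=position)
    cmds.setKeyframe(aim, attribute='translate')
```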


The image above shows the first camera path Reno and I created. We adjusted the curve so that the camera views the seaforts from various angles and distances.


First Test Render from Reno Cicero on Vimeo.


Second Test Render from Reno Cicero on Vimeo.


This is the final camera path, adjusted according to what we learned from the first and second renderfarm renders.

Monday 22 November 2010

HDRI Lighting and Global Illumination

Render of the seaforts fully lit.

To create a realistic shadow that mimics the position of the sun, I placed a 'point light' in the scene directly in front of the sun on the 'skydome'. To keep render times down, I also placed a 'spot light' in roughly the same area to act as the photon emitter for global illumination, and reduced its intensity to zero so it doesn't add any unwanted light to the scene.
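As an illustration of that setup, here is a hedged Maya Python sketch: the positions are placeholders, and the mental ray photon attributes ('emitPhotons', 'photonIntensity') are the attribute names as I understand them, so they may need checking against your Maya version.

```python
import maya.cmds as cmds

sun_position = (500, 300, -800)  # placeholder: roughly where the sun sits on the skydome

# Point light in front of the sun for the visible key light and shadows.
key = cmds.pointLight(name='sun_key')
cmds.setAttr(key + '.useRayTraceShadows', 1)
cmds.xform(cmds.listRelatives(key, parent=True)[0], worldSpace=True, translation=sun_position)

# Spot light in the same area used purely as a photon emitter for GI;
# intensity 0 so it adds no direct light of its own.
emitter = cmds.spotLight(name='gi_emitter', intensity=0)
cmds.xform(cmds.listRelatives(emitter, parent=True)[0], worldSpace=True, translation=sun_position)

# Assumed mental ray attribute names for photon emission on the light shape.
cmds.setAttr(emitter + '.emitPhotons', 1)
cmds.setAttr(emitter + '.photonIntensity', 8000)
```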

With the help of some third-year students we finally settled on the HDR image we wanted to use for our render, shown in the images above. We then played around with the settings a bit more, trying to get the best render result in the minimum render time.


Through tutorials, we discovered various techniques for rendering efficiently in mental ray. One example is saving a 'photon map' (image above), inspected with the 'Map Visualizer', which lets Maya re-use the photons generated in previous renders. This means Maya can skip recalculating the photons on every render and so reduces render time, as long as the relevant settings stay the same.
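The settings involved live on the mental ray render globals. The sketch below shows the idea in Maya Python, but the attribute names on 'miDefaultOptions' are my assumption based on the Render Settings labels and should be checked in the Attribute Editor for your version.

```python
import maya.cmds as cmds

# Assumed attribute names on the mental ray render globals node ('miDefaultOptions');
# they correspond to the 'Photon Map File', 'Rebuild Photon Map' and
# 'Enable Map Visualizer' controls in the Render Settings window.
cmds.setAttr('miDefaultOptions.photonMapFilename', 'seaforts_photons', type='string')
cmds.setAttr('miDefaultOptions.photonMapRebuild', 0)     # re-use the saved map on later renders
cmds.setAttr('miDefaultOptions.photonMapVisualizer', 1)  # show the stored photons in the viewport
```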


We also experimented with a few other HDR images we could find, to give different looks and atmospheres.


We then lit the scene with the HDR image of the sky we wanted to use, but the lighting this image produced felt less effective than the previous one.


Despite the nice-looking renders, we felt the colours of the sky in this HDR image were too mellow and did not achieve the kind of atmosphere we were after. So we replaced the visible sky with another image that felt closer to our goal and turned off the 'Primary Visibility' of the HDR image in its render stats, so that it still lights the scene but is no longer seen directly. As a result the lighting of the scene and the colours of the sky do not quite match, as the image above shows.
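Turning off that render stat is a one-line change; here is a sketch in Maya Python, where the node name is a placeholder for whichever node carries the HDR image (the IBL shape or a textured sky dome).

```python
import maya.cmds as cmds

# The HDR environment still lights the scene but is no longer seen by the camera.
cmds.setAttr('hdrSkyDomeShape.primaryVisibility', 0)  # placeholder node name
```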



Both images above were lit with the HDR image shown below. Thanks to the reflected light and colour, the scene already looks quite good with only a few simple settings. However, image-based lighting requires mental ray and 'Final Gathering' to be enabled, so the render time increases significantly depending on the settings.


HDR Maps - 3DTeachers.com


I came across the image above on the Internet while researching HDRI. It is a good example of the kind of HDR image we might want to use for the lighting inside the seafort. However, it is very difficult to find an HDR image as specific as what we need – a military office inside a seafort – so direct lighting in Maya remains the preferred option for now.

Information from these links, a tutorial on HDRI and demonstrations from a couple of third-year students gave me a good understanding of HDRI and how it could be used in this project.
HDRI stands for High Dynamic Range Image: an image containing a huge range of luminance data, generated by photographing the same scene at multiple exposures and combining the shots into a single image. Compared to a normal 8-bit image such as a .jpg or .tiff, an HDR image stores 32 bits of colour information per channel. This extra information allows 3D programs to closely mimic how light behaves in real life and hence generate photo-realistic renders.
If we could find an HDR image captured at dusk, which matches our intended weather conditions, it would make lighting the first scene with all the seaforts far easier. However, it is still crucial to experiment with direct lighting and global illumination in Maya, since the other side of the scene is meant to be stormy.


Photographing this kind of light probe (a mirrored ball) is one method of creating an HDR image. The benefit is that its reflective surface captures a very wide view of the scene. However, there are problems: the photographer's reflection and the base of the tripod have to be removed before the image can be used, and scratches and imperfections in the surface also affect image quality.
A fish-eye lens can be used to achieve a similar effect, though it does not capture as much of the scene as a light probe. It does, however, avoid capturing the photographer and the tripod base in the image, as well as the distortion caused by the light probe's surface.
For our project we are going to try and find a pre-made HDR image for our scene.


Poolball Texturing Interview (Wide3D.no)


The image above has been unwrapped into a flat panorama from a spherical image captured with a light probe. Maya wraps this image around a dome and projects light into the scene according to the positions of the light sources and the exposure data stored in the image.


The image above compares a pool ball lit with and without an HDR image. The photo-realism achieved through HDRI lighting demonstrates why it is so widely used for compositing CGI with live-action footage.


from Max-Realms.com

The image above shows a computer-generated character from the game Half-Life 2, composited convincingly into a real photograph through the use of HDRI: the HDR image captures the scene's lighting realistically, and that lighting is then used on the CG character.

Wednesday 17 November 2010

Room Setup

This is the room setup we designed from the objects we found during research. We have picture references for every object in the room we chose to use, including even the structure of the room itself, such as the beams and walls.


The image above is the layout of the room, designed around the objects we found and decided to place in the scene. Each object has at least one high-resolution real-life photo as reference.


The image above is the layout of all the small objects we decided to put on the main desk. As with everything else in the room, we have at least one high-resolution real-life photo as reference for each.


The image above is the layout of the structures on the ceiling. These metal pipes and plates play a big part in showing the process of decay.