Concluding my Project
This project has proved to me that I am capable of creating 3D physics-based simulations in more than one piece of software. Having already done a previous project that involved the use of blogposts, I was able to refer back to it whenever I doubted any of the work I was in the process of doing. This is why I believe evidencing my work through blogposts is beneficial not only to me, but to others who wish to learn. At the beginning of this project I had the intention of creating a live-action scene that involved the SFX I was learning. However, as the scope of Houdini was larger than anticipated, I only managed to composite backgrounds in Blender. As much as this developed more skills, a step I want to consider taking in the future is live-action development.
Experimenting with Smoke in Houdini
As I wanted to expand my knowledge of Houdini, I decided to go back into the software to create a smoke simulation. For this experiment I did not follow any tutorials, as I wanted to challenge my current knowledge and see whether what I have learnt has improved the way I create.
The first step of this experiment was to add a sphere object into the scene view.
After this, I applied the shelf tool called ‘billowy smoke’ to the object, which created a domain and turned the object into a ball of smoke.
As I wanted my smoke to look more natural, I turned off 'clamp to maximum', as this stops the domain from restricting the smoke as it rises and gives a freer effect.
The following step I took was adding a distant light, as it adds dynamics to the smoke, automatically making it look more natural and also giving further depth to the simulation.
The final step was to go into the material palette and apply 'basic smoke' to the object. This may not have been a necessary step, as it didn't change much of the smoke, however it created a more 'wispy' effect.
Frames Rendered: 200
Render Time: 25 minutes
Final Smoke Output:
This experiment was simpler than I thought it would be. As I challenged myself not to follow a tutorial, I feel as though my skills have been consolidated and I am beginning to be more confident with using Houdini. If I were to work further on this experiment, I would create a 'wafting' plane object to see how the smoke reacts.
Second Compositing Experiment
I decided I wanted to do a second experiment compositing a background in Blender, but this time I did not want to rely on tutorials or other guidance, to see whether I had learnt my skills thoroughly.
I wanted this experiment to use soft body physics, as it adds a larger dynamic to my project.
Time-lapse of compositing background:
Render Frames: 120
Render Time: 2 hours
Final Output for Soft Body with Background:
Switching to Blender for Compositing Experiment
As I have created many experiments in Houdini, I felt it was appropriate to return to Blender so that I could experiment with compositing a background behind moving objects and making the objects look like they fit into that environment by adjusting the lighting. This project was originally going to show live-action backgrounds; however, because the scope of learning Houdini had become so large, I decided that using still images would be more time-efficient.
The outcome of this experiment will hopefully be three cubes falling onto the ground of an underground car park, and it will demonstrate the skills I have already learnt in Blender while developing my compositing knowledge.
The first step of this experiment was to download an HDRI of the background image, choosing a section of the HDRI that is going to be seen in this view. After selecting the camera, the following step was to enable the background image to insert the car park into the view by selecting 'add image' (highlighted in the screenshot above) and turning the opacity of the image from 50% to 100% to create the full effect.
After that, I selected camera view, and locked the camera to view so that I could realign the grid in order for the cube to look like it is on the ground of the carpark. Above is the before and after of this step.
I switched the render engine from 'Blender Render' to 'Cycles Render' to allow the simulation to render more naturally. I then added a plane to the scene, scaled it up and enabled 'Shadow Catcher'.
The image above shows the shadow catcher effect on the object, which makes it look more natural in the environment. To see this, look in the rendered view: the cube now has a shadow, which makes it more realistic.
I then went into the active data settings and enabled transparency, to see the background in the render view.
The next step is to load in the HDRI image to light the scene appropriately (image above). To do this, go to the world settings, select 'use nodes', open the drop-down for the 'colour' setting and select 'Environment Texture', which will then take you to your files to locate the correct HDRI. I then changed the strength of the colour to make it more prominent.
To make the cube look more natural in the environment, I changed the colour by making it more grey and dingy looking.
Above shows me adapting the size and shape of the lamp. I wanted it to be directly above the cube, because this way the lighting looks more organic, as it gives the illusion that it is coming from the light in the image.
From past projects, I remembered how to change the physics of the objects. I selected the cube and made it an active rigid body, but made the plane a passive rigid body, as this way when the cube falls it will collide with the plane that acts like the ground. (image above)
The image above shows me experimenting with different sizes of objects and also changing the lighting settings of the ground plane, to make the cubes fit more naturally into the environment. I colour-picked from both the floor and the light in this image.
Final Output of Compositing Rigid Body:
Frames Rendered: 220
Render Time: 2 hours
Something I am unhappy with in this experiment is the level of noise it has. Without denoising, the objects look grainy and unnatural; therefore, in my next experiment I will turn denoising on.
Pyro Explosion Experiment
The following experiment that I want to test is creating a pyro explosion. As I have done previous experiments involving fluid, particles and soft body materials, I feel this is the appropriate next step.
https://www.youtube.com/watch?v=fTIeGob0Wuo
Although this tutorial dealt more with shaders, it is still an important step towards learning how to make something look more realistic. Recreating this was difficult, as I found the tutorial hard to follow because it only showed key words at the bottom of the video. Having said this, it also allowed me to be more independent with the image and create something that I feel is more of a reflection of my style of creativity.
The first step was to add a sphere into the viewport and, following this, add a mountain node. I've learnt from previous experiments that adding a mountain node allows the object to look more organic, and as I will be creating a cloud of smoke, it will look more natural if it isn't perfectly round.
The following step was to apply a pre-made explosion node located under the PYRO FX tab (top right). Adding nodes such as a vortex will make the smoke look more organic, and changing the levels of confinement will adapt the size the explosion can reach. The main reason for this step was to change the values of some nodes in order to produce a better rendered outcome.
When recreating this scene, I noticed that my smoke did not look as thick/natural as the one in the tutorial. This may have been down to the different versions of Houdini, but having read some of the comments below the video, I found a solution. By adding a 'Volume Visualisation' node inside the 'pyro_import' object, I was able to change the density and ultimately create a more natural-looking explosion.
The next step was to add a camera, though I didn't adjust the resolution like the tutorial, as I was comfortable with where it had been positioned. After this, the main step in creating a good explosion is to add shaders and change the lighting. I added two 'distant light' objects to the scene: one was kept white to create more shadow and the other was changed to blue. This made the scene more natural.
The image above is what the simulation looked like in the rendered viewport. In comparison to the image in the tutorial, this does not look as effective. As I was not satisfied with the outcome, I decided to experiment further and managed to find a tutorial more suited to my version of Houdini.
Although this explosion looks less organic, hopefully by the end of this experiment I will have created something that I am more confident with, as it still creates the plume of smoke which I want to reflect in my work.
The first few steps were similar to previous experiments, as I had to create a sphere object and add a mountain node in order to make it look like the image above. Changing the rows and columns to 100 allows this object to have more edges when attributes are changed further into the experiment. Before adding the mountain node, I changed the values of the sphere by adjusting the scale in the left-hand panel to make the sphere flatter.
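As a rough illustration of why the point count matters, the sketch below (plain Python, not Houdini code) displaces points around a 2D circle along their normals with a toy noise function, which is conceptually what the mountain node does to the sphere: a denser mesh simply has more points for the displacement to show up on. The `toy_noise` function here is my own stand-in, not the node's actual fractal noise.

```python
import math

def toy_noise(angle, height=0.2):
    # stand-in for the mountain node's noise; any bumpy function works
    return height * math.sin(5.0 * angle)

def displaced_circle(num_points, radius=1.0):
    """Push each point on a circle outward along its normal by noise."""
    points = []
    for i in range(num_points):
        a = 2.0 * math.pi * i / num_points
        r = radius + toy_noise(a)  # displacement along the point normal
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# More points = more edges for the displacement to show up on:
print(len(displaced_circle(10)), len(displaced_circle(100)))  # 10 100
```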
The next step was to play around with the attributes in the mountain node (element size and height) in order to create the image above.
After this, I added the PYRO FX presetting ‘explosion’ which created the smoke you can see above. The next few steps were just adjusting the attributes in each node, below is a time-lapse of these small changes which ultimately created the final render that will follow.
Editing the pyro nodes:
Pyro Explosion Output:
Frames Rendered: 180
Render Time: 2 hours
Rendering Settings
Houdini's default renderer is Mantra. As Mantra is a SideFX product, it works alongside Houdini easily. This being said, after researching the best renderer for my outputs, I found that many people also use Redshift. After accumulating information from a few sources:
- https://forums.odforce.net/topic/26640-mantra-vs-redshift/
- https://www.reddit.com/r/vfx/comments/963bmi/what_renderer_do_you_use_for_houdini/
- http://www.sidefx.com/docs/houdini/render/render.html
I narrowed down the best features of both render settings.
Redshift:
- GPU rendering, which is faster
- Supports shaders and textures
- Works in Maya, Houdini, Cinema 4D and Katana
Mantra:
- Supports volume and particles (i.e. pyro and fluid)
- A SideFX product
- Houdini is VEX-based, and Mantra shaders are written in VEX
Although Redshift's GPU rendering is faster and it works in a few different packages, Mantra is a SideFX product, which leads me to believe it is a more reliable and efficient way of rendering my outputs; therefore I am going to continue to render with Mantra.
The settings I rendered the disintegration effect in were effective enough to show the experiment clearly – the only thing I will change is the output format, as this could have been a reason for the initial malfunctions. Keeping this in mind for future experiments, I will change the output format to PNG, as this works better on my laptop and makes it easier to render out video sequences for my work.
I created a Render Preset in Houdini that I feel works best for these experiments. The reason I did this is to make rendering easier and to give myself a reference to my previous render settings for other experiments. If necessary, I can adapt the settings to suit a particular experiment, but for now it creates a base setting that I am comfortable with.
Image of preset render setting:
The images above could change depending on the simulation; however, this is a good base for my experiments to work from, as I know the PNG format is more accommodating for rendering the output.
Something I have learnt from an online forum (https://forums.odforce.net/topic/26689-houdini-fx-how-to-render-out-correctly/) is that an important factor in rendering out an image sequence is to add '$F4.png' to the file name (or .exr, depending on the type of image you have), as this gives you a separate file for each frame, and in this case provides a sequence of separate images that can be put into a video sequence in Premiere Pro. When I originally rendered out the jellies experiment, this wasn't added at the end, which meant I was overwriting the same image at the end of each frame rendered.
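As a quick sanity check of what '$F4' does, here is a small Python sketch of the zero-padded file names Houdini writes per frame. The `frame_filename` helper is mine, purely for illustration:

```python
def frame_filename(basename: str, frame: int, ext: str = "png") -> str:
    """Mimic Houdini's $F4 token: the frame number zero-padded to 4 digits.

    Without the padded frame number, every frame would be written to the
    same file name, overwriting the previous one.
    """
    return f"{basename}.{frame:04d}.{ext}"

names = [frame_filename("smoke", f) for f in (1, 48, 200)]
print(names)  # ['smoke.0001.png', 'smoke.0048.png', 'smoke.0200.png']
```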
Disintegration Effect in Houdini
The first couple of steps were similar to previous experiments, as I had to create a geometry node with a sphere node inside. I've already started using new skills from recently learning Houdini's interface through Lynda.com, i.e. using the keyboard for quick transitions in and out of nodes.
The image above is the beginning of the process: me adding the geometry node, using 'i' on the keyboard to go inside the geometry and add a sphere. Adding a box node into the scene will be what allows me to dissolve the sphere.
After changing the primitive type of both the sphere and the box to polygon mesh, I added a boolean node. Before this tutorial I didn't know what a boolean was; however, I now understand that I have to use the boolean in order to remove the geometry.
After adding a transform node and placing it between the box and the boolean, I learnt how to add keyframes to the scene. The image above shows me adding a keyframe so that the sphere disappears after two seconds (frame 48). I did this by right-clicking in the Y axis box and selecting add keyframe.
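For reference, a keyed channel like this interpolates the value between keyframes over time. The sketch below is plain Python, assuming Houdini's default 24 fps and simple linear interpolation (Houdini's actual channels can use other interpolation types); it shows why frame 48 corresponds to two seconds:

```python
FPS = 24  # Houdini's default frame rate

def lerp_key(frame, f0, v0, f1, v1):
    """Linearly interpolate a channel value between two keyframes."""
    if frame <= f0:
        return v0
    if frame >= f1:
        return v1
    t = (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0)

# Box translate-Y keyed from 0 at frame 1 to 2 units at frame 48:
print(48 / FPS)  # frame 48 is 2.0 seconds into the animation
print(lerp_key(24, 1, 0.0, 48, 2.0))  # value roughly halfway through
```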
I added a blast node and connected it to the boolean, which means that when I create an 'a inside b' group in the boolean and apply it to the blast node, it creates the image you can see in the scene above. This geometry is where the particles will be created from.
The following step was to merge the object together with the OUT_GEO node (image above); this way, the object that dissolves runs as a whole rather than as separate entities. When the object was dissolving, it was very smooth and linear, which doesn't look natural for a dissolving object. Therefore, the image below shows what the object looks like when a 'mountain' node is added in between the transform and the boolean.
In the image above you can see that the sphere isn't smooth around the edges. This is due to the box not being large enough, which ultimately interferes with the sphere; to change this I went into the transform node and made the box slightly larger.
The two images above show me changing the way the particles move in the scene. The first image shows the original particles moving with the gravity force at approximately -9, and the second image shows the force changed to 1. It was originally necessary to add a ground plane so the particles had somewhere to land, but when I changed the gravity, I could remove the ground plane.
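To illustrate the difference the gravity value makes, here is a minimal Python sketch (not Houdini's solver) stepping one particle's height with basic Euler integration under a force of -9 versus 1:

```python
def simulate_height(force, steps=48, dt=1.0 / 24.0):
    """Euler-integrate one particle's height over `steps` frames."""
    y, vy = 0.0, 0.0
    for _ in range(steps):
        vy += force * dt  # force accelerates the particle each frame
        y += vy * dt
    return y

print(simulate_height(-9.0))  # negative: falls well below the emitter
print(simulate_height(1.0))   # positive: drifts slowly upward instead
```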
I added a POPAxisForce node, as this allowed me to change the speed at which the particles orbit around the object, creating a more organic-looking set of particles.
As I started getting into editing the particle nodes, I realised it was easier to screen record my progress, as that way I could concentrate on the work instead of screenshotting and explaining the progress afterwards. In the video, you can see me adding a colour ramp to change the colour of the smoke emitting from the object, and a bind node that allows the other nodes to coincide with the multiply node, which fuses the other nodes together.
Time lapse of developing my particles:
Adding principled shaders to both the particles and the sphere allows the colours to be viewed in the rendered view. I then added a principled shader to the ground plane so that my final output would have a more professional look.
Final Disintegration Output:
This particular experiment took much longer than I had anticipated. I believe that was down to the process of editing the particle nodes, as there were many different elements that needed adjusting in order to make it look organic. This being said, I feel as though this was a good step into learning particles similar to dust/smoke, as this is another experiment I want to take forward in future, and I have therefore ultimately developed more skills.
Initial Trouble with rendering video sequence after output left Houdini:
I had no trouble rendering out the 'Disintegration effect' from Houdini; however, when I went to put it into a video sequence and export it, the file kept crashing. Initially I thought it was the fact that my laptop wasn't advanced enough to export such a large file, so I attempted to render it out on a few other platforms; again, each output failed. After explaining to my tutor what was happening, we decided to see if it would render out the video sequence in Blender. Originally, Blender kept crashing too. However, we experimented with changing the settings in Blender, which eventually allowed the sequence to render out as a video.
Learning the Houdini Interface
As I have begun the process of learning a few simulations (the jelly and water drop) through YouTube tutorials, I am becoming more familiar with using the software's interface as I go along. This being said, I felt that in order to create more intricate simulations, I wanted to be more confident when navigating the interface. To increase my knowledge of this, I went back to the 'Lynda.com' website, which provides information on this aspect (and more).
https://www.lynda.com/Houdini-tutorials/Navigation/571627/629709-4.html?autoplay=true
Below is a time-lapsed video that I uploaded to YouTube, which evidences me learning the interface through Lynda. When I get stuck, I can refer back to this video, or to Lynda.com's interface section, to keep working efficiently.
The headings that make up this section of the interface contents are:
Navigation: explores the different ways to move around in the viewport
Viewport and display modes: shows the different display types for the geometry in the viewport
Panes: how to expand the different sections, i.e. making the node section more visible and removing unnecessary tabs
Desktops: how to save different desktops in the software
Preferences: how you wish to work and what you can adapt in the software depending on your project
Display options and visualisers: explores scene customisation and texture settings
Global animation options: setting up the overall parameters of the scene
Nomenclature: covers the key words and names in the software, i.e. ROP, VOP, VEX
Network view: the area where you build the scene; explains how to go in and out of nodes ('i' to go inside and 'u' to go up)
Node flags: the areas surrounding the node on the right and left
Geometry spreadsheet: learning about points, vertices and primitives
Tree view: being able to see a breakdown of what is in the scene
Shelf tools: exploring the tools displayed in the software (i.e. the box, sphere, grid etc.)
I feel that when learning through Lynda, I work more efficiently. Therefore, when I attempt a new simulation, I will see whether there are any tutorials on Lynda that can improve my work.
Simple Water Flip Fluid Simulation
The YouTube tutorial linked above focuses on using FLIP fluids and explains how to cache out the simulation so it runs more smoothly while working in the software. I decided to follow this tutorial as it is a simple starting point for learning how particle fluids work, keeping in mind that I want to develop the skill into more intricate fluid particle simulations.
The first step was to place a sphere object into the viewport and select 'flip fluid from object' to apply to the sphere. This created the image I have below.
The following step is to create a 'ground plane' so that the particles have something to collide with when playing the simulation. Something I have learnt from creating a ground plane is the difference between the grid object and the plane: the grid object doesn't automatically have any qualities applied to it, meaning that when I press play, my particles will fall through the ground, whereas when adding a ground plane, the static collision qualities are already applied, meaning I do not need to add anything more to the object.
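The difference the collider makes can be sketched in plain Python: modelling the ground plane's static collision as a simple clamp at y = 0, a particle either settles on the plane or falls straight through. This is only a toy model of the idea, not Houdini's actual solver:

```python
def step_particle(y, vy, dt=1.0 / 24.0, gravity=-9.81, ground=True):
    """Advance one particle a frame; optionally collide with the ground."""
    vy += gravity * dt
    y += vy * dt
    if ground and y < 0.0:
        y, vy = 0.0, 0.0  # static collision: the particle rests on the plane
    return y, vy

y, vy = 2.0, 0.0
for _ in range(100):
    y, vy = step_particle(y, vy, ground=True)
print(y)  # 0.0 -- the particle settles on the ground plane

y, vy = 2.0, 0.0
for _ in range(100):
    y, vy = step_particle(y, vy, ground=False)
print(y < 0)  # True -- with no collider it falls straight through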
In the image below I have changed the value of the particle separation from 0.1 to 0.01, which means that when the fluid falls, the particles stay closer together which creates a more natural effect. However, this also slows down the viewport simulation a considerable amount due to having more particles to calculate and cache out.
As this is an experiment, the only thing that needed to be added to this was a background plane and adding a light and camera in order to see it in the render viewport.
First Fluid Render:
Personally this was the simplest simulation to develop, however on thing that needs changing (if I come back to it) would be the values of the background plane, as when the fluid hits the ground it passes through the grid in the background, making the simulation less organic.
When going back into this simulation, the few things I needed to change were simple, and I did not require any assistance to do them, as I have become more acquainted with the tools in the software. I changed the background from a grid to a ground plane collision. This means that the attributes that allow the water to collide are already preset, the only thing I had to change was the positioning, so that it could form a background. I also changed the colour palette in order to see the fluid clearer on the simulation. I turned the water slightly bluer, and the ground white. Once I added a distant light, these colours were much more visible.
Second Fluid Experiment:
Frames Rendered: 140
Render Time: 48 hours