Categories: MA Visual Effects, Term Three

SHOWREEL | TERM THREE

Categories: MA Visual Effects, Term Three

FMP X NFT’s

Categories: Advance and Experimental VFX Animation and Techniques, MA Visual Effects, Personal Project, Term Three

UNREAL ENGINE LEARNING PATH

When I started this project, I had the wrong idea in mind. At first, I wanted to begin with MetaHumans, a UE4 tool where you can customise a very high-quality model of a person that comes ready to animate. I wanted to experiment with this tool.

I realised that this was the wrong approach and that I needed to properly learn UE4 to be able to use this tool the way it is intended to be used.

Luckily for me, Unreal has an online learning community. I started with the basics: I found an online course with a few videos specific to Virtual Production, and I found it so interesting that I started from there. The first videos introduced me to the Unreal interface.

After being introduced to the interface and how to move around in it, we created our first project, working with the starter content and free content from the Marketplace to build our first level.

Once we got a bit more used to Unreal, it became more of a free learning path. A few more videos explain the structure Unreal follows and the best way to organise your folders, as well as how to import content from outside Unreal and move content from one project to another.

Then a few videos and examples explain some real-time rendering concepts so we can start to understand how it works.

After following this path I started to work on my project. My idea was to create a virtual set to use in a virtual production with an LED screen, like the ones used in The Mandalorian.

I started exploring how to create a landscape with a procedural texture that blends to a different texture depending on the altitude of the landscape.

We create material functions, which are basically the base materials, and feed them the landscape coordinates.

Then we create another material where we feed the different materials we want to blend into a blend material attributes node.

This will be the material that we apply to the landscape. The material going into input A will be at the bottom, and the material going into input B will be at the top.

Once the material is created, we make an instance of it, which works like a duplicate and gives us a backup, and we apply this instance to the landscape. In the instance we can see the four parameters we exposed: blend height, which determines the altitude at which it changes to the other material; blend sharpness, which determines how strong the blend is; scale, which tiles the texture; and specular range, which controls how the light affects it.
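As a side note, the maths behind this blend is quite simple. Below is a minimal sketch in plain Python (not Unreal material code) of a height-based blend like the one the landscape material performs; the parameter names mirror the exposed instance parameters, but the exact behaviour inside the material graph may differ.

    # Plain-Python sketch of the height-based blend; not Unreal material code.
    # blend_height and blend_sharpness stand in for the exposed instance parameters.

    def smoothstep(edge0, edge1, x):
        # Hermite interpolation clamped to the 0-1 range.
        t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
        return t * t * (3.0 - 2.0 * t)

    def blend_weight(world_z, blend_height, blend_sharpness):
        # 0 = bottom material (input A), 1 = top material (input B).
        # Assumption for this sketch: higher sharpness means a narrower transition band.
        band = 1.0 / max(blend_sharpness, 1e-4)
        return smoothstep(blend_height - band, blend_height + band, world_z)

    def blend_color(bottom_rgb, top_rgb, world_z, blend_height, blend_sharpness):
        w = blend_weight(world_z, blend_height, blend_sharpness)
        return tuple(a * (1.0 - w) + b * w for a, b in zip(bottom_rgb, top_rgb))

    # Example: grass below roughly 200 units, rock above, with a soft transition.
    print(blend_color((0.1, 0.4, 0.1), (0.4, 0.4, 0.4),
                      world_z=230.0, blend_height=200.0, blend_sharpness=0.02))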

After experimenting with this, I discovered that with Megascans you can blend three different materials exported from Quixel Bridge and use the paint tool to blend them, which lets you be more precise about the texture you want to achieve.

You select the three materials in order: the first one you pick goes on the bottom, the second in the middle and the last on top.

We can see how the material changes when we tweak some of its parameters; this gives us control over how we want the texture to look.

Once I had decided which textures to use and how, I started painting and placing assets from Quixel. This is what I ended up with.

I had some issues and limitations during the learning path. It is true that the main idea was to learn as much Unreal as I could so I can use it in my FMP, and I think I have achieved that goal, because now I'm confident enough to use it for a clearer and more advanced idea.

The problems I faced were that I started this learning path the wrong way and lost a few weeks while I was deciding how to proceed. As I said, at first I wanted to start with MetaHumans, which changed my schedule because I spent some of my time playing with a tool that I ended up not using; it was too advanced for what I wanted to do.

Then it took me some time to find a clear path to follow, because I didn't have someone to teach me Unreal. I knew the university doesn't teach Unreal and that I would be on my own for most things; I had my teacher's help in the form of ideas and things I could try, but no clear path for learning Unreal itself. So it took me a while to find one, and I started my project later than I wanted to.

Once I found it and started working, I had some hardware difficulties. My computer is not the best; it is about 7-8 years old, and everything I do takes a bit longer than it would on a newer machine. Unreal was very demanding, especially for what I wanted to create, and that slowed my progress.

I could have used the computers at uni, which are much better than mine, but the problem was that I was using Quixel Bridge to export the assets for the project, and Quixel has an odd policy where a university has to enrol for a licence. Someone at my university already held it, but we didn't know who. So I could only work freely, with all the space I needed for the assets, on my own computer.

Apart from these problems, I'm quite happy with the result. To be honest, I expected more, because I have experience with Unity, another game engine, but as I was learning Unreal I realised it wasn't as easy as I thought. So the final video could be better, but since the core aspect of this project was to learn UE4, I'm happy with what I achieved.

Now I'm excited to try the UE5 early access that was announced a few days before the submission of this project. So the learning path continues!

Categories: Advance and Experimental VFX Animation and Techniques, Collaborative Projects, MA Visual Effects, Term Three

WALK MAN

For this project we split the class into two groups. Each one had to present an idea for a short film about 45 seconds long. My group was composed of Sean, Giulia, Jane and me.

The first steps were to come up with an idea, a moodboard and a mind map of what we needed to do. We came up with a couple of ideas.

FIRST IDEA:

Here you can find the mind map of it: https://www.zenmindmap.com/docs/view/gP8V4y1aK88RbQGw5Wvz

The idea was to create a short story about three or four people who are not related to one another, but somehow each story continues from the previous one. This is a moodboard for this idea:

But we quickly moved to a second idea that we had and liked better.

SECOND IDEA:

This is the idea that we ended up creating: a post-apocalyptic world where there is no life, and a scavenger is exploring an abandoned area that he has found and never visited before.

This is the mindmap for this: https://www.zenmindmap.com/docs/view/k1oYq9wdLZZwajPXgEBG

Again, after the mind map we created the moodboard. We searched for references from Mad Max, Love and Monsters and more:

Because this was our final idea, we proceeded to create a storyboard to have a better understanding of what we wanted to do.

The scavenger finds an abandoned church and decides to enter to see if he can find something. Once inside, the scavenger discovers an old robot made of old tech, like pen drives, CDs, a Walkman and more. The plot of the story is that, at the end, the robot delivers an important, life-changing message.

The next step was to create the script and a schedule to organise when and how we needed to work to get a good result.

This is the script:

WALK MAN

ACT ONE

FADE IN:


EXT. TOWN – SUNRISE

Aerial view of a destroyed town. Overgrown vegetation and rusty cars. There is a church in the distance.


EXT. CHURCH – SUNRISE
A destroyed church. There are no signs of life apart from a SCAVENGER (21), who is dirty and sweaty with ripped clothes. The scavenger opens a water bottle and drinks his final drops of water before walking inside the abandoned church.


INT. CHURCH – SUNRISE
Empty and old. Overgrown vegetation.
The SCAVENGER notices a robot among some rubble and walks closer. The scavenger is excited to see the robot, believing the robot will be able to lead him to salvation. The scavenger drops to his knees; the tension builds as he waits for the robot to say something. The robot is not able to talk; he can only communicate through music that comes from the old tech he is made of. The robot opens his mouth and sings “CRAZY FROG” before powering off and dying.

END

This was the schedule we followed:

And all the shots that we needed to create and work on:

Shot # | Int/Ext | Camera | Camera Angle | Shot Type | Location | Resolution | Shot Description
1 | EXT | Drone | Bird’s Eye View | WS | CGI, Drone, Stock Footage | 4K, 3840 x 2160 | AERIAL VIEW OF A DESTROYED TOWN. OVERGROWN VEGETATION AND RUSTY CARS.
2 | EXT | Black Magic | Perspective | MWS | St Mary’s Church | 4K, 3840 x 2160 | STILL HEAD-ON SHOT OF CHURCH; SCAVENGER WALKS INTO FRAME AND ENTERS CHURCH.
3 | INT | Black Magic | Perspective | VWS/WS | St Dunstan in the East Church | 4K, 3840 x 2160 | INSIDE THE CHURCH THE SCAVENGER LOOKS AROUND, NOTICES A ROBOT AMONG SOME RUBBLE AND WALKS CLOSER.
4 | INT | Black Magic | Robot’s Persp. | MCU/WS | St Dunstan in the East Church | 4K, 3840 x 2160 | THE SCAVENGER IS EXCITED TO SEE THIS ROBOT.
5 | INT | Black Magic | Perspective | WS/MS | St Dunstan in the East Church | 4K, 3840 x 2160 | THE SCAVENGER DROPS TO HIS KNEES; THE TENSION BUILDS AS HE WAITS FOR THE ROBOT TO SAY SOMETHING.
6 | INT | Black Magic | Perspective | MCU | St Dunstan in the East Church | 4K, 3840 x 2160 | MEDIUM CLOSE-UP OF SCAVENGER’S FACE.
7 | INT | 3D Camera | Perspective | MCU | CGI, St Dunstan in the East Church | 4K, 3840 x 2160 | MEDIUM CLOSE-UP OF ROBOT’S FACE.
8 | INT | Black Magic | Perspective | CU | St Dunstan in the East Church | 4K, 3840 x 2160 | CLOSE-UP OF SCAVENGER’S FACE.
9 | INT | 3D Camera | Perspective | CU | CGI, St Dunstan in the East Church | 4K, 3840 x 2160 | CLOSE-UP OF ROBOT’S FACE.
10 | INT | Black Magic | Perspective | ECU | St Dunstan in the East Church | 4K, 3840 x 2160 | EXTREME CLOSE-UP OF SCAVENGER’S FACE.
11 | INT | 3D Camera | Perspective | ECU | CGI, St Dunstan in the East Church | 4K, 3840 x 2160 | EXTREME CLOSE-UP OF ROBOT’S FACE. THE ROBOT OPENS HIS MOUTH AND SINGS CRAZY FROG BEFORE POWERING OFF AND DYING.

Once we had all the shots listed and knew everything we needed to shoot, we met up at a location that we liked and started shooting. Here are some behind-the-scenes images:

And here we can see the first edit with what we shot that day.

We were very lucky that the location we chose was perfect for what we wanted to achieve, and we were able to shoot everything we needed in one day. Once we had all our footage ready, we started working on the 3D model of the robot and researching other 3D models that would fit the look and the scene.

We used Quixel Bridge with Megascans to bring in some 3D assets and create a more post-apocalyptic look.

After we had our assets and the film sequences, we started to integrate them while trying out some looks in DaVinci. Then Jane made the background music and sound for it. Once we had everything, we put it together, and this is the end result.

Categories: Advance and Experimental VFX Animation and Techniques, Collaborative Projects, MA Visual Effects, Term Three

ARMAMENTUM – WEEK 03

This week we improved the models and made the walls a bit more interesting with some geometrical shapes so the light can bounce off them and give an interesting look. We also replaced the sofa with a Rubik’s cube with some sort of transparency, which I think made the scene much cooler.

Here we can see some of the changes. We also added a desk and a chair to fill the room a bit more.

With this done, we rendered the scene to see the light reacting to the new walls and models. As you can see, we are going for a basic, minimal look for the environment to keep the focus on the CG arm replacement, so the audience is not distracted by a very complicated background.

Next week we are going to work on the back of the room, where we will put a door, maybe with a see-through window, so we have another light source to fill the room. It will also close off the room so people don’t ask where anyone comes in from, which helps the story of the scene too.

Once we finish the basic idea of the room, we will be able to put the modelling on hold and start test shoots in the studio to see if the camera movement works, and to test the lights we have access to.

Categories: Advance and Experimental VFX Animation and Techniques, Collaborative Projects, MA Visual Effects, Term Three

ARMAMENTUM – WEEK 02

This week we focused on research and blocking for the environment. I used a website called Milanote to create a moodboard. As I said in the last post, we are looking for a futuristic look, something like the game Cyberpunk with those bright and vibrant colours.

Here you can see the moodboard we created. Follow the link to the Milanote project, where you can see all the references in detail.

https://app.milanote.com/1Lsgu91aTMQK0d?p=lzQGlfs64sq

After deciding on the look, we could start blocking the environment in 3D to see how the camera movement works and to get a better idea of what we need to do.

Here we can see the first blocking.

After this I started to play with the lighting to achieve the look we want. I set up two neon lights, one blue and one purple, and some fill lights to shape the shadows and brighten the image.
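Just to illustrate the setup, here is a minimal sketch of how a similar two-neon-plus-fill arrangement could be scripted with Maya’s Python commands. In practice I placed the lights by hand in the viewport, and the names, colours and positions below are only illustrative.

    # Sketch of a neon-style lighting setup using maya.cmds.
    # The real lights were placed interactively; values here are illustrative.
    import maya.cmds as cmds

    def make_colored_light(name, rgb, position, intensity):
        # Create a point light, then move its transform into place.
        shape = cmds.pointLight(name=name, rgb=rgb, intensity=intensity)
        transform = cmds.listRelatives(shape, parent=True)[0]
        cmds.move(position[0], position[1], position[2], transform)
        return transform

    make_colored_light("neonBlue", (0.2, 0.4, 1.0), (-2.0, 1.5, 0.0), 3.0)
    make_colored_light("neonPurple", (0.7, 0.2, 1.0), (2.0, 1.5, 0.0), 3.0)
    make_colored_light("fillLight", (1.0, 1.0, 1.0), (0.0, 3.0, 4.0), 0.5)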

Now we can see the look we want more clearly. For next week we decided to get rid of the sofa and place a geometric, minimalistic sculpture with the lights around it. Something like this:

Next week we are going to replace the sofa, as I said, and we will have a rendered video to see how the lights work and affect the geometry.

Categories: Advance and Experimental VFX Animation and Techniques, Collaborative Projects, MA Visual Effects, Term Three

ARMAMENTUM – WEEK 01

This project is going to be a collaboration with Gherardo; I had an idea and asked him if he wanted to work on it with me.

The main idea is to record ourselves in front of a green screen and replace our forearms with a robotic, futuristic arm. This project will involve different techniques, and that’s what we wanted: a short video where we could show our work across many techniques, including green screen extraction, rotoscoping, modelling, texturing, lighting and more, all combined to produce a nice shot for our reels.

For this project we asked Dom to help us, since he is the one who showed us how to use 3DE4 for the tracking process. He was happy to help and to oversee each step of the project.

The basic plan is:

  • STEP 1: CAMERA MOVEMENT
  • STEP 2: BACKGROUND/ENVIRONMENT
  • STEP 3: DIFFERENT IDEAS EACH FOR THE BACKGROUND
  • STEP 4: CHARACTER ACTING
  • STEP 5: ANIMATIC IN 3D

When we are happy with every step, we can start shooting the scene.

So the first step was deciding on the camera movement for the shot. For this, we created camera setups in Maya with different ideas.

After looking at them with Dom, we selected the two cameras that we liked the most and combined them. This is the result of that combination.

The first week of the project is quite simple and short, but it is the first step and just as important as the rest; this is the base of the project. I’m sure that in a few weeks, when we decide which background to use and settle other parts of the project, this could change a little, but the basic idea of the project is here now, and it’s a good start.

After each step, we have a meeting with Dom so he can supervise the work, give us the OK and suggest more ideas on how to proceed. As the plan stands, the next step is to work on the background style. We will see more in the next post, but the basic idea we have is a futuristic, cyberpunk scene.

Categories: Collaborative Unit, MA Visual Effects, Term Two

RESEARCHER COLLAB

For this part of the course, we had to collaborate with the neighbouring course, Computer Animation. I went to the Padlet where everyone was posting their project ideas and saw one called Researcher Collab. It was sci-fi, so it immediately got my attention.

I liked the idea behind it, and they needed VFX people for some cool effects and lighting, so I wanted to try and work with them.

Here is the animatic of the project.

We created a Discord channel for better communication, and we usually had a meeting every week to see how we were doing. First, we discussed the storyboard that had already been created and whether it needed any changes.

We also created a OneDrive folder to share files easily. Once we had organised ourselves, we started working.

My part in this project is to create some VFX and also set up the lighting for the scenes.

For the VFX part I decided to work with Houdini, since we are learning it at the moment and I could have the teacher’s support; I also really like the program and wanted to learn more.

This is the general overview of the node editor in Houdini.

The general workflow is to import the object, scatter points on it and then project those points onto the floor.

These projected points are the ones that will drive the smoke and dust simulation.

This is the first version and it is only smoke, but for previz it is more than fine; the idea would be to have smoke as well as dust. We learn how to do that in weeks 6 and 7 of the Houdini class.

After that we create the density and generate the smoke with the pyro solver, adjusting the right attributes to get the simulation I want.

In the end, I convert it to VDB so I can import it into Maya, where we are going to render the scene.

Here we can see a bit more of the process:

Importing the object and unpacking it.

After that we delete some of the faces that we don’t want or need to project points from.

After that we scatter to create points and add a VOP attribute that we are going to use for expanding the points:

Here we can see how the VOP attribute works: the bigger the attribute, the closer the points are, and when we reduce the value we can see that the points spread out.

After this we create density and a distance attribute, so that the farther the ship is from the floor, the fewer points are projected.
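To show the idea behind that distance attribute, here is a minimal sketch written as a Python SOP in Houdini; the real setup uses VOP nodes, and the floor height and falloff distance below are assumed values.

    # Python SOP sketch: fade point density with height above the floor.
    # The actual network does this with VOPs; names and values are illustrative.
    node = hou.pwd()
    geo = node.geometry()

    FLOOR_Y = 0.0        # assumed height of the ground plane
    MAX_DISTANCE = 5.0   # assumed height beyond which no points project

    density = geo.addAttrib(hou.attribType.Point, "density", 1.0)

    for point in geo.points():
        height = max(point.position()[1] - FLOOR_Y, 0.0)
        # Linear falloff: full density on the floor, zero at MAX_DISTANCE and beyond.
        point.setAttribValue(density, max(1.0 - height / MAX_DISTANCE, 0.0))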

After this it is just a matter of tweaking the settings on the pyro solver to create the desired simulation. This is the result I got:

After this, the process is to export it to Maya and work from there on the shaders for the render. This is a rough render of the result in Maya. The shading is not the best; I was having issues where it only rendered black.

Once we finished the previz work, it was time to render. We had some issues with the render farm at the university: all the shots that had VFX in them kept crashing. We reached the conclusion that the problem was that we used Maya 2020 for the VFX, which is buggier than the Maya 2018 installed on the computers at uni.

So we rendered out the first act without any of the VFX; here you can see the result:

After the problems we had with the render farm, we thought we would not be able to render the shots with the VFX, but our team worked really hard to solve it and we ended up with good renders to show. With the VFX and all the sound design, the result is this:

As you can see, some parts are not rendered in Maya; those are Act 02 and Act 03, which we did not have time to finish. But we knew it was a long project, so we focused on doing the first act properly. We didn’t want to rush it and end up with a bad result.

Categories: Advance and Experimental VFX Animation and Techniques, MA Visual Effects, Mehdi Workshop, Term Two

HOUDINI – WEEK 06

Week six started with a small practice exercise to make something cool. From a test geometry we scatter points to create an effect as if the geo is vanishing.

With the VOP attribute we are able to randomise which points are affected by the simulation; it is also used to make the geometry disappear when it gets very small, so the solver stops simulating particles that we can not see.
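The rule itself is simple; a rough sketch of it in plain Python (not the actual VOP network, with an assumed shrink rate and threshold) looks like this:

    # Sketch of the "stop simulating when too small" rule, applied each frame.
    SHRINK_RATE = 0.95      # assumed per-frame scale multiplier
    KILL_THRESHOLD = 0.01   # assumed size below which a particle is invisible

    def step(particles):
        # Shrink every particle and drop the ones too small to see.
        survivors = []
        for p in particles:
            p["pscale"] *= SHRINK_RATE
            if p["pscale"] >= KILL_THRESHOLD:
                survivors.append(p)
        return survivors

    particles = [{"pscale": 1.0}, {"pscale": 0.005}]
    particles = step(particles)   # the second particle falls below the threshold and is removed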

Then, as always, we create the DOP network for the solver, adding some wind resistance and modifying the direction of the wind. In this case we make the particles go up, against gravity.

Here is the final simulation:

After this small simulation, we learn how to make geometry follow the path of a smoke simulation. We create a smoke simulation like we have done before on different occasions. This time, we create a null at the end of the smoke sim called VOL ADVECT that we are going to use to make the geometry follow the same sim.

After this, we create a DOP network for the particles that will follow the sim and connect it to VOL ADVECT with a POP Advect by Volumes node. This links it to the previous volume simulation.

After this, we just have to create a sphere, replace the particles with spheres using a Copy to Points node, and merge the result with the previous sim; now the spheres follow the smoke sim.
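Under the hood, advecting particles by a volume just means sampling the sim’s velocity field at each particle and stepping its position forward. Here is a minimal sketch of that idea in plain Python; the swirling velocity field is a made-up stand-in for the vel volume the smoke sim produces.

    # Forward-Euler advection: the simplest form of moving points through a velocity field.
    def velocity_field(x, y, z):
        # Toy field standing in for the smoke sim's vel volume: rises while swirling.
        return (-z * 0.5, 1.0, x * 0.5)

    def advect(points, dt=1.0 / 24.0, steps=24):
        # Move each point along the field one small step at a time.
        for _ in range(steps):
            new_points = []
            for (x, y, z) in points:
                vx, vy, vz = velocity_field(x, y, z)
                new_points.append((x + vx * dt, y + vy * dt, z + vz * dt))
            points = new_points
        return points

    # Two particles advected for one second at 24 fps.
    print(advect([(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]))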

Categories: MA Visual Effects, Term Two

SHOWREEL | TERM TWO

SHOWREEL

INDIVIDUAL CLIPS

3D PROJECTION – CLEAN PLATE

3D PROJECTION – 3D MODEL INTEGRATION

BEAUTY PRACTICE

MARKERS REMOVAL

GREEN SCREEN 01

GREEN SCREEN 02