Before I get into this blog I'd like you to take a couple of minutes to check out this video to get a better understanding of what I'm talking about and how I created this asset.
So the video you just watched is a time lapse of me creating an asset for my upcoming Rainbow Six Siege scene. The asset I created was the bomb from the game. Unfortunately I couldn't record the whole texturing process as the software slowed my computer down too much. I also didn't record the small part where I modelled the frame, however this time lapse shows the main process behind my work on this asset.
Concept
As you probably know, I didn't come up with the concept for this asset. The model is a fan art piece for the game 'Rainbow Six Siege' created by Ubisoft. When creating the model I had vast amounts of reference to go off, as it's a large game and the resources are everywhere. To get the exact reference I wanted, however, I launched the game myself and took many screenshots. I almost always have my reference open on my second screen to refer to when modelling. If I was coming up with my own concept for an asset I would have started with a really quick sketch. Although I'm bad at sketching, it really helps to break down ideas. Before I start sketching I usually build a large reference library to pick inspiration from. For example, if I was creating my own bomb design I might gather photos of bombs but also of related parts such as a cell phone or a CO2 canister. Collect reference for any relevant parts that might make up the object. You never know where your ideas will come from.
Modelling
Modelling is something I've been doing for a while. I fell in love with modelling 3D objects and environments because, frankly, I sucked at drawing. I wanted to be able to create but my drawing skills were bad, and I'm a very logical thinker, so naturally I was attracted to 3D modelling. To start modelling the asset, I opened my reference folder on my second screen and imported a side view of the asset in question into my modelling program (Autodesk Maya). This gave me a reference plane to work on. This way, if I enter a side/orthographic view I can almost trace parts of the geometry, which helped me get the basic forms and proportions down correctly. If you use x-ray mode or lower the opacity of your base material you can view your geometry and the reference overlapping each other, which means you can trace the geometry to match the image.

Making the initial cage for the bomb was a bit of a challenge as I needed each side to be equal. To achieve this I created one side, duplicated it and started moving, snapping and merging vertices to make a corner piece. It's also vital to make sure that the end of one corner piece will snap up to the other end nicely for when it comes to duplicating the corner piece. To do this I centered the pivot of my corner piece and then duplicated it with a 180 degree turn. Once I had two corner pieces I just continued to adjust vertices until they matched up nicely. To make my job easier I was only interested in one corner piece, because when it comes to UV mapping I can use the same map for the other corner piece.
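If you'd rather script that mirror-and-merge step than do it by hand, here's a minimal Maya Python sketch of the idea. The object name ("bombCorner") and the merge distance are placeholders, not my actual scene values.

```python
import maya.cmds as cmds

# Centre the pivot so the 180 degree duplicate lines up with the original
cmds.xform("bombCorner", centerPivots=True)
copy = cmds.duplicate("bombCorner", name="bombCorner_opposite")[0]
cmds.rotate(0, 180, 0, copy, relative=True)

# Combine both corner pieces and weld the overlapping vertices at the seam
combined = cmds.polyUnite("bombCorner", copy, name="bombFrame")[0]
cmds.polyMergeVertex(combined, distance=0.001)
cmds.delete(combined, constructionHistory=True)
```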
When it comes to creating the piece as a whole you just have to think ahead a bit and break each section down into little parts. Once you have an understanding of which parts you can break down into separate models and which sections you can reuse elsewhere, it's time to break each bit down into its basic primitives. In 3D modelling you start off with basic primitives/shapes such as cubes, cylinders and spheres. When you start working on a model you must first determine which primitive is best suited/closest to the final object. This process is generally very quick and self-explanatory. For really advanced assets such as characters, artists usually start off with a plane and keep extruding the edges to create more faces. These faces are then manipulated individually to get the shape they want.
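As a rough illustration of both approaches, here's a short Maya Python sketch; the names and dimensions are placeholders, not the actual bomb's measurements.

```python
import maya.cmds as cmds

# Block out hard-surface parts from the primitive closest to the final shape
body = cmds.polyCube(width=2, height=1, depth=2, name="bomb_body_blockout")[0]
canister = cmds.polyCylinder(radius=0.3, height=1.2, name="bomb_canister_blockout")[0]

# The plane-and-extrude approach used for organic assets: start from one quad
# and keep pulling new faces out of its border edges
plane = cmds.polyPlane(width=1, height=1, subdivisionsX=1, subdivisionsY=1)[0]
cmds.polyExtrudeEdge(f"{plane}.e[0]", translateZ=1.0)
```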
UV Mapping
UV mapping is the process that many artists hate, but it's an essential part of the pipeline and a very important one, especially when it comes to games (in regards to optimization). So first off, what is UV mapping? Well, if we go back to low level maths we learned about nets of 3D objects. UV mapping is the process of taking a 3D object and unfolding it to create a 2D representation of it, in a sense. UV maps are used to apply textures to an object. We need the object folded out in 2D space so the computer knows how and where to display the textures. It is possible to completely separate parts of an object when creating the net, but it's best to have as many of the faces/UVs connected as possible in 2D space. This helps reduce seams. Seams are the parts of a texture where you can see that the pattern doesn't match up. This isn't usually a problem if you have a different texture on that area or the location is hidden. When it comes to UV mapping and texturing you want to reduce the seams where possible and place the seams that are inevitable somewhere hidden or insignificant.
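If you want a quick starting point before placing seams by hand, Maya can generate an automatic unwrap. A minimal sketch, assuming a placeholder mesh name:

```python
import maya.cmds as cmds

# Automatic projection gives a rough unwrap you can then refine by cutting
# and sewing seams where they will be least visible
cmds.polyAutoProjection("bombFrame", layoutMethod=1, insertBeforeDeformers=True)

# Pack the resulting UV shells into the 0-1 UV space
cmds.polyLayoutUV("bombFrame", layout=2, scale=1)
```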
When working with UV maps many artists, including myself, use a checkered pattern. This helps you see if the boxes/checkers line up correctly together. It also helps you view the texel density, which is how much resolution/texture real estate is given to a certain section. Important parts need a higher texel density, and other parts are best kept at the same density so the change in resolution or the scaling of the texture isn't noticeably strange.
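Here's a minimal sketch of setting that checker up in Maya (the mesh and node names are assumptions); bumping the repeat values makes the pattern denser and makes stretching or mismatched texel density much easier to spot.

```python
import maya.cmds as cmds

# Build a checker texture driven by a place2dTexture node
checker = cmds.shadingNode("checker", asTexture=True, name="uvChecker")
place2d = cmds.shadingNode("place2dTexture", asUtility=True)
cmds.connectAttr(f"{place2d}.outUV", f"{checker}.uvCoord")
cmds.setAttr(f"{place2d}.repeatU", 10)
cmds.setAttr(f"{place2d}.repeatV", 10)

# Feed the checker into a simple material and assign it to the mesh
material = cmds.shadingNode("lambert", asShader=True, name="uvCheckMat")
cmds.connectAttr(f"{checker}.outColor", f"{material}.color")
cmds.select("bombFrame")
cmds.hyperShade(assign=material)
```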
Texturing
Texturing really helps tell a story and can bring a 3D model to life. When I first started out I would export my UV map into Photoshop and paint over it or place a photo texture where I wanted it. This works but doesn't look very appealing at all. You are also limited to what you can find online or take yourself in regards to photo textures. Now programs such as Substance Painter are starting to make Photoshop somewhat obsolete in the texturing scene.
Substance Painter is a PBR based texturing program. PBR stands for Physically Based Rendering, which means that textures react to light much like real-life materials. There are different texture maps that handle different information, and that information can be used by the game engine. An example of one of these maps is a metallic map, which holds the metallic value of the texture. If you have an all-white metallic map then the object will be really reflective, like metal. Black means the material behaves like a non-metal and is far less reflective. Based on information you input or paint, these maps can be generated for further realism. Normal maps are another example. These carry data regarding shading, and game engines use normal map data when it comes to lighting. Normal maps can fake geometry: they can create shading to make it look like there is geometry that isn't actually there, such as a small cutout. This helps with selling the realism without the limitations of geometry or having to think too much about optimization. It is possible to bake shading into a regular texture map, however that does not account for the player's position or the lighting in the scene. In a non-PBR workflow you would have one texture per material that was responsible for the overall look (called a diffuse texture), however with PBR you have your albedo texture, which holds all the colour information, and then you can have height maps, normal maps, metallic maps, AO maps etc. These all work together to sell the realism of the material.
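To make the metallic map idea a bit more concrete, here's an illustrative-only Python sketch of the common metal/roughness convention. This is the general idea, not any particular engine's shader code.

```python
# Illustrative only: how a metal/roughness workflow typically interprets a metallic map
def lerp(a, b, t):
    return a + (b - a) * t

def interpret_metallic(albedo, metallic):
    """White (1.0) in the metallic map tints reflections with the albedo and
    removes the diffuse term; black (0.0) leaves a small dielectric reflectance."""
    dielectric_f0 = 0.04                                   # ~4% base reflectance for non-metals
    f0 = [lerp(dielectric_f0, c, metallic) for c in albedo]
    diffuse = [c * (1.0 - metallic) for c in albedo]       # metals have no diffuse colour
    return f0, diffuse

print(interpret_metallic((0.9, 0.6, 0.2), metallic=1.0))   # fully metallic
print(interpret_metallic((0.9, 0.6, 0.2), metallic=0.0))   # fully dielectric
```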
To texture the bomb I used Substance Painter for the whole asset apart from the mesh at the bottom, which I made in Substance Designer. Designer allows you to create materials using node generators. In Substance Painter I used the included materials and just changed their parameters, painted some details and used layer types to get my desired effects. Although this article is about the bomb, for my reinforced wall I used Photoshop to create height data. I then converted my black and white height data into a normal map and imported that into Substance Painter to get the grooves on the reinforced wall. To create the grooves I just drew a rectangle shape and repeated it, which allowed me to get the varying height. After painting details and changing parameters it was time to head into the game engine.
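The Photoshop conversion itself is just a filter, but the underlying maths is simple. Here's a hedged NumPy sketch (the strength value is an arbitrary assumption) of turning black and white height data into a tangent-space normal map:

```python
import numpy as np

def height_to_normal(height, strength=2.0):
    """height: 2D float array in [0, 1]; returns an HxWx3 tangent-space normal
    map remapped to [0, 1] so it can be saved as a regular RGB image."""
    dy, dx = np.gradient(height.astype(np.float32))        # slope of the height field
    normal = np.dstack((-dx * strength, -dy * strength, np.ones_like(dx)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True) # normalise each pixel's vector
    return normal * 0.5 + 0.5                               # remap from [-1, 1] to [0, 1]
```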
UE4 | Game Engines
Once I had my 3D model I combined all the sections into one object, deleted the object history and froze transformations. This is to prevent errors when importing into a different program. I then exported my final mesh as a .fbx and brought it into Unreal Engine (UE4).
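For anyone who wants to script it, here's a rough Maya Python version of those clean-up and export steps, with placeholder object and file names:

```python
import maya.cmds as cmds

cmds.loadPlugin("fbxmaya", quiet=True)                 # make sure FBX export is available

pieces = cmds.ls("bomb_*", type="transform")           # all the separate sections
bomb = cmds.polyUnite(pieces, name="bomb_final")[0]    # combine into one object
cmds.delete(bomb, constructionHistory=True)            # delete history
cmds.makeIdentity(bomb, apply=True, translate=True,
                  rotate=True, scale=True)             # freeze transformations

cmds.select(bomb)
cmds.file("C:/exports/bomb_final.fbx", exportSelected=True,
          type="FBX export", force=True)
```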
I exported my textures as .png files and imported them into UE4 (there's a small scripted version of this import sketched below). You can import everything together as one file with a plugin, however I prefer to just have the individual texture files. Although I don't get parameter controls, I can easily control which texture maps are active and make small adjustments in Photoshop if needed. PNGs aren't the best format for game development, however I wasn't too worried about optimization as this environment is just a portfolio piece. I will make a separate article on optimization soon, so keep an eye out for that. Follow my Instagram for more awesome behind the scenes content and things I'm working on. Also subscribe to my free email list below to get notified about new articles and special content.
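And here's that texture import sketch, in case you'd rather script it than drag files in by hand. It assumes UE4's Python Editor Script Plugin is enabled, and the paths are placeholders:

```python
import unreal

# Import one exported texture; the file path and destination folder are placeholders
task = unreal.AssetImportTask()
task.filename = "C:/exports/bomb_albedo.png"
task.destination_path = "/Game/RainbowSixScene/Textures"
task.automated = True          # skip the import dialog
task.save = True

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```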
I hope you found this blog post helpful and have a great day!
Best Regards:
~Ross Hankinson~