Post by Admin on Feb 16, 2022 1:09:24 GMT
What types of player characters should be in the game?
Mice, for sure... maybe we just start with mice with gender / fur color / face changes, if not body type options. Shrews, hedgehogs, squirrels, and moles would be nice, but maybe those should come in the expansion, with those species appearing as NPCs if the playable characters are mice only.
That's all well and good, but first we need to understand how character creation factors into building the MMORPG itself!
www.gamedeveloper.com/programming/on-character-customization-part-0-
Jordi Rovira
Blogger
Requirements
There are many requirements in tension in a character customization system. They will depend on the game that is going to use it, of course, but to some degree you will always need:
Performance in the construction process. It cannot take long to build a character and it cannot require a lot of memory.
Optimized data generation. You will want your data to be as optimal as data generated directly by your artists. Optimized geometry: with only the required triangles to avoid overdraw and z-fighting. Optimized textures: without wasting space, channels, and using compressed formats. Optimized draw calls: you cannot use more draw calls for your customized character than you would use for a static one.
Flexibility in the range of modifiers that your artists can use to define the customization of characters. These modifiers will probably include mesh merging, morphing and removal, and various image effects to change colors, blend in normal-map effects, projection, etc.
Reusability is not a usual requirement, since developers tend to focus on single projects when developing customization systems. However in the case of a general game engine, or a middleware like the one we develop, it is a key element.
Challenges
Giving the control to the artists
In APB we had a long pre-production stage, where two programmers and two artists worked together defining what it would be possible to customize in the game and how. This included the skin color effects; the skin layers for scars, moles, tattoos, etc.; and how these would affect the normals, specular and other material properties. It also included how we would model the clothing accessories, the morphs in the body and the face, the hairstyles, etc. Then we did the same for the customization of the cars.
After that long phase, we threw away all the test assets, produced a many-page document for artists, developed a tool to define and preview all this data, and implemented the system in the game engine with those effects in mind. It sounds short now, but it was a huge task in terms of man-months. The system was set in stone, and any change in the customization features, like adding an extra layer to the skin or a different morphing parameter, would have serious implications on the programming side.
With time, I realized that it is very important to give the control of what can be customized to the artists, so that they can define the whole construction process of the assets without requiring additional programming work. The only way to do this is with a data-driven process: by turning the construction process of the objects into data itself. A little bit like what happened with programmable shading on the GPU: instead of adding stages to the rendering pipeline, at some point the GPU designers realized it was much better to give us shaders.
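To make the data-driven idea concrete, here is a minimal sketch in C++ of a character "recipe" expressed purely as data that a generic builder walks over. The type names, step names and assets are made up for illustration and are not the actual format used in any of the systems described here.

// A minimal sketch of the data-driven idea: the construction process itself is
// data (a list of steps authored by artists), not hard-coded engine logic.
// All type names, step names and assets here are hypothetical.
#include <iostream>
#include <string>
#include <vector>

struct BuildStep {
    std::string op;      // e.g. "merge_mesh", "morph", "blend_layer"
    std::string asset;   // the asset the step refers to
    float       weight;  // artist- or player-controlled parameter
};

// The "character recipe" an artist authors in a tool and the game loads as data.
using Recipe = std::vector<BuildStep>;

void BuildCharacter(const Recipe& recipe) {
    for (const BuildStep& s : recipe) {
        // A real system would dispatch to mesh/texture operations here.
        std::cout << "apply " << s.op << " (" << s.asset
                  << ", w=" << s.weight << ")\n";
    }
}

int main() {
    // Adding a new layer or morph is a data change, not a code change.
    Recipe hero = {
        {"merge_mesh",  "body_base",  1.0f},
        {"morph",       "ear_size",   0.7f},
        {"blend_layer", "skin_tint",  0.4f},
        {"blend_layer", "scar_cheek", 1.0f},
    };
    BuildCharacter(hero);
}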
Levels of detail
In an MMO you may have many characters on-screen, but only a few will be close enough and require many pixels in the final rendered frame. The traditional approach to reduce the cost of complex scenes is to use several levels of detail (LOD) for an object and use cheaper ones when they are far away. Cheaper objects have simpler meshes and smaller textures. In the case of customizable characters it is necessary to build these LODs specifically.
Imagine the case of a necklace. In the highest LOD you probably want to model it with a mesh and a special metal material. In the next LOD it may be enough to model it as a morph of the mesh and a blended patch on the torso color and normal maps. In the last LOD you may want to ignore it completely. Having this support for LODs adds complexity to the customization system, but it can greatly improve the performance of the resulting data and the build process.
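As a hedged illustration of the necklace example, the sketch below describes the same accessory differently per LOD: real geometry at the highest LOD, a small morph plus a baked decal at the next one, and nothing at all at the lowest. The enum and all names are invented for this example.

// Illustrative only: per-LOD descriptions of one accessory.
#include <cstdio>
#include <optional>
#include <string>

enum class Lod { High = 0, Medium = 1, Low = 2 };

struct AccessoryBuild {
    std::optional<std::string> extraMesh;    // LOD0: real geometry + metal material
    std::optional<std::string> morphTarget;  // LOD1: slight morph of the torso mesh
    std::optional<std::string> textureDecal; // LOD1: blended patch on color/normal maps
    // LOD2: nothing at all - the accessory is simply skipped.
};

AccessoryBuild DescribeNecklace(Lod lod) {
    switch (lod) {
        case Lod::High:   return {"necklace_mesh", std::nullopt, std::nullopt};
        case Lod::Medium: return {std::nullopt, "necklace_bump", "necklace_decal"};
        case Lod::Low:    return {std::nullopt, std::nullopt, std::nullopt};
    }
    return {};
}

int main() {
    for (int i = 0; i < 3; ++i) {
        AccessoryBuild b = DescribeNecklace(static_cast<Lod>(i));
        std::printf("LOD%d: mesh=%s morph=%s decal=%s\n", i,
                    b.extraMesh ? b.extraMesh->c_str() : "-",
                    b.morphTarget ? b.morphTarget->c_str() : "-",
                    b.textureDecal ? b.textureDecal->c_str() : "-");
    }
}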
The real-time updates in the lobby
Imagine the case in the customization lobby when the player is changing the skin color of a complex character. The player is moving a slider handle and looking at the 3D model to see how it looks, expecting real-time visual feedback. What is going on under the hood?
In this case you are using the maximum-detail character and the highest resolution textures, maybe a couple of materials with 2048×2048 texture sets including color, normal and specular. Whatever way you decide to customize the color, it will involve some per-pixel operations like interpolations, soft-light or hard-light effects, etc. Moreover, you probably have additional layers on top of the skin, like moles, hair, tattoos, garments modeled as texture effects (like socks or tight t-shirts), etc., that you need to bake. This adds up to millions of arithmetic and memory operations that you need to do in a few milliseconds to sustain the frame rate.
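As a rough illustration of that arithmetic, the sketch below tints a single 2048×2048 RGB layer with one common soft-light approximation on the CPU. Even this single layer is roughly 12.6 million channel operations, before counting moles, tattoos or garment layers, which is why doing it per frame on the CPU is not an option. The formula is just one well-known soft-light variant, not necessarily the one any particular engine uses.

// Rough cost sketch: per-pixel soft-light tint of a 2048x2048 RGB layer.
#include <cstdio>
#include <vector>

// One common soft-light approximation (base and blend in [0,1]).
float SoftLight(float base, float blend) {
    return (1.0f - 2.0f * blend) * base * base + 2.0f * blend * base;
}

int main() {
    const int w = 2048, h = 2048;
    std::vector<float> skin(static_cast<size_t>(w) * h * 3, 0.6f); // placeholder albedo
    const float tint[3] = {0.8f, 0.5f, 0.4f};                      // player-chosen color

    // ~12.6 million channel operations for this single layer alone.
    for (int i = 0; i < w * h; ++i)
        for (int c = 0; c < 3; ++c)
            skin[static_cast<size_t>(i) * 3 + c] =
                SoftLight(skin[static_cast<size_t>(i) * 3 + c], tint[c]);

    std::printf("first pixel after tint: %.3f %.3f %.3f\n",
                skin[0], skin[1], skin[2]);
}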
What can you do? Well, the answer is obvious in the 21st century: use the GPU. It is not difficult to move these operations to a shader and just update its parameters while the player changes the skin color. Of course, you would only use this shader in the customization lobby, and you would bake everything when using the character in-game. But if you have complex customization it will not be possible to move all of it to the shader, so you will have to make several shaders depending on which parameters of your model are being edited. Moreover, you will have to specifically encode the process to generate the “partially baked” resources that your shaders will need, for every case.
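Below is a hedged sketch of that split: while a slider is being dragged, only a parameter on a specialised preview shader changes; the full bake happens once the player commits. The shader names and the parameter enum are illustrative, not a real API.

// Illustrative lobby flow: live preview shader while editing, bake on commit.
#include <cstdio>
#include <string>

enum class EditedParam { None, SkinColor, TattooLayer };

// Pick the specialised preview shader for whichever parameter subset is live.
std::string PreviewShaderFor(EditedParam p) {
    switch (p) {
        case EditedParam::SkinColor:   return "preview_skin_tint";
        case EditedParam::TattooLayer: return "preview_tattoo_blend";
        default:                       return "";
    }
}

void OnSliderChanged(EditedParam p, float value) {
    // Only a shader parameter changes; no texture rebake per frame.
    std::printf("update parameter on %s to %.2f (no rebake)\n",
                PreviewShaderFor(p).c_str(), value);
}

void OnCommit() {
    std::printf("bake all layers into final textures for in-game use\n");
}

int main() {
    OnSliderChanged(EditedParam::SkinColor, 0.35f);
    OnSliderChanged(EditedParam::SkinColor, 0.40f);
    OnCommit();
}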
This is what we did in some of our past systems, and it worked great. But any change in the customizable features of the object implied a lot of work to adjust all these processes and shaders, which made this approach incompatible with giving control to the artists, as discussed in a previous point.
The memory constraints in the in-game use case
When you are in-game, you are probably using all of your resources, trying to push the quality to the maximum. Suddenly requiring 2048×2048 pixels × 4 bytes × 3 images to apply an image effect between two images onto a third one, for a character you need to build in the background because it is joining the area, may be a problem. On a PC, requesting too much memory is not that terrible: you have a thick OS that will virtualize and swap in and out for you, but it will still be slow. On some consoles and smaller devices, though, you will crash if you exceed the available memory.
You have to split all the operations into smaller tasks and organize your code and data to use the minimum amount of memory. This can take some time and will slow down the object construction, but it is not especially difficult. However, again, it depends on what operations you require for each object, and when these change, you may need to review these tasks as well.
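The sketch below shows the basic shape of that splitting: an image effect applied strip by strip, so the peak working set is a few strips rather than three full 2048×2048 images. It is purely illustrative; a real build would also stream the source textures from disk and compress the output.

// Illustrative tiling: process 256-row strips instead of whole images.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int w = 2048, h = 2048, stripRows = 256;

    // Only the strips live in memory, not whole images.
    std::vector<float> srcA(w * stripRows), srcB(w * stripRows), dst(w * stripRows);

    for (int y0 = 0; y0 < h; y0 += stripRows) {
        const int rows = std::min(stripRows, h - y0);
        // In a real build these strips would be read from the source textures.
        std::fill(srcA.begin(), srcA.end(), 0.5f);
        std::fill(srcB.begin(), srcB.end(), 0.25f);

        for (int i = 0; i < w * rows; ++i)
            dst[i] = srcA[i] * srcB[i];   // stand-in for the actual image effect

        // ...write the 'dst' strip out before the next iteration reuses the buffers.
    }
    std::printf("peak working set per image: %zu bytes\n",
                sizeof(float) * w * stripRows);
}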
A possible approach
My latest attempt to resolve this requirement tension is to use a kind of virtual machine approach. The artists define a diagram with blocks connecting player-controlled parameters, meshes and textures to create an object hierarchy. This is compiled into a set of operations and constant data. This “program” can then be reorganized automatically for the several scenarios described in this post: to have maximum performance (trying to generate shader fragments automatically), to use the minimum memory, and to be optimized for the different cases where subsets of parameters are modified at run-time.
The virtual machine runs this program in different ways for different scenarios, and it has operations like texture packing, image layer effects with small blocks, etc. It can easily run tasks in parallel and it can automatically apply memory constraints to the program execution.
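Purely as an illustration of this idea (not the actual format of the system described here), the sketch below represents such a "program" as a flat list of operations plus a constant table, with a tiny interpreter that runs it. The same program could, in principle, be re-scheduled or split differently for the lobby and in-game cases.

// Illustrative op-list "program" and interpreter; all op names are made up.
#include <cstdio>
#include <string>
#include <vector>

enum class OpCode { LoadMesh, LoadImage, MergeMesh, BlendLayer, PackTexture };

struct Op {
    OpCode code;
    int    constantIndex; // which entry of the constant table it uses
};

struct Program {
    std::vector<std::string> constants; // asset names, parameters, etc.
    std::vector<Op>          ops;       // can be reordered/split per scenario
};

void Run(const Program& p) {
    static const char* kNames[] = {"LoadMesh", "LoadImage", "MergeMesh",
                                   "BlendLayer", "PackTexture"};
    for (const Op& op : p.ops) {
        std::printf("%s(%s)\n", kNames[static_cast<int>(op.code)],
                    p.constants[op.constantIndex].c_str());
    }
}

int main() {
    Program character = {
        {"body_base", "fur_albedo", "scar_layer", "atlas_0"},
        {{OpCode::LoadMesh, 0}, {OpCode::LoadImage, 1},
         {OpCode::BlendLayer, 2}, {OpCode::PackTexture, 3}},
    };
    Run(character); // the same program could be re-scheduled for lobby or in-game use
}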