Continuing on the topic of Unity WebGL optimization techniques and approaches, we come to Part 2 in the series. We will continue to examine optimization approaches and look at different techniques to make sure you have the information you need to get the most performance out of Unity and your project.
Chances are high that you have some audio files for sound effects in your project, and if you’re like most people, you pay little attention to these and let Unity apply its defaults on import. What’s not so common is knowing the right import settings. Having the wrong settings for your audio files can make or break a project. It’s an area that is often overlooked throughout the development process and can even be pushed aside when looking for more optimizations if you’re not in the know.
When you don’t pay attention to having the right compression and import settings, you can find yourself with some pretty serious performance implications.
When it comes to the way audio files are handled in Unity, there really is a best approach and practice.
Since this was an area we tackled right away and found some nice performance boosts in, we knew we needed to include it in these articles, and we’re certain you will find gains here too.
As is common with many projects, our project uses both long and short audio clips, and it has a number of them. We have everything from music to button clicks, weapon shots and hits, and footsteps.
The compression settings you use on your audio files play an important role within a Unity project. The wrong compression settings for your audio files can cause excessive memory usage. On top of that, they can also generate CPU spikes while your game is being played.
Taken from the Unity Docs:
“As a general rule of thumb, Compressed audio (or modules) are best for long files like background music or dialog, while PCM and ADPCM is better for short sound effects that contain some noise, as the artefacts of ADPCM are too apparent on smooth signals. You should tweak the amount of Compression using the compression slider. Start with high compression and gradually reduce the setting to the point where the loss of sound quality is perceptible. Then, increase it again slightly until the perceived loss of quality disappears.”
It’s important to note and quote from the Unity Docs: “Note that only one hardware audio stream can be decompressed at a time.” Since only one audio stream can be decompressed at a time, there is a way to overcome this: the “Decompress On Load“ setting.
Load Type: Decompress on Load
- RAM usage: potentially high (decompression of all assets at the same time)
- Disk I/O: nominal
- Recommended usage: frequently played short PCM or ADPCM sounds
Note: The sound is decompressed as soon as it’s loaded. Used for smaller compressed sounds to avoid the performance overhead of decompressing on the fly. This option barely requires any CPU to play the audio, however it requires the most RAM for simultaneous decompression.
Load Type: Compressed in memory
- RAM usage: medium
- Disk I/O: nominal
- Usage scenario: rarely / frequently played medium ADPCM/Vorbis sounds.
Note: The sound is stored in RAM and is decompressed on the fly when prompted. It does not use any additional RAM when decompressing.
Load Type: Streaming
- RAM usage: low
- Disk I/O: medium
- Usage scenario: music and ambient tracks
Note: The audio clip is stored on the user’s drive and is streamed in when prompted. It is not recommended to use more than a few audio clips with this load type.
The trade-off here is that while your performance will increase, so will your memory usage. Knowing this, it’s highly recommended that you take a “Use as Needed” approach.
More about audio files here.
Note: All audio clips are set by default on import to “Decompress On Load” along with the “Vorbis” audio compression format. This can seriously impact a game’s memory usage if you’re not careful.
Other Useful Audio Settings to Know
Common practice is to override the default sample rate and set it to 44.1 kHz. This is done because 44.1 kHz is the standard rate supported by sound cards on most devices.
Force to mono
This forces your audio to mono format. For Optimization, this setting should be enabled for all audio assets except music.
Note: Forcing an asset to mono will save space; however, the quality will suffer.
Normalize
This setting only becomes available when Force to mono is checked. It normalizes the volume, which can be altered by forcing the sound to mono.
Note: This setting should be enabled when using “Force to mono”.
Load in background
When enabled, the audio clip will be loaded in the background. This will not cause stalls on the main thread. By default all the sounds have finished loading when the scene starts playing.
Note: Disabled by default and should remain this way unless there’s an exception.
Preload audio data
This setting is enabled by default and should remain this way unless there’s an exception.
Note: If disabled, scripts have to call AudioClip.LoadAudioData() to load the data before the clip can be played.
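As a rough sketch of what that looks like in practice (the clip and source fields here are hypothetical placeholders you would assign in the Inspector):

```csharp
using UnityEngine;

public class AudioPreloader : MonoBehaviour
{
    public AudioClip introMusic; // assumption: assigned in the Inspector
    public AudioSource source;   // assumption: assigned in the Inspector

    void Start()
    {
        // Kicks off loading; with "Load In Background" enabled this
        // will not stall the main thread.
        introMusic.LoadAudioData();
    }

    public void PlayIntro()
    {
        // Only play once the data has finished loading.
        if (introMusic.loadState == AudioDataLoadState.Loaded)
        {
            source.PlayOneShot(introMusic);
        }
    }
}
```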
Summary of best practices adopted by developers concerning audio clips:
- General audio files – Compression format: “Vorbis/MP3”.
Why? This is the default compression and results in smaller files which in turn have lower quality when compared to PCM audio. However, the amount of compression is configurable via the Quality slider. This format is best for medium length sound effects and music.
- Short audio files – Compression Format: “ADPCM”.
Why? This is best suited for audio files that contain a fair bit of noise and need to be played in large quantities, such as footsteps, impacts, and weapons. The compression ratio is 3.5 times smaller than PCM, yet the CPU usage is much lower than with the MP3/Vorbis formats, which makes it the best choice for these types of sounds.
- Long/Looping audio files – “Compressed In Memory”.
Why? Keeping sounds compressed in memory and then decompressing them while playing adds a slight performance overhead (especially for Ogg/Vorbis compressed files), so it is best suited for bigger files where decompressing on load would use a high amount of memory.
- Music – “Streaming”.
Why? This uses a minimal amount of memory; memory is only used to buffer the compressed audio data, which is incrementally read from the disk and decoded on the fly.
- Small Audio Clips – “Decompress on load”.
Why? Audio files will be decompressed as soon as they are loaded, avoiding the CPU spikes caused by decompressing them on the fly. This option is best suited for smaller compressed sounds.
- Uncompressed audio clips use less CPU on playback, so this setting is best used for small clips since they will not use a lot of memory.
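Rather than clicking through every clip by hand, these import settings can also be applied from an editor script. The snippet below is a minimal sketch using Unity’s AudioImporter API; the asset path is a hypothetical example, and you would normally loop over your own folders:

```csharp
// Place in an Editor folder.
using UnityEditor;
using UnityEngine;

public static class AudioImportExample
{
    [MenuItem("Tools/Apply SFX Audio Settings")]
    static void ApplySfxSettings()
    {
        // Hypothetical asset path for illustration only.
        string path = "Assets/Audio/SFX/footstep.wav";
        var importer = (AudioImporter)AssetImporter.GetAtPath(path);
        if (importer == null) return;

        importer.forceToMono = true; // mono for everything except music

        AudioImporterSampleSettings settings = importer.defaultSampleSettings;
        settings.loadType = AudioClipLoadType.DecompressOnLoad; // short clip
        settings.compressionFormat = AudioCompressionFormat.ADPCM; // noisy SFX
        importer.defaultSampleSettings = settings;

        importer.SaveAndReimport();
    }
}
```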
It is pretty much guaranteed that you will be using textures in your project. Most times, importing them and leaving the defaults seems to be the common way to go.
When you add textures to your project, they go through an import process in which Unity converts each texture into a suitable format based on your current texture import settings.
Once that’s done, you then begin the steps of setting up your textures for compression. This is a crucial step when trying to achieve smaller build sizes, which as you know can impact projects that specifically target mobile or WebGL.
There are even some cases where, if you fail to set the proper texture type, Unity will warn you; for example, using a texture as a cursor without setting its type to Cursor. Worse yet, other times you will simply lose precious performance without any warning whatsoever.
Does this mean you should only use low resolution textures? Not necessarily, but when your project uses high resolution textures, they can easily become a bottleneck and generally pose a nightmare for older and slower hardware.
Knowing the right information for your target platform and how you deal with your textures can have a major impact on performance.
When you properly configure this, you are effectively telling Unity what the texture will be used for in the project. Since you have the ability to select different options, you can adjust the texture’s internal settings to be as optimal as possible.
Generally, types are defined by the texture’s purpose. For example, the texture type “Texture” is a simple diffuse texture used in 3D space for 3D games and is generally used the most.
The “Default” setting is the most commonly used setting for Textures and applied by default on import. By using the “Default” setting, it will provide access to most of the properties for Texture importing.
It’s always best to set the texture setting to how it will be used, for example, a texture that will be used as a cursor should be marked as “Cursor” and when used on the UI Canvas, “Sprite (2D and UI)”. As we mentioned earlier, sometimes Unity will warn you if you choose the wrong settings, other times it eats away at your performance.
The current texture types in Unity are listed in the Texture Type dropdown of the import settings.
Enable or Disable MipMaps
What are “MipMaps”? The easiest way to understand what this does is to think of “MipMaps” like a LOD for textures. (LOD: Level of Detail.)
When “MipMaps” are enabled for textures, it will assist in reducing the texture memory overhead.
How does it do this?
Consider this: when an object being rendered is a fair distance away from the camera, it is obviously not necessary to use a higher resolution texture for that object, since at such a distance the texture detail will not be seen. You can see this effect with your own eyes in real life: the farther away from an object you are, the blurrier it appears and the harder it is to see the details. On the contrary, the closer you get to the object, the more detailed it becomes.
When an object is at a distance and “MipMaps” are enabled, Unity can render the object with a lower resolution version of the texture, since it will only be covered by a few pixels and any fine details would not be seen anyway. Essentially, enabling “MipMaps” lets Unity use progressively smaller, pre-blurred versions of a texture, which reduces memory bandwidth and improves rendering performance.
The Unity Editor enables “MipMaps” by default on imported textures. That being said, “MipMaps” should be enabled unless your camera views the texture at a fixed distance at all times.
For example, UI sprites do not need “MipMaps” enabled, since the camera viewing the UI Canvas does not change distance, and even if it moves, it likely never moves very far from the UI Canvas.
Something often overlooked when dealing with artwork assets is that the usual compressed formats like “PNG” and “JPG”, which most art assets come in and which are commonly used everywhere, not just in Unity projects, cannot be directly decoded by the GPU.
What is “Texture Compression“? It’s commonly defined as:
“Texture compression is a specialized form of image compression designed for storing texture maps in 3D computer graphics rendering systems. Unlike conventional image compression algorithms, texture compression algorithms are optimized for random access.”
What does this mean?
First of all, they will actually need to be decompressed before they can be copied to the GPU memory.
Naturally, decompressing these types of textures will take more time and in turn this can lead to increased overhead.
Thankfully, Unity has this covered by offering us a variety of compression types.
Texture Compression Types
Unity provides many types of “Texture Compression”, for example: “ETC1”, “ASTC”, “DXT1”, “PVRTC”, and the list goes on.
The process of converting commonly compressed formats such as “PNG” and “JPG” to hardware accelerated formats like “ETC1” and “ASTC” is known as “Texture Compression”.
When working with formats such as “PNG” or “JPG”, you can work around the GPU decoding issue by converting to these hardware accelerated formats.
Note: While these hardware accelerated formats are lossy, they have the advantage of being designed for the GPU, which means they do not need to be decompressed before being copied to GPU memory; this can lead to a clear boost in performance.
Armed with this knowledge, we now know that by using the correct hardware accelerated formats for the artwork in our projects, we can improve the way a device handles our project. This translates to better performance, less battery usage (where applicable), and less heat generated by the device. It also means your project will require less bandwidth, which reduces its overall overhead.
How to Choose Texture Compression Type
The answer to this question falls back to the answer to a previous question: “Who is our project’s target audience?“
Knowing who will be using your project and what devices you’re targeting will go a long way in providing the right information for the best approaches, not only for “Texture Compression” but for the overall optimization of your project.
Standard Compression Formats
“ETC1” Supported on all Android devices with OpenGL ES 2.0 and above. Does not support an alpha channel.
“ETC2” Requires OpenGL ES 3.0 and above.
“ASTC” Higher quality than “ETC1” and “ETC2”. Supported with the Android Extension Pack.
Proprietary Compression Formats
“ATC” Available with Adreno GPU.
“PVRTC” Available with a PowerVR GPU.
“DXT1” S3 DXT1 texture compression. Supported on devices running the Nvidia Tegra platform.
“S3TC” S3 texture compression, nonspecific DXT variant. Supported on devices running the Nvidia Tegra platform.
For a full list of formats available check the Unity Docs.
Texture Compression ‘Gotcha’
After taking all this information in, you’re probably thinking to yourself that this sounds great. You know your target audience and what hardware they will be using, so it’s easy to pick the right format, right?
Not quite. The problem, especially with Android devices, is that not all of them are created equal. The hardware in these devices varies greatly.
What does this all mean?
Taken from the Unity Docs:
“By default, Unity uses ETC1 for compressed RGB textures and ETC2 for compressed RGBA textures. If ETC2 is not supported by an Android device, the texture is decompressed at run time. This has an impact on memory usage, and also affects rendering speed”
Essentially, what this means is that if, for example, you choose the “ETC2” compression format and it runs on a device whose GPU does not support that format, Unity will decompress the textures into RGBA 32-bit and store them in memory alongside the compressed versions. If this happens, you will notice a loss in performance, since Unity is decompressing textures during runtime, and you will also lose additional memory, since you are now storing the textures twice.
This can definitely lead to excessive overhead and have an impact on rendering performance.
Now you’re probably thinking, “We will just use ‘ETC1’ then. Problem solved.”
While this certainly sounds like a good and easy solution, the problem lies in the fact that even though the ETC1 format is supported by all GPUs, it does not support an alpha channel.
What can we do?
Unity has provided two possible solutions:
- It is possible to create separate Android archives (.apk) for each of the DXT/PVRTC/ATC formats and let the Android Market’s filtering system select the correct archive for different devices for you. Translation: lots of work.
- Convert to RGBA 16-bit which seems to be the best trade-off between size, quality and rendering speed where use of the alpha channel is required. Best results are found when this format is used with a combination of ETC1 applied instead when the alpha channel is not required.
Power of 2
This is essentially a set of simple criteria ensuring textures conform to regulated sizes and dimensions; typically:
- width/height that is a power of two, i.e. can be doubled up or divided down by “2” repeatedly.
Basically, any image dimension that is “8”, “16”, “32”, “64”, “128”, “256”, “512”, “1024”, “2048”, etc. These textures are then optimized for fast loading and processing into memory. This covers texture dimensions such as “64 x 64”, “128 x 64”, “2048 x 512”, etc.
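If you want to check this from code, Unity’s Mathf utilities already cover the power-of-two math. A small helper sketch (the class and method names here are our own, purely illustrative):

```csharp
using UnityEngine;

public static class PotCheck
{
    // True when both dimensions are powers of two (e.g. 512 x 1024).
    public static bool IsPot(Texture2D tex)
    {
        return Mathf.IsPowerOfTwo(tex.width) && Mathf.IsPowerOfTwo(tex.height);
    }

    // Rounds a size up to the next power of two, e.g. 500 becomes 512.
    public static int NextPot(int size)
    {
        return Mathf.NextPowerOfTwo(size);
    }
}
```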
While it is possible to use other “Non Power of Two” (NPOT) texture sizes within Unity there are some things to know.
When you’re using “Non Power of Two” (NPOT) textures, they are generally utilized for GUI textures; however, if one is used on anything else, Unity will convert it to an uncompressed RGBA 32-bit format. This causes the texture to require more space, with no compensating gain in rendering performance.
There are techniques, such as packing multiple images into a single power-of-two texture (atlasing), that can alleviate some of the storage waste, since NPOT textures cannot be compressed.
What this means is that NPOT textures will take up more video memory (compared to PVRTC (iOS)/DXT (desktop) compressed textures), be slower to load, and in turn be slower to render. In general, you’ll use non power of two sizes only for GUI purposes.
Note: “Non power of Two” textures can actually be scaled up at import using the Non Power of 2 option in the advanced texture type in the import settings. Unity will then scale texture contents as requested, and they will behave just like any other texture in your project. This means they can still be compressed and will be very fast to load.
When you ensure the texture dimensions are a “Power of Two”, the graphics pipeline can take advantage of optimizations related to working with powers of two. Power of 2 dimensions are also typically required by the game engine for “MipMaps” generation.
Note: If the texture dimensions are not a power of 2, no “MipMaps” processing will take place.
Consider this: on older devices, from the days before any sort of dedicated GPU, clever optimizing compilers made it faster to divide and multiply by powers of two. Working in powers of two also simplified operations within the graphics pipeline, such as the computation and usage of mipmaps (a number that is a power of two will always divide evenly in half, which means you don’t have to deal with scenarios where you must round your mipmap dimensions up or down).
As we know, compression settings can be hardware dependent, but so can “Power of 2” sizes and the algorithm used (such as PVRTC on iOS). “Non Power of Two” (NPOT) texture capability can vary, say, from the PowerVR chipset in iPhones to something else entirely on an Android device.
When your texture uses a compressed format, you will notice that it is automatically scaled to square “Power of 2” dimensions. The Unity Editor does this conversion because compression generally only works well with “Power of 2” sizes; if the dimensions are not a power of two, the stored texture will essentially be double the size.
Summary of Texture Compression Points
- Size compression depends not only on the format but also the type of texture.
- Lowering the size of the texture does not directly translate to performance optimization.
- When “MipMaps” are enabled, the size of the texture will be about a third larger.
- MipMaps should not be enabled for UI Textures and sprites.
- Most compression formats require texture dimensions to be a power of 2 for compression to work.
Armed with this information, you will be able to make the best decisions regarding textures for your project and get the best performance out of texture compression.
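The points above can also be enforced automatically at import time with an AssetPostprocessor, so nobody on the team forgets them. The sketch below is one possible setup; the folder path is an assumption about your project layout, not a Unity convention:

```csharp
// Place in an Editor folder.
using UnityEditor;

public class TextureImportRules : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        var importer = (TextureImporter)assetImporter;

        // Hypothetical project layout: UI art lives under this folder.
        if (assetPath.StartsWith("Assets/Textures/UI"))
        {
            // UI sprites: sprite type, no mipmaps.
            importer.textureType = TextureImporterType.Sprite;
            importer.mipmapEnabled = false;
        }
        else
        {
            // World textures: mipmaps on, and scale NPOT sizes to the
            // nearest power of two so compression can be applied.
            importer.mipmapEnabled = true;
            importer.npotScale = TextureImporterNPOTScale.ToNearest;
        }
    }
}
```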
Optimizing the Physics in your project is an ongoing debate. A common saying is “Good Physics require a high-end, fast CPU”.
While this certainly would help it is not necessarily a hard fact which is why it is debated.
When you understand how physics will behave and impact your project along with better knowledge of how Unity handles physics, you can use this to squeeze out more performance from your project.
As per the Unity Docs (describing the Maximum Allowed Timestep): “A framerate-independent interval that caps the worst case scenario when frame-rate is low. Physics calculations and FixedUpdate() events will not be performed for longer time than specified.”
When you have a lot of rigidbodies along with colliders, you can find yourself in a bottleneck with some major loss of performance.
An easy way to try to solve this is by increasing the “Fixed Timestep”. The default value is 0.02 (seconds), which means a physics update is executed every 20 ms. When you increase the timestep, you are effectively decreasing the number of times physics simulations are updated.
This is done by going to: Edit >> Project Settings >> Time
However, we highly recommend you take control of this and make changes via scripting when necessary. We approached this by taking control of the timestep and changing it on the fly from code as required, which translated to a decent performance boost.
The reason this works is that Physics.Simulate, which can be called several times per frame, takes longer overall when the timestep is small. Keep in mind that it can take some trial and error to find what works best.
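A minimal sketch of adjusting the timestep from code, in the spirit of what we did (the thresholds and values here are illustrative, not tuned numbers; in practice you would add some hysteresis so the value does not flip every frame):

```csharp
using UnityEngine;

public class PhysicsTimestepTuner : MonoBehaviour
{
    void Update()
    {
        // When the frame rate drops, widen the fixed timestep so physics
        // updates run less often; tighten it again when performance recovers.
        float fps = 1f / Time.unscaledDeltaTime;
        Time.fixedDeltaTime = fps < 30f ? 0.04f : 0.02f;
    }
}
```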
Scaling Objects will Impact Physics
If you have purchased models from the Unity Asset Store or other markets, you have likely come across assets that are not the correct size, and a common “gotcha” that people find later on is that scaling objects has some unintended side effects.
Changing the scale of objects that interact with physics can cause unintended issues, such as collisions that are off or weird, and odd reactions to gravity.
For example: consider an object that is not impeded by any resistance and is falling to the ground. If everything else around the object is scaled up, it will look like the object is falling in slow motion.
That being said, it is always highly recommended to keep the scale at 1,1,1 to avoid issues.
Note: Scaled meshes require the GPU re-normalization state be set (otherwise your lighting can be unpredictable), this is why scaled and non-scaled meshes cannot be batched together.
Ideally, the use of mesh colliders should be avoided as much as possible, since they require considerably higher processing overhead than collisions involving primitive colliders such as sphere, box, capsule, etc. That is why it is best to use mesh colliders sparingly.
As per the Unity Docs:
“Be aware of any MeshColliders you add to the game. It is very easy to simply use the visual mesh for collision, but that can cause significant performance degradation, and not always in obvious ways.”
A simple alternative is to group primitive colliders within the object. This results in fewer calculations for the object, since primitive colliders are handled much faster.
As per the Unity Docs:
“There are some limitations when using the Mesh Collider. Non-convex Mesh Colliders are only supported on GameObjects without a rigidbody. If you want to use a Mesh Collider on a rigidbody, it needs to be marked as Convex.”
When using 3rd party assets for your project, you will find a lot of 3D designers tend to put mesh colliders on everything. It’s very easy to do, and it accomplishes enough for them to demo their work; it simply provides a fast way to throw in a First Person Controller and walk around. While this may be fine for demo scenes, it is not practical for a project.
An easy but tedious workaround is using primitive colliders like boxes and spheres. When you’re dealing with a complex mesh, it is always better to pull it into 3D modeling software and then break it apart into smaller meshes. When you can’t do that, you can simply apply primitive colliders to replace the mesh colliders.
Note: Whenever you are constantly modifying a mesh, ensure you call MarkDynamic() on the mesh. This tells Unity the mesh will be updated frequently, so it can optimize how the mesh buffers are handled.
On the topic of mesh colliders, it’s good to know they will not work with animated models, since the colliders cannot follow the animations. This again can be overcome by using primitive colliders, which operate just fine under this circumstance.
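As one possible way to do the swap from script (the class and method names are our own, and a box is only a rough approximation of the mesh shape, so treat this as a sketch):

```csharp
using UnityEngine;

public static class ColliderSwap
{
    public static void ReplaceWithBox(GameObject go)
    {
        var meshCollider = go.GetComponent<MeshCollider>();
        if (meshCollider == null) return;

        // Use DestroyImmediate instead when running in editor scripts.
        Object.Destroy(meshCollider);

        // A freshly added BoxCollider sizes itself from the mesh bounds,
        // giving a cheap approximation of the original collision shape.
        go.AddComponent<BoxCollider>();
    }
}
```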
As always, it’s best to take in this information as a base point since results will vary from mesh to mesh, project to project.
This impacted us all the more since we use a lot of model assets from the Unity Asset Store, where we often found everything had been given a mesh collider like there was no tomorrow. By combining meshes we killed our high draw calls and increased the frame rate significantly.
MeshColliders & Physics
Whenever a moving object is near a mesh collider, it will cause collision checks against potentially every triangular face of the mesh.
That being said, you can see how a detailed mesh with a lot of players, for instance, will get expensive in the performance department quickly.
This leads to another expensive performance bill when a mesh collider is moving near another mesh collider: you will pay this cost combinatorially.
Note: Raycasting against mesh colliders will cause the physics engine to work harder each frame.
These expensive performance costs will hit during the physics update step, which is part of its collision resolution process.
Consider that the physics engine will not know where to position objects for the next frame until it has worked out which objects need to bounce or stop due to these collisions. In extreme cases, this can make the physics update process take longer and delay the rendering of the next frame, impacting the game’s frames per second.
A summary of best approaches for MeshColliders:
- Minimize the number of faces of meshes that are used for collision.
- Use primitive collider types for dynamic objects, or terrain colliders for environments.
- When possible, make mesh colliders convex, or break them into convex pieces. (convex meshcolliders can use faster algorithms than an exhaustive face-by-face check, though the complexity will still increase for each face in the mesh)
- Avoid moving big mesh colliders, and when possible make everything static.
- Objects that should never move must remain static in the game; enable Static batching when possible.
- Use Dynamic for movable objects.
- Movable objects that change in any way (size, position, orientation, disable/enable) should be made kinematic with Rigidbodies.
- Always combine meshes when you can. This will dramatically lower the draw calls.
- Each Flare in your scene causes Unity to perform Raycast from the Camera position, so only enable the Flares that should be active at any given time.
- Use ParticleSystems for rendering sprites and billboards (e.g. grass).
For more in depth information see this article.
This concludes Part 2 in our series on optimization techniques and approaches. We will soon follow up with Part 3.
Thanks again for your interest, and as always, if anything should be changed or reworded, send us a comment or an email.