There was an error in the other branch of the refactored function where the size of the array was not multiplied by the size of a float before being checked against the buffer size. This was only an error in the error checking itself, not in the functionality. There was also an error where the proper notification was not emitted whenever the buffer for the multimesh is recreated, which is needed to invalidate any previous references the renderer might have created to it. This fixes CPU Particles getting corrupted when they're created without emission being enabled.
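A minimal sketch of the corrected check (the names and surrounding code are illustrative, not the actual Godot code); the point is that the element count has to be converted to bytes before it is compared against the buffer size in bytes:

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: the check must compare bytes against bytes.
bool fits_in_buffer(const std::vector<float> &data, size_t buffer_size_bytes) {
	const size_t data_size_bytes = data.size() * sizeof(float); // previously just data.size()
	return data_size_bytes <= buffer_size_bytes;
}
```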
Direct buffer copies are required to perform certain operations more efficiently, as the only current alternative is to download the buffer to the CPU and upload it again. As the first use case, the new function is used when enabling motion vectors on multimeshes.
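For illustration, a hedged sketch of the difference (the RenderingDevice pointer rd, the buffer RIDs, and size_in_bytes are assumed here, and the new function's signature is assumed to be buffer_copy(src, dst, src_offset, dst_offset, size)):

```cpp
// Old path: round-trip the data through the CPU.
Vector<uint8_t> data = rd->buffer_get_data(src_buffer);
rd->buffer_update(dst_buffer, 0, data.size(), data.ptr());

// New path: copy directly on the GPU, no readback needed.
rd->buffer_copy(src_buffer, dst_buffer, 0, 0, size_in_bytes);
```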
Fixes #67287. There was a subtle error where, due to how enabling motion vectors for multimeshes was handled, only the first instance would have a valid transforms buffer and the rest would point to an invalid buffer. This change moves the responsibility for enabling motion vectors to the points where changes actually happen, either to an individual 3D transform or to the entire buffer. It also avoids an unnecessary download of the existing buffer that would get overwritten by the current cache if it exists. Another fix handles the case where the buffer was not set, in which enabling motion vectors would not cause the buffer to be recreated correctly.
This is needed to allow 2D to fully make use of 3D effects (e.g. glow), and can be used to substantially improve the quality of 2D rendering at the cost of performance.
Additionally, the 2D rendering pipeline is done in linear space (we skip linear_to_srgb conversion in 3D tonemapping) so the entire Viewport can be kept linear.
This is necessary for proper HDR screen support in the future.
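For reference, the conversion that is now skipped at the end of 3D tonemapping is the standard sRGB transfer function; a plain C++ sketch (not the actual shader code):

```cpp
#include <cmath>

// Standard linear -> sRGB transfer function, applied per channel.
// Skipping this keeps the 3D output (and therefore the whole Viewport) in linear space.
float linear_to_srgb(float c) {
	return (c <= 0.0031308f) ? 12.92f * c : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```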
The code wanted to divide and round up:
- 0 / 64 = 0
- 63 / 64 = 1
- 64 / 64 = 1
- 65 / 64 = 2
However, when the dividend was exactly 0, it would underflow and produce 67108864 instead.
This caused TDRs on empty scenes or extremely slow performance.
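A standalone sketch of the failure mode and a safe formulation (64 stands in for the actual group size, and the `(x - 1) / 64 + 1` form is an assumption about the original expression, since it is the usual way to get exactly this underflow):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
	const uint32_t group_size = 64;
	const uint32_t values[] = { 0, 63, 64, 65 };
	for (uint32_t x : values) {
		// Buggy round-up: when x == 0, the subtraction underflows to 0xFFFFFFFF,
		// so the result is 0xFFFFFFFF / 64 + 1 == 67108864.
		uint32_t buggy = (x - 1) / group_size + 1;
		// Safe round-up: add before dividing, so 0 stays 0.
		uint32_t safe = (x + group_size - 1) / group_size;
		printf("x=%u buggy=%u safe=%u\n", x, buggy, safe);
	}
	return 0;
}
```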
Fixes #80286
See issue #69528. When building with precision=double, the TAA pass would break because the motion vectors were corrupted. The cause was that the camera origin in the UBO was corrupted for the previous frame: the origin was only being split correctly for the current frame's block, not for the previous frame's block (the split is what effectively supports double-precision floats in the shader).
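For context, a sketch of the kind of split involved (the helper name is illustrative, not Godot's actual code): the double is stored as a high float plus a low float carrying the residual, and both halves have to be written for the current and the previous frame blocks.

```cpp
// Split a double into two floats so the shader can reconstruct it as hi + lo,
// recovering precision that a single float cannot hold.
void split_double(double value, float &hi, float &lo) {
	hi = (float)value;        // High part: plain truncation to float.
	lo = (float)(value - hi); // Low part: the residual lost by the truncation above.
}
```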
The first time a shader is compiled Godot performs the following:
```cpp
for (uint32_t i = 0; i < SHADER_STAGE_MAX; i++) {
	if (spirv_data.push_constant_stages_mask.has_flag((ShaderStage)(1 << i))) {
		binary_data.push_constant_vk_stages_mask |= shader_stage_masks[i];
	}
}
```
However, binary_data.push_constant_vk_stages_mask is never initialized to 0 and thus contains garbage data OR'ed with the good data.
This value is used by push constants (and many other things), so it can be a big deal.
Fortunately, because the relevant flags are always guaranteed to be set (but not guaranteed to be unset), the damage is restricted to:
1. Performance (unnecessary flushing and excessive barriers).
2. Overwriting push descriptors already set (this would be serious, but it doesn't seem to be an issue).
3. Driver implementations going crazy when they see bits set they don't expect (unknown if this is an issue).
This uninitialized value is later saved into the binary cache.
Valgrind is able to detect this bug on the first run, but not on subsequent ones, because the data then comes from a file.
cache_file_version has been bumped to force a rebuild of all cached shaders, because the ones generated so far are compromised.
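One way to address the root cause, sketched against the snippet above (whether the real fix zero-initializes the field at its declaration or right before the loop is an implementation detail):

```cpp
binary_data.push_constant_vk_stages_mask = 0; // Start from a known state instead of garbage.
for (uint32_t i = 0; i < SHADER_STAGE_MAX; i++) {
	if (spirv_data.push_constant_stages_mask.has_flag((ShaderStage)(1 << i))) {
		binary_data.push_constant_vk_stages_mask |= shader_stage_masks[i];
	}
}
```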
discard was being included in all shaders set to depth pass opaque, which is the majority of shaders. Instead, it should only be used with alpha prepass materials.
This allows us to specify a subset of variants to compile at load time and to conditionally compile other variants later. This works seamlessly with shader caching. It is needed to ensure that users only pay the cost for the variants they actually use.
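A purely illustrative sketch of the idea; the VariantGroup names, the ShaderVariant struct, and compile_group are hypothetical and not the actual ShaderRD API:

```cpp
#include <vector>

// Hypothetical sketch: variants are assigned to groups, only the default group is
// compiled at load time, and other groups are compiled on demand later.
enum VariantGroup {
	GROUP_BASE,     // Compiled up front; always needed.
	GROUP_ADVANCED, // Compiled only once a feature that needs it is enabled.
};

struct ShaderVariant {
	const char *defines;
	VariantGroup group;
	bool compiled = false;
};

void compile_group(std::vector<ShaderVariant> &variants, VariantGroup group) {
	for (ShaderVariant &v : variants) {
		if (v.group == group && !v.compiled) {
			// compile_shader(v.defines); // actual compilation elided
			v.compiled = true;
		}
	}
}
```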
Commit 2c000cb72f changed the interpolation limits from (0.04, 1.0) to (0.04, 0.37). This is incorrect, as we want an F0 of 0.04 for dielectrics (materials with a metalness of 0.0) and an F0 of 1.0 for metals.
The Schlick approximation uses an F0 of 0.04 for all dielectrics, and that is good enough.
Having the upper limit lower than 1.0 leads to an incorrect application of the Fresnel effect for metals and to bugs like #79549.
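Illustrative only (plain C++ rather than the actual shader code): with the correct limits, F0 runs from 0.04 at metallic = 0.0 up to 1.0 at metallic = 1.0.

```cpp
#include <algorithm>

// Interpolate F0 between the dielectric and metallic endpoints.
// The upper limit must be 1.0, not 0.37, or metals lose most of their Fresnel response.
float f0_limit(float metallic) {
	const float dielectric_f0 = 0.04f;
	const float metal_f0 = 1.0f;
	return dielectric_f0 + (metal_f0 - dielectric_f0) * std::clamp(metallic, 0.0f, 1.0f);
}
```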
Explicitly clear the separate specular buffer when the background mode is Canvas and screen-space effects (and thus a separate specular buffer) are used.