nanosleep returns 0 or -1, not the error code.
If the call is interrupted, the error code EINTR is placed in errno, in which case nanosleep can safely be called again with the remaining time.
This is required so that nanosleep continues if the calling thread is interrupted by a signal.
See the nanosleep(2) man page for additional details.
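For illustration, a minimal standalone sketch of the retry pattern described above (not the engine's exact code):
```
#include <errno.h>
#include <time.h>

// Sleep for the full requested duration, resuming after signal interruptions.
// nanosleep returns -1 and sets errno to EINTR when interrupted; 'rem' then
// holds the unslept remainder, so the call can simply be repeated with it.
void sleep_full(struct timespec req) {
	struct timespec rem = {};
	while (nanosleep(&req, &rem) == -1 && errno == EINTR) {
		req = rem;
	}
}
```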
(cherry picked from commit 1107c7f327)
Happy new year to the wonderful Godot community!
2020 has been a tough year for most of us personally, but a good year for
Godot development nonetheless with a huge amount of work done towards Godot
4.0 and great improvements backported to the long-lived 3.2 branch.
We've had close to 400 contributors to engine code this year, authoring nearly
7,000 commits! (And that's only for the `master` branch and for the engine code,
there's a lot more when counting docs, demos and other first-party repos.)
Here's to a great year 2021 for all Godot users 🎆
(cherry picked from commit b5334d14f7)
Ninepatch code has a check to prevent use of zero-sized textures. This didn't deal properly with animated textures, which use a proxy (a link to another texture).
This PR uses a generalised method of getting textures, with built-in support for proxy textures and protection against infinite loops.
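A minimal sketch of that kind of proxy resolution; the names and structure here are assumptions, not the actual renderer code:
```
// Illustrative sketch only -- names are assumptions, not the engine's code.
struct Texture {
	Texture *proxy = nullptr; // e.g. an AnimatedTexture links to the current frame
	int width = 0;
	int height = 0;
};

// Follow a texture's proxy chain to the texture that will actually be sampled,
// bailing out if a circular chain is detected.
Texture *resolve_proxy(Texture *p_texture) {
	const int MAX_PROXY_CHAIN = 32; // protection against infinite loops
	int depth = 0;
	while (p_texture && p_texture->proxy) {
		p_texture = p_texture->proxy;
		if (++depth > MAX_PROXY_CHAIN) {
			return nullptr; // broken or circular proxy chain
		}
	}
	return p_texture;
}
```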
These were only put in for the betas, in order to test hypotheses for stalling on Macs. It seems that most of the problems in the Mac editor have been solved by fixing the excessive redraw_requests.
As a result no one has reported any results from these options, but in the future we will be able to ask users to try the beta versions, so there is no need to include them in the stable release. Indeed, they are only likely to cause confusion.
The root cause of the issue is that OpenGL ES 2 does not support the `textureCubeLod` function.
There are (optional) extensions to support this, but they don't appear to be exposed with the ES2 renderer (even though the hardware needed to support LOD features is certainly available).
The existing shim in `drivers/gles2/shaders/cubemap_filter.glsl` just creates a macro:
```
#define textureCubeLod(img, coord, lod) textureCube(img, coord)
```
But the third parameter of `textureCube` is actually a mip bias, not an absolute mip level.
(And it doesn't seem to work regardless.)
In this specific case, the `cubemap_filter` should only sample from the first level of the "source" panorama cubemap.
In lieu of a method to force a lod level of zero, I've chosen to comment out the switchover from a 2D equirectangular panorama to the cubemap version of the same image, therefore always sampling roughness values from the 2D equirectangular panorama.
This may cause additional artifacts or issues across the seam, but at least it prevents the glaringly obvious black areas.
---
This same issue (no fragment texture LOD support) has rather large repercussions elsewhere too; it means materials with larger cubemap density (i.e. planar or distant objects) will be far rougher than expected.
Since GLES 3 appears to properly support fragment `texture*Lod` functions, switching to the GLES 3 backend would solve this problem.
---
Root cause discovered with help from @KaadmY.
Image::resize_to_po2() now takes an optional p_interpolation parameter
that it passes directly to resize() with default value INTERPOLATE_BILINEAR.
GLES2: call resize_to_po2() with interpolate argument
Call resize_to_po2() in GLES2 rasterizer storage with either
INTERPOLATE_BILINEAR or INTERPOLATE_NEAREST depending on TEXTURE_FLAG_FILTER.
This avoids filtering issues with non power of two pixel art textures.
See #44379
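As a rough standalone sketch of the selection logic (the flag value and names here are assumptions for illustration; the real call happens in the GLES2 rasterizer storage):
```
#include <cstdint>

// Hypothetical flag value and names, for illustration only.
enum Interpolation { INTERPOLATE_NEAREST, INTERPOLATE_BILINEAR };
const uint32_t TEXTURE_FLAG_FILTER = 1; // assumed bit for the filter flag

// Filtered textures get a smooth (bilinear) resize to the next power of two;
// unfiltered pixel-art textures keep hard edges with nearest-neighbour resizing.
Interpolation pick_po2_interpolation(uint32_t p_flags) {
	return (p_flags & TEXTURE_FLAG_FILTER) ? INTERPOLATE_BILINEAR : INTERPOLATE_NEAREST;
}
```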
Although the minimum size of ninepatches is set to the sum of the margins in normal use (through GDScript etc.), it turns out that it is possible to programmatically create ninepatches that are smaller than this minimum - in particular, a zero size is used in sliders to avoid drawing items.
This PR deals with zero-sized ninepatches by not drawing anything, and adds some basic protection for ninepatches smaller than the margins. Whether these occur in the wild is not clear, but the protection is included for completeness.
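A simplified sketch of the kind of protection described, with hypothetical names:
```
// Hypothetical names; simplified version of the protection described above.
struct RectSize {
	float width = 0;
	float height = 0;
};

// Sliders create zero-sized ninepatches to mean "draw nothing".
bool ninepatch_should_draw(const RectSize &p_rect) {
	return p_rect.width > 0.0f && p_rect.height > 0.0f;
}

// Basic protection for rects smaller than the combined margins: scale the
// margins down so they still fit inside the rect.
void ninepatch_clamp_margins(const RectSize &p_rect, float &left, float &right, float &top, float &bottom) {
	if (left + right > p_rect.width && left + right > 0.0f) {
		float s = p_rect.width / (left + right);
		left *= s;
		right *= s;
	}
	if (top + bottom > p_rect.height && top + bottom > 0.0f) {
		float s = p_rect.height / (top + bottom);
		top *= s;
		bottom *= s;
	}
}
```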
When using the ALSA driver, corruption would occur if `snd_pcm_writei`
was unable to consume the entire sound buffer. This would occur
frequently on the Raspberry Pi 3 which uses the `snd_bcm2835` audio
driver.
This bug was caused by incorrect pointer math on line 187, which advanced the
sample source pointer by `total * ad->channels` bytes
instead of `total * ad->channels` samples. In my opinion, the best fix
is to change `*src` to type `int16_t`, since that is the sample type in
use.
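A standalone sketch of why the declared type matters here (simplified, not the driver's actual code):
```
#include <cstdint>

// In C/C++ pointer arithmetic is scaled by the pointee size, so the declared
// type of 'src' decides whether "+= total * channels" advances by bytes or by
// samples.
void advance_source(const void *p_buffer, int total, int channels) {
	// Buggy: a byte-sized pointer advances by bytes, landing in the middle of
	// the data when snd_pcm_writei consumed only part of the buffer.
	const uint8_t *src_bytes = (const uint8_t *)p_buffer;
	src_bytes += total * channels;

	// Fixed: int16_t matches the 16-bit sample type in use, so the same
	// expression advances by samples, as intended.
	const int16_t *src = (const int16_t *)p_buffer;
	src += total * channels;

	(void)src_bytes;
	(void)src;
}
```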
Fixes #43927.
(cherry picked from commit 25b2f82ccf)
See #43689.
Also 'fixed' some spelling for behavior in publicly visible strings.
(Sorry en_GB, en_CA, en_AU, and more... Silicon Valley won the tech spelling
war.)
(cherry picked from commit a655de89e3)
Lights with bake mode set to "All" were behaving erratically because of a
faulty check in the renderer. This should be the correct way to check if
a geometry instance is using baked light.
To fix a previous issue, state.canvas_texscreen_used was reset to false at the start of each render_joined_item. This was causing a later shader that used SCREEN_TEXTURE to force recapturing the back buffer immediately prior to use, which we don't want.
This PR preserves the state across joined items, and also prevents joining of items that copy the back buffer as this may be problematic.
It turns out that the original issue that needed the line is now fixed, and the later issue is also fixed by removing it.
While adding more debug checks to the legacy renderer, I closed two types of vulnerabilities:
* TYPE_PRIMITIVE would previously read from uninitialized data if only specifying a single color
* Other legacy draw operations would fail in debug AFTER accessing out-of-bounds memory rather than before
Many calls to glBufferSubData are wrapped in a safe version which checks for out-of-bounds writes and exits the draw function if one is detected.
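A minimal sketch of such a checked wrapper, with assumed names:
```
#include <GLES2/gl2.h>

// Hypothetical wrapper name; checks that the write stays inside the currently
// allocated buffer before touching GL, so an out-of-bounds write is caught
// before it happens rather than after.
bool buffer_sub_data_safe(GLenum p_target, GLintptr p_offset, GLsizeiptr p_size,
		const void *p_data, GLsizeiptr p_buffer_size) {
	if (p_offset < 0 || p_size < 0 || p_offset + p_size > p_buffer_size) {
		return false; // caller exits the draw function when this fails
	}
	glBufferSubData(p_target, p_offset, p_size, p_data);
	return true;
}
```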
Large FVF allows batching of many custom shaders, but should not join items which have shaders that utilize BUILTINs which would change for each item, because these will not be sent individually, and all joined items would wrongly use the values from the first joined item.
Polys that have no texture assigned contain no UVs in the poly command. These were previously not blanked, leading to random values if read from a custom shader.
This PR just blanks them.
In some situations where polygons were scaled, existing software skinning was producing incorrect results.
The transform inverse needed to use an affine inverse rather than a cheaper inverse to account for this scaling.
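A standalone 2D sketch of the difference (not the engine's Transform code): a transpose-based inverse is only valid for an orthonormal basis, while the affine inverse stays correct under scaling.
```
// Standalone 2x3 transform sketch, not the engine's Transform2D code.
// Columns a = (ax, ay), b = (bx, by) form the basis; (ox, oy) is the origin.
struct Xform2D {
	float ax, ay, bx, by, ox, oy;

	// "Cheap" inverse: transposes the basis, which is only correct when the
	// basis is orthonormal (pure rotation, no scale).
	Xform2D inverse() const {
		Xform2D r = { ax, bx, ay, by, 0, 0 };
		r.ox = -(r.ax * ox + r.bx * oy);
		r.oy = -(r.ay * ox + r.by * oy);
		return r;
	}

	// Affine inverse: full inverse via the determinant, correct under scaling.
	Xform2D affine_inverse() const {
		float det = ax * by - bx * ay;
		float idet = 1.0f / det;
		Xform2D r = { by * idet, -ay * idet, -bx * idet, ax * idet, 0, 0 };
		r.ox = -(r.ax * ox + r.bx * oy);
		r.oy = -(r.ay * ox + r.by * oy);
		return r;
	}
};
```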
This adds support for custom shaders for polys, and properly handles modulate in the case of large FVF and modulate FVF.
It also fixes poly vertex colors not being sent to OpenGL.
Antialiased polys work by drawing a smoothed line around the poly after the main drawing. Batching draws polys as a series of triangles with no concept of 'edge', and when 2 polys are joined it becomes impractical to back calculate the edges from the triangles.
For this reason batching is disabled for antialiased polys in this PR.
As a result of the GLES specifications being vague about best practice for how buffers should be used dynamically, different GPUs / platforms appear to have different preferences.
Mac in particular seems to have a number of problems in this area, and none of the rendering team uses Macs. So far we have relied on guesswork to choose the best usage, but in an attempt to pin this down, this PR begins to introduce manual selection of options for users to test their configurations.
Lines are batched using the simplest fvf 'BatchVertex', however when used in an item with a custom shader material, it may attempt to translate to large_fvf without the required extra channels. To prevent this a special case in flushing is made to deal with lines.
In small batches using hardware transform, vertices would be drawn in incorrect positions due to the item transform being applied twice - once in the transform uniform, and once from the transform passed as a vertex attribute.
This PR alters the shader to ignore uniform transforms when using large FVF.
Due to my less than eagle-like view over these functions I had assumed they were passing in a single buffer input for the changes to make buffer uploading more efficient. They aren't, which is less than ideal.
So these particular changes should be reverted. When I have some more time I'll see whether the API for these calls can be changed, because as it is, the multiple glBufferSubData calls could be causing stalls on some hardware.
It can be enabled in the Project Settings
(`rendering/quality/filters/use_debanding`). It's disabled
by default as it has a small performance impact and can make
PNG screenshots much larger (due to how dithering works).
As a result, it should be enabled only when banding is noticeable enough.
Since debanding requires a HDR viewport to work, it's only supported
in the GLES3 backend.
This is part of effort to make more efficient use of the API for devices with poor drivers. This eliminates multiple calls to glBufferSubData per update.
Batching is mostly separated into a common template which can be used with multiple backends (GLES2 and GLES3 here). Only necessary specifics are in the backend files.
Batching is extended to cover more primitives.
Don't apply lighting to objects when they have a lightmap texture and
the light is set to BAKE_ALL. This prevents applying the same direct
light twice on the same object and makes setting up scenes with mixed
lighting much easier.
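A minimal sketch of the check, with assumed names (not the renderer's exact code):
```
// Hypothetical names; the idea is to skip the real-time pass for lights whose
// direct contribution is already baked into the object's lightmap.
struct Light {
	bool bake_all = false; // bake mode set to "All"
};
struct GeometryInstance {
	bool uses_baked_light = false; // has a lightmap texture
};

bool should_apply_realtime_light(const Light &p_light, const GeometryInstance &p_instance) {
	if (p_light.bake_all && p_instance.uses_baked_light) {
		return false; // direct light is already present in the baked lightmap
	}
	return true;
}
```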
Option in MeshInstance to enable software skinning, in order to test
against the current USE_SKELETON_SOFTWARE path which causes problems
with bad performance.
Co-authored-by: lawnjelly <lawnjelly@gmail.com>
Fixes: #28683, #28621, #28596 and maybe others
For iOS we enable the pvrtc feature by default for both GLES2/GLES3.
ETC1 for iOS doesn't make any sense, so it is disabled.
Fixed checks in export editor.
Fixed pvrtc ability detection in GLES2 driver.
Fixed pvrtc encoding procedure.
Add __NetBSD__ to `platform_config.h` so that it can find `alloca`
and use the proper `pthread_setname_np` format.
Rename RANDOM_MAX to avoid conflict with NetBSD stdlib.
Fixes #42145.
(cherry picked from commit 5f4d64f4f3)
- Fade reflection towards inner margin and clip it at screen edges instead of external margin.
- Round edges of the fade margin if both are being cut off to prevent sharp corners.
Previously, VS::INSTANCE_NONE was returned for lightmap data, which caused `visual_server_scene.cpp` to assert in `instance_set_use_lightmap()`.
Now `dummy_rasterizer.h` returns `VS::INSTANCE_LIGHTMAP_CAPTURE` for lightmap capture data, thus satisfying `visual_server_scene`.
When not using the TEXTURE_RECT path, flips have to be sent to the shader via another method, to ensure that normal maps are correctly adjusted for direction. This PR adds an extra vertex attribute, LIGHT_ANGLE.
For nvidia workarounds, where the shader still has access to the final transform and extra matrix, the LIGHT_ANGLE can be 0 (no adjustment), 180 degrees for a horizontal flip, and a negative value to indicate a vertical flip.
For batching path, the LIGHT_ANGLE can be used to directly specify the light angle for normal mapping, even when the final transform and extra matrix have been baked into vertex positions, so the same shader can be used for both.
Normal mapping previously took no account of rotation or flips in any path except the TEXTURE_RECT (uniform draw) method. This passed flips to the shader in uniforms.
In order to pass flips and rotations to the shader in batching and the nvidia workaround, a per-vertex attribute is required rather than a uniform. This introduces LIGHT_ANGLE, which encodes both the rotation of a quad (vertex) and the horizontal and vertical flip.
In order to optionally store light angles in batching, we switch to using a 'unit'-sized array which can be reused for different FVF types; a separate array for each FVF would be a waste of memory.
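A sketch of one possible encoding matching the description above (assumed details, not the exact engine code):
```
// Assumed encoding for illustration: the per-vertex angle carries the quad's
// rotation, a horizontal flip adds half a turn, and a vertical flip is
// signalled by negating the value. A real implementation must also avoid the
// ambiguous case of negating an angle of exactly zero.
const float PI = 3.14159265358979f;

float encode_light_angle(float p_rotation, bool p_flip_h, bool p_flip_v) {
	float angle = p_rotation;
	if (p_flip_h) {
		angle += PI; // 180 degrees marks a horizontal flip
	}
	if (p_flip_v) {
		angle = -angle; // a negative value marks a vertical flip
	}
	return angle;
}
```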
In rare circumstances an item would issue multiple transform commands before a (non-rect) draw command. In these circumstances the command synchronization would incorrectly start from the first transform instead of the current transform, which could result in some commands at the end of the batch not being drawn.
This had been shown in the wild occurring in debug collision polys. It was a benign error (sometimes visual elements would be lost), but did not cause any serious problems.
This PR fixes this synchronization error.
The compiler is usually in the best position to decide whether to inline functions. Great care must be taken with FORCE_INLINE because it can have unforeseen consequences with recursion, loops and bloat to the executable.
Here some FORCE_INLINEs are removed in order to allow the compiler to make the best choice, and to remove a compilation warning where it was unable to inline during a recursive function.
Fixes #41226
On platforms that don't report support for GL_REPEAT for non power of two textures, the FORCE_REPEAT conditional is used instead. However for rect batches, the conditional was being set AFTER binding the shader, which meant it wasn't being activated.
This PR simply shifts setting the conditional to before the shader bind.
1. Removed errors in mesh_surface_get_array as it's supported now
2. More accurate errors in mesh_surface_get_blend_shapes
(cherry picked from commit e19a3df98f)
For textures that were imported as wrapping, the legacy renderer relied on GL repeat state being set once during load, and didn't alter the GL wrapping state at runtime.
Batching was setting wrapping according to the CANVAS_RECT_TILE flag on rects, however this reset GL wrapping to clamp after use, which was conflicting with later drawcalls that relied on the default wrapping being preserved.
In this PR we only set the wrapping in GL if the texture has not been imported with wrapping. This duplicates the logic in the legacy renderer and solves the state bug.
Using the += operator in a shader is classified as an 'assign', and so is treated as a write rather than a read. This means that we need to prevent vertex baking on either a write or a read (i.e. on usage), rather than just on reads.
The old logic was incorrect: the first item with lights would prevent joining the next item even when that item had no lights. Now the check is deferred, so that items without lights check whether the previous item had lights, and if so they prevent a join.