1. Make handling of user tokens atomic:
Loads started through the external-facing API used to perform a two-step setup of the user token. Between those steps, the mutex was unlocked before the token's reference count had been increased. A non-user-initiated load could therefore destroy the load task when it unreferenced the token.
Those stages now happen atomically. On the one hand, the described race condition can no longer happen, so the load task's lifetime guarantee no longer has a gap; on the other hand, the ugly possibility of a load call returning `ERR_BUSY` because another thread happened to be between the two steps is gone.
The code has been refactored so that user token concerns stay outside the inner load start function, which remains agnostic to them, for a cleaner implementation.
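As a rough illustration of the single-lock approach, here is a minimal standalone C++ sketch. It is not the actual engine code; the `LoadTask`/`LoadToken` types and all names are made up. The point it shows is that the task registration and the user reference are taken under one continuous hold of the mutex:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct LoadTask {
    int refcount = 0; // Kept > 0 by every user token that references it.
};

struct LoadToken {
    std::string path;
};

static std::mutex load_mutex;
static std::map<std::string, LoadTask> tasks;

// Both steps (registering the load task and taking the user reference) happen
// under one continuous hold of the mutex, so no other thread can observe the
// task in an unreferenced state and destroy it in between.
std::shared_ptr<LoadToken> load_start_with_token(const std::string &p_path) {
    std::lock_guard<std::mutex> lock(load_mutex);
    LoadTask &task = tasks[p_path]; // Step 1: create or find the load task.
    task.refcount++;                // Step 2: reference it on behalf of the user token.
    return std::shared_ptr<LoadToken>(new LoadToken{p_path}, [](LoadToken *t) {
        // Token destruction: drop the reference and free the task if it was the last one.
        std::lock_guard<std::mutex> inner(load_mutex);
        if (--tasks[t->path].refcount == 0) {
            tasks.erase(t->path);
        }
        delete t;
    });
}
```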
2. Remove the ambiguity between load operations running on `WorkerThreadPool`:
The two cases are: a single-threaded load started directly by the user from within a pool task, and a load started by the system as part of a multi-threaded load.
Since making every piece of code deal with this distinction correctly would be very complex and error-prone, a different measure is applied instead: one of the cases is simply taken out of the dichotomy. We now ensure that every load happening on a pool thread has been initiated by the system.
This is achieved by re-issuing a single-threaded, user-started load initiated from a pool thread as a separate task.
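A minimal standalone sketch of that dispatch rule, using a made-up `Pool` type with a thread-local marker rather than the real `WorkerThreadPool` API: a user-facing load that detects it is running on a pool thread re-issues itself as its own task and waits on it, so the load body only ever runs as a system-initiated task.

```cpp
#include <functional>
#include <future>

struct Pool {
    // Set while the current thread is executing one of the pool's tasks.
    static thread_local bool on_pool_thread;

    // Run a job asynchronously (illustrative stand-in for a real worker pool).
    std::future<void> add_task(std::function<void()> job) {
        return std::async(std::launch::async, [job] {
            on_pool_thread = true;
            job();
        });
    }
};
thread_local bool Pool::on_pool_thread = false;

static Pool pool;

static void do_load(const char *path); // The actual, system-initiated load body.

// User-facing entry point: never runs the load in place on a pool thread.
void user_load(const char *path) {
    if (Pool::on_pool_thread) {
        // Re-issue as a separate task so every load running on a pool thread
        // is system-initiated, then wait for it like any other caller would.
        pool.add_task([path] { do_load(path); }).wait();
    } else {
        do_load(path);
    }
}

static void do_load(const char *path) {
    (void)path; // Placeholder for the real loading work.
}
```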
Benefits:
- Simpler code. The main load function is renamed so it's apparent that it's no longer just a thread entry point.
- The cache and thread modes of the original task are honored. A beautiful consequence of this is that, unlike before, re-issued loads can use the resource cache, which makes this mechanism much more performant.
- The newly added getter for the caller task id in `WorkerThreadPool` allows removing the custom tracking of that in `ResourceLoader`.
- The check for whether a cached resource must be replaced and the replacement itself now happen atomically. That fixes cases where deadlock prevention could lead to multiple instances of the same on-disk resource. As a side effect, it also makes the regular check for the replace load mode more robust.
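A simplified standalone sketch of the atomic check-and-replace in the last point (not the engine's actual cache or replace semantics; the map, types and names are illustrative): the existence check and the slot update are performed under a single hold of the cache mutex, so two loads of the same path can no longer both conclude the cache is empty and end up with two instances.

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct Resource {
    std::string path;
};

static std::mutex cache_mutex;
static std::map<std::string, std::weak_ptr<Resource>> cache;

// Check and replacement under one continuous hold of the cache mutex.
std::shared_ptr<Resource> register_in_cache(const std::shared_ptr<Resource> &loaded, bool replace) {
    std::lock_guard<std::mutex> lock(cache_mutex);
    if (auto existing = cache[loaded->path].lock()) {
        if (!replace) {
            return existing; // Reuse the cached instance.
        }
        // Replace mode: the newly loaded instance takes over the cache slot.
    }
    cache[loaded->path] = loaded;
    return loaded;
}
```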
Changes to reduce the latency between changing node selection in the editor and seeing the new node reflected in the Inspector tab:
- Use Vector instead of List for ThemeOwner::get_theme_type_dependencies and related functions
- Use Vector instead of List for ThemeContext::themes, set_themes(), and get_themes()
- Add ClassDB::get_inheritance_chain_nocheck to get all parent/ancestor classes at once, to avoid repeated ClassDB locking overhead (see the sketch below)
- Update the BIND_THEME_ITEM macros and ThemeDB::update_class_instance_items to use the provided StringNames for the call to ThemeItemSetter, instead of creating a new StringName in each call
These changes reduce the time taken by EditorInspector::update_tree by around 30-35%.
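As an illustration of the ClassDB helper mentioned above (a standalone model, not the engine code; the registry layout and names are made up), fetching the whole inheritance chain in one call means the registry lock is taken once instead of once per ancestor lookup:

```cpp
#include <map>
#include <mutex>
#include <string>
#include <vector>

static std::mutex registry_mutex;
static std::map<std::string, std::string> parent_of = {
    { "Button", "BaseButton" },
    { "BaseButton", "Control" },
    { "Control", "CanvasItem" },
    { "CanvasItem", "Node" },
    { "Node", "" },
};

// One lock acquisition for the whole chain, instead of one per parent lookup.
std::vector<std::string> get_inheritance_chain(const std::string &p_class) {
    std::lock_guard<std::mutex> lock(registry_mutex);
    std::vector<std::string> chain;
    for (std::string c = p_class; !c.empty(); c = parent_of.at(c)) {
        chain.push_back(c);
    }
    return chain;
}
```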