- Batch DMA transfers whenever possible and do them in one go
- vk: Always ensure that queued DMA transfers are visible to the GPU before they are needed by the host
Requires a little refactoring to allow proper communication of the command buffer state
- vk: Code cleanup; the simplified mechanism makes it unnecessary to pass large numbers of arguments to methods
- vk: Fixup - do not forcefully perform DMA transfers on sections in an invalidation zone! They may already have been speculated correctly
- Properly wait for the buffer transfer operation to finish before map/readback!
- Change vkFence to vkEvent, which works more like a GL fence and is what is needed here (a sketch follows after this group)
- Implement supporting methods and functions
- Do not destroy the fence by immediately waiting after copying to the DMA buffer
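
A minimal sketch of the vkFence-to-vkEvent idea above, not RPCS3's actual code; the `device`, `cmd` and copy-region handles are assumed to exist elsewhere. Unlike a VkFence, which only signals when a whole submission completes, a VkEvent can be set from inside the command buffer and polled from the host, much like a GL fence sync:

```cpp
#include <vulkan/vulkan.h>

VkEvent create_dma_event(VkDevice device)
{
    VkEventCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_EVENT_CREATE_INFO;

    VkEvent evt = VK_NULL_HANDLE;
    vkCreateEvent(device, &info, nullptr, &evt);
    return evt;
}

void record_dma_copy(VkCommandBuffer cmd, VkBuffer src, VkBuffer dst,
                     const VkBufferCopy& copy, VkEvent evt)
{
    vkCmdCopyBuffer(cmd, src, dst, 1, &copy);

    // Signal the event once the transfer stage has finished the copy;
    // this replaces waiting on a whole-submission VkFence.
    vkCmdSetEvent(cmd, evt, VK_PIPELINE_STAGE_TRANSFER_BIT);
}

void wait_for_dma(VkDevice device, VkEvent evt)
{
    // Spin (a real implementation would yield) until the GPU reaches vkCmdSetEvent,
    // i.e. until the DMA copy is actually finished, before map/readback.
    while (vkGetEventStatus(device, evt) != VK_EVENT_SET)
        ;
}
```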
* Implement both RawSPU and threaded SPU page fault recovery
* Guard page_fault_notification_entries access with a mutex
* Add missing lock in sys_ppu_thread_recover_page_fault/get_page_fault_context
* Fix EINVAL check in sys_ppu_thread_recover_page_fault; previously, when the event was not found, begin() was erased and CELL_OK was returned (sketched after this list)
* Fixed page fault recovery waiting logic:
- Do not rely on a single thread_ctrl notification (unsafe)
- Avoided a race where ::awake(ppu) could be called before ::sleep(ppu), turning the notification into a no-op (sketched after this list)
* Avoid inconsistencies with vm flags on page fault cause detection
* Fix sys_mmapper_enable_page_fault_notification EBUSY check
From RE, it is allowed to register the same queue twice (on a different area), but not to enable page fault notifications twice (sketched after this list)
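
A minimal sketch of the erase fix for sys_ppu_thread_recover_page_fault described above, with hypothetical entry and container names (the real structures live in the lv2 code): only the matching entry is erased, and an EINVAL-style failure is reported when nothing matches, instead of erasing begin() and returning CELL_OK.

```cpp
#include <algorithm>
#include <vector>

struct pf_entry { unsigned event_queue_id; unsigned thread_id; };

int recover_page_fault(std::vector<pf_entry>& entries, unsigned thread_id)
{
    const auto it = std::find_if(entries.begin(), entries.end(),
        [&](const pf_entry& e) { return e.thread_id == thread_id; });

    if (it == entries.end())
        return -1; // stands in for CELL_EINVAL: no pending fault for this thread

    entries.erase(it); // erase the matching entry, not begin()
    return 0;          // stands in for CELL_OK
}
```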
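
A generic sketch of the waiting-logic fix, using std::condition_variable in place of RPCS3's thread_ctrl (an assumption made purely for illustration): because the "recovered" state is stored before notifying, an ::awake() that happens before the faulting thread reaches ::sleep() is no longer lost.

```cpp
#include <condition_variable>
#include <mutex>

struct page_fault_waiter
{
    std::mutex mtx;
    std::condition_variable cv;
    bool recovered = false;

    void sleep_until_recovered()
    {
        std::unique_lock lock(mtx);
        // Re-checks the flag after every wakeup; a notification that
        // arrived before we started waiting is not lost.
        cv.wait(lock, [this] { return recovered; });
    }

    void awake()
    {
        {
            std::lock_guard lock(mtx);
            recovered = true; // state survives even if nobody is waiting yet
        }
        cv.notify_all();
    }
};
```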
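
One plausible reading of the EBUSY rule above, sketched with made-up types (the area address as the key is an assumption): the same queue may be registered again for a different area, but enabling notifications twice for the same area fails, and all access to the entries is guarded by a mutex.

```cpp
#include <mutex>
#include <vector>

struct notification_entry { unsigned queue_id; unsigned area_addr; };

struct notification_registry
{
    std::mutex mtx;
    std::vector<notification_entry> entries;

    int enable(unsigned queue_id, unsigned area_addr)
    {
        std::lock_guard lock(mtx);

        for (const auto& e : entries)
        {
            if (e.area_addr == area_addr)
                return -1; // stands in for CELL_EBUSY: already enabled for this area
        }

        // The same queue_id registered on another area is fine.
        entries.push_back({queue_id, area_addr});
        return 0; // CELL_OK
    }
};
```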
- Avoid blindly reusing blit dst sections, as they may contain garbage.
If a section was unlocked for a flush, just discard it; reusing it introduces potential data corruption.
Since the data needs to be re-uploaded anyway (for now), it's better to start afresh
- In case of format mismatch, reset the calculated dst block
- Add a bounds check to determine whether the data contained in an atlas is good enough for sampling from the cache.
If not enough data is provided, fall back to a full upload (a coverage sketch follows)
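
A rough illustration of the bounds-check idea, with hypothetical types and an arbitrary 90% threshold (the real heuristic may differ): total up the area covered by the gathered atlas pieces and fall back to a full upload when coverage is insufficient.

```cpp
#include <cstdint>
#include <vector>

struct copy_region { std::uint32_t width, height; };

bool atlas_covers_enough(const std::vector<copy_region>& pieces,
                         std::uint32_t dst_width, std::uint32_t dst_height,
                         double min_coverage = 0.9)
{
    const std::uint64_t required = std::uint64_t{dst_width} * dst_height;

    std::uint64_t covered = 0;
    for (const auto& r : pieces)
        covered += std::uint64_t{r.width} * r.height; // assumes non-overlapping pieces

    return covered >= static_cast<std::uint64_t>(required * min_coverage);
}
```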
- Blit operations do format conversion automatically, which is NOT what we want!
- Scale onto a temp buffer with a similar format before performing the data cast (illustrated below)
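
A CPU-side illustration of that ordering only (the real path uses GPU blits, and the formats and conversion math here are just examples): scale while staying in the source format, then perform the explicit data cast as a separate step.

```cpp
#include <cstdint>
#include <vector>

using texel16 = std::uint16_t; // e.g. R5G6B5
using texel32 = std::uint32_t; // e.g. RGBA8

// Step 1: nearest-neighbour scale; the source format is preserved, no conversion happens.
std::vector<texel16> scale_same_format(const std::vector<texel16>& src,
                                       int sw, int sh, int dw, int dh)
{
    std::vector<texel16> dst(static_cast<std::size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x)
            dst[y * dw + x] = src[(y * sh / dh) * sw + (x * sw / dw)];
    return dst;
}

// Step 2: explicit data cast R5G6B5 -> RGBA8, performed only after scaling.
std::vector<texel32> cast_to_rgba8(const std::vector<texel16>& src)
{
    std::vector<texel32> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
    {
        const texel16 p = src[i];
        const std::uint32_t r = ((p >> 11) & 0x1f) * 255 / 31;
        const std::uint32_t g = ((p >> 5) & 0x3f) * 255 / 63;
        const std::uint32_t b = (p & 0x1f) * 255 / 31;
        dst[i] = 0xff000000 | (b << 16) | (g << 8) | r;
    }
    return dst;
}
```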
- Use a 5-point tap with an X pattern across the target's memory space to reduce the chance of false positives (sketched below)
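
A hypothetical sketch of the 5-point X-pattern probe (the offsets and byte-sized taps are illustrative; the real surface_store check may sample wider words): the four corners and the centre of the surface's memory range are compared against previously recorded values.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

std::array<std::size_t, 5> x_pattern_offsets(std::size_t pitch,
                                             std::size_t height,
                                             std::size_t bpp)
{
    const std::size_t last_row  = (height - 1) * pitch;
    const std::size_t row_bytes = pitch - bpp;

    return {
        0,                                    // top-left
        row_bytes,                            // top-right
        (height / 2) * pitch + row_bytes / 2, // centre
        last_row,                             // bottom-left
        last_row + row_bytes                  // bottom-right
    };
}

bool probably_unchanged(const std::uint8_t* base, std::size_t pitch,
                        std::size_t height, std::size_t bpp,
                        const std::array<std::uint8_t, 5>& expected)
{
    const auto offsets = x_pattern_offsets(pitch, height, bpp);
    for (std::size_t i = 0; i < offsets.size(); ++i)
    {
        if (base[offsets[i]] != expected[i])
            return false; // a tap mismatched -> the memory was overwritten
    }
    return true;
}
```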
- TODO: Potential false positives identified; requires some minor restructuring of surface_store
TODO: From HW testing, it seems that sys_memory_get_page_attribute and sys_rsx_context_iomap check page size a little differently:
get_page_attribute() always goes by the area flags, while sys_rsx_context_iomap checks pages by their mapping granularity.
This means that if the area page size is 64k but the shared memory is mapped with SYS_MEMORY_GRANULARITY_1M,
it can be mapped for rsxio, yet the page attribute will still indicate a 64k page size.
rsxio memory is verified to need 1M pages.
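
A hypothetical illustration of the difference described in this TODO; the flag names, values and structs below are stand-ins, not the real lv2 definitions.

```cpp
#include <cstdint>

constexpr std::uint64_t PAGE_SIZE_64K  = 1 << 0; // stand-in area flag
constexpr std::uint64_t GRANULARITY_1M = 1 << 1; // stand-in mapping flag

struct memory_area { std::uint64_t flags; };
struct mapping     { std::uint64_t granularity_flags; };

// sys_memory_get_page_attribute-style check: only looks at the area's flags,
// so it reports 64k even when the actual mapping used 1M granularity.
std::uint32_t page_size_from_area(const memory_area& area)
{
    return (area.flags & PAGE_SIZE_64K) ? 0x10000 : 0x100000;
}

// sys_rsx_context_iomap-style check: looks at the granularity the pages were
// actually mapped with; rsxio requires 1M pages.
bool iomap_allowed(const mapping& m)
{
    return (m.granularity_flags & GRANULARITY_1M) != 0;
}
```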
- From RE, only protocols SYS_SYNC_FIFO and SYS_SYNC_PRIORITY are valid
- Use a conditional atomic store op in a few places
- Properly revert changes in sys_event_flag_set when the atomic op fails (see the sketch below)
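
A generic sketch of a conditional atomic store and the revert-on-failure pattern (RPCS3 has its own atomic helpers; std::atomic and the placeholder condition below are used purely for illustration).

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

// Commit `new_bits` only if the placeholder condition still holds at the
// moment the compare-exchange succeeds; report failure otherwise.
bool conditional_store(std::atomic<std::uint64_t>& value, std::uint64_t new_bits)
{
    std::uint64_t old = value.load();
    do
    {
        if (old & new_bits) // placeholder condition: refuse if any bit is already set
            return false;
    }
    while (!value.compare_exchange_weak(old, old | new_bits));

    return true;
}

void event_flag_set_sketch(std::atomic<std::uint64_t>& flags,
                           std::vector<int>& staged_waiters,
                           std::uint64_t bits)
{
    // ... waiters to be woken for `bits` would be staged into staged_waiters ...

    if (!conditional_store(flags, bits))
    {
        // The atomic op failed, so revert the staged changes instead of
        // waking threads whose condition was never actually committed.
        staged_waiters.clear();
    }
}
```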