In the Linux kernel, the following vulnerability has been resolved:
PCI/PM: Drain runtime-idle callbacks before driver removal
A race condition between the .runtime_idle() callback and the .remove() callback in the rtsx_pci PCI driver leads to a kernel crash due to an unhandled page fault [1].
The problem is that rtsx_pci_runtime_idle() is not expected to be running after pm_runtime_get_sync() has been called, but the latter doesn't really guarantee that. It only guarantees that the suspend and resume callbacks will not be running when it returns.
However, if a .runtime_idle() callback is already running when pm_runtime_get_sync() is called, the latter will notice that the runtime PM status of the device is RPM_ACTIVE and it will return right away without waiting for the former to complete. In fact, it cannot wait for .runtime_idle() to complete because pm_runtime_get_sync() may be called from within that callback (which arguably does not make much sense, but it is not strictly prohibited).
Thus, in general, whoever provides a .runtime_idle() callback needs to protect it from running in parallel with whatever code runs after pm_runtime_get_sync(). [Note that .runtime_idle() will not start after pm_runtime_get_sync() has returned, but it may continue running then if it started earlier.]
One way to address that race condition is to call pm_runtime_barrier() after pm_runtime_get_sync() (not before it, because a nonzero value of the runtime PM usage counter is necessary to prevent runtime PM callbacks from being invoked) to wait for the .runtime_idle() callback to complete should it be running at that point. A suitable place for doing that is pci_device_remove(), which calls pm_runtime_get_sync() before removing the driver, so it may as well call pm_runtime_barrier() subsequently. That prevents the race in question from occurring, not just in the rtsx_pci driver, but in any PCI driver providing a .runtime_idle() callback.
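A minimal sketch of that arrangement in pci_device_remove(), simplified and abridged rather than the verbatim upstream code:

```c
static void pci_device_remove(struct pci_dev *pci_dev)
{
	struct pci_driver *drv = pci_dev->driver;
	struct device *dev = &pci_dev->dev;

	if (drv->remove) {
		pm_runtime_get_sync(dev);  /* no new PM callbacks start now */
		/*
		 * A .runtime_idle() callback that started before the usage
		 * counter was incremented may still be running; drain it
		 * before letting the driver's .remove() callback run.
		 */
		pm_runtime_barrier(dev);
		drv->remove(pci_dev);
		pm_runtime_put_noidle(dev);
	}
	/* ... */
}
```

Because the barrier sits in the PCI core rather than in an individual driver, every PCI driver with a .runtime_idle() callback gets the same protection for free.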