In the Linux kernel, the following vulnerability has been resolved:
drop_monitor: replace spinlock by raw_spinlock
trace_drop_common() is called with preemption disabled, and it acquires a spin_lock. This is problematic for RT kernels because spin_locks are sleeping locks in this configuration, which causes the following splat:
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 449, name: rcuc/47
preempt_count: 1, expected: 0
RCU nest depth: 2, expected: 2
5 locks held by rcuc/47/449:
 #0: ff1100086ec30a60 ((softirq_ctrl.lock)){+.+.}-{2:2}, at: __local_bh_disable_ip+0x105/0x210
 #1: ffffffffb394a280 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0xbf/0x130
 #2: ffffffffb394a280 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip+0x11c/0x210
 #3: ffffffffb394a160 (rcu_callback){....}-{0:0}, at: rcu_do_batch+0x360/0xc70
 #4: ff1100086ee07520 (&data->lock){+.+.}-{2:2}, at: trace_drop_common.constprop.0+0xb5/0x290
irq event stamp: 139909
hardirqs last  enabled at (139908): [<ffffffffb1df2b33>] _raw_spin_unlock_irqrestore+0x63/0x80
hardirqs last disabled at (139909): [<ffffffffb19bd03d>] trace_drop_common.constprop.0+0x26d/0x290
softirqs last  enabled at (139892): [<ffffffffb07a1083>] __local_bh_enable_ip+0x103/0x170
softirqs last disabled at (139898): [<ffffffffb0909b33>] rcu_cpu_kthread+0x93/0x1f0
Preemption disabled at:
[<ffffffffb1de786b>] rt_mutex_slowunlock+0xab/0x2e0
CPU: 47 PID: 449 Comm: rcuc/47 Not tainted 6.9.0-rc2-rt1+ #7
Hardware name: Dell Inc. PowerEdge R650/0Y2G81, BIOS 1.6.5 04/15/2022
Call Trace:
 <TASK>
 dump_stack_lvl+0x8c/0xd0
 dump_stack+0x14/0x20
 __might_resched+0x21e/0x2f0
 rt_spin_lock+0x5e/0x130
 ? trace_drop_common.constprop.0+0xb5/0x290
 ? skb_queue_purge_reason.part.0+0x1bf/0x230
 trace_drop_common.constprop.0+0xb5/0x290
 ? preempt_count_sub+0x1c/0xd0
 ? _raw_spin_unlock_irqrestore+0x4a/0x80
 ? __pfx_trace_drop_common.constprop.0+0x10/0x10
 ? rt_mutex_slowunlock+0x26a/0x2e0
 ? skb_queue_purge_reason.part.0+0x1bf/0x230
 ? __pfx_rt_mutex_slowunlock+0x10/0x10
 ? skb_queue_purge_reason.part.0+0x1bf/0x230
 trace_kfree_skb_hit+0x15/0x20
 trace_kfree_skb+0xe9/0x150
 kfree_skb_reason+0x7b/0x110
 skb_queue_purge_reason.part.0+0x1bf/0x230
 ? __pfx_skb_queue_purge_reason.part.0+0x10/0x10
 ? mark_lock.part.0+0x8a/0x520
 ...
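For illustration only, here is a minimal sketch of the problematic pattern, not the actual drop_monitor code; the structure and function names below are hypothetical. On PREEMPT_RT, spin_lock_irqsave() maps to a sleeping rt_mutex-based lock (and does not disable interrupts), so acquiring it while preemption is disabled trips the might_sleep() check shown in the splat above.

	#include <linux/spinlock.h>

	/* Hypothetical per-CPU data, standing in for drop_monitor's state. */
	struct per_cpu_dm_stub {
		spinlock_t	lock;	/* sleeping lock when PREEMPT_RT is enabled */
		/* per-CPU drop statistics would live here */
	};

	/* Hypothetical tracepoint callback, invoked with preemption disabled. */
	static void drop_hit_stub(struct per_cpu_dm_stub *data)
	{
		unsigned long flags;

		/*
		 * On PREEMPT_RT this lock may sleep, which is invalid here
		 * because the caller already disabled preemption.
		 */
		spin_lock_irqsave(&data->lock, flags);
		/* ... record the dropped packet ... */
		spin_unlock_irqrestore(&data->lock, flags);
	}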
trace_drop_common() also disables interrupts, but this is a minor issue because we could easily replace it with a local_lock.
Replace the spin_lock with a raw_spin_lock to avoid sleeping in atomic context.
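As a sketch of that conversion, again using the hypothetical names from above rather than the real code in net/core/drop_monitor.c, the lock type and the lock/unlock calls switch to their raw_ variants, which remain true spinning locks even on PREEMPT_RT:

	#include <linux/spinlock.h>

	struct per_cpu_dm_stub {
		raw_spinlock_t	lock;	/* never converted to a sleeping lock */
		/* per-CPU drop statistics would live here */
	};

	static void drop_hit_stub(struct per_cpu_dm_stub *data)
	{
		unsigned long flags;

		/* raw_spin_lock_irqsave() disables interrupts and never sleeps */
		raw_spin_lock_irqsave(&data->lock, flags);
		/* ... record the dropped packet ... */
		raw_spin_unlock_irqrestore(&data->lock, flags);
	}

The usual constraint on raw spinlocks applies: they should only guard short, bounded critical sections, since they keep preemption disabled even on RT kernels.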