ipc/sem.c: avoid using spin_unlock_wait()
a) The ACQUIRE in spin_lock() applies to the read, not to the store, at
   least for powerpc.  This forces us to add an smp_mb() into the fast
   path (both points are sketched below).

b) The memory barrier provided by spin_unlock_wait() is right now arch
   dependent.
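
To make both points concrete, here is a sketch of the old pairing
(names as in ipc/sem.c; the per-semaphore loop is elided; this is an
illustration, not the in-tree code):

	CPU 1 (complex op):
		smp_store_mb(sma->complex_mode, true);
		spin_unlock_wait(&sem->lock);
		smp_rmb();		/* needed because of (b) */

	CPU 2 (simple op, fast path):
		spin_lock(&sem->lock);
		smp_mb();		/* forced by (a) */
		if (!smp_load_acquire(&sma->complex_mode))
			/* fast path: use sem->lock only */

CPU 2 needs the smp_mb() because the ACQUIRE in spin_lock() does not
order the store that takes the lock against the later read of
complex_mode; CPU 1 needs the smp_rmb() because spin_unlock_wait() is
only a control barrier on some architectures.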

Therefore: Use spin_lock()/spin_unlock() instead of spin_unlock_wait().

Advantage: faster single-op semop() calls; observed +8.9% on x86.  (The
other solution would be arch dependencies in ipc/sem.)

Disadvantage: slower complex-op semop() calls, if (and only if) there
are no sleeping operations.

The next patch adds hysteresis; this further reduces the probability
that the slow path is used.
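
As a sketch, the pairing after this change (same caveats as above; the
per-semaphore loop on CPU 1 is again elided):

	CPU 1 (complex op):
		sma->complex_mode = true;
		spin_lock(&sem->lock);	/* waits out any fast-path holder */
		spin_unlock(&sem->lock);

	CPU 2 (simple op, fast path):
		spin_lock(&sem->lock);
		if (!smp_load_acquire(&sma->complex_mode))
			/* fast path: use sem->lock only */

Either CPU 2 held sem->lock first, in which case CPU 1's spin_lock()
waits for it to finish, or CPU 1's unlock and CPU 2's lock form a
RELEASE/ACQUIRE pair that makes complex_mode == true visible to CPU 2.
No explicit barrier remains on either side; the price is one
lock/unlock per semaphore in the array, which is why complex ops become
slower when no operation sleeps.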

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: <[email protected]>
Cc: kernel test robot <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
manfred-colorfu authored and torvalds committed Feb 28, 2017
1 parent 0886551 commit 27d7be1
25 changes: 3 additions & 22 deletions ipc/sem.c
@@ -278,24 +278,13 @@ static void complexmode_enter(struct sem_array *sma)
 		return;
 	}
 
-	/* We need a full barrier after setting complex_mode:
-	 * The write to complex_mode must be visible
-	 * before we read the first sem->lock spinlock state.
-	 */
-	smp_store_mb(sma->complex_mode, true);
+	sma->complex_mode = true;
 
 	for (i = 0; i < sma->sem_nsems; i++) {
 		sem = sma->sem_base + i;
-		spin_unlock_wait(&sem->lock);
+		spin_lock(&sem->lock);
+		spin_unlock(&sem->lock);
 	}
-	/*
-	 * spin_unlock_wait() is not a memory barrier, it is only a
-	 * control barrier. The code must pair with spin_unlock(&sem->lock),
-	 * thus just the control barrier is insufficient.
-	 *
-	 * smp_rmb() is sufficient, as writes cannot pass the control barrier.
-	 */
-	smp_rmb();
 }
 
 /*
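
For readability, the loop as it stands after this hunk (reconstructed
from the context and '+' lines above; the declarations of i and sem
appear earlier in the function and are not shown in the hunk):

	sma->complex_mode = true;

	for (i = 0; i < sma->sem_nsems; i++) {
		sem = sma->sem_base + i;
		/* wait until any fast-path holder is gone, then release */
		spin_lock(&sem->lock);
		spin_unlock(&sem->lock);
	}

The empty lock/unlock pair both waits out a concurrent fast-path holder
and, through the lock's RELEASE/ACQUIRE semantics, publishes the store
to complex_mode to the next acquirer of sem->lock.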
@@ -361,14 +350,6 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		 */
 		spin_lock(&sem->lock);
 
-		/*
-		 * See 51d7d5205d33
-		 * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
-		 * A full barrier is required: the write of sem->lock
-		 * must be visible before the read is executed
-		 */
-		smp_mb();
-
 		if (!smp_load_acquire(&sma->complex_mode)) {
 			/* fast path successful! */
 			return sops->sem_num;
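And the surviving fast path after this hunk (again reconstructed from
the context lines; the rest of sem_lock(), including the slow-path
fallback, is truncated in this view):

		spin_lock(&sem->lock);

		if (!smp_load_acquire(&sma->complex_mode)) {
			/* fast path successful! */
			return sops->sem_num;
		}
		/* otherwise: release sem->lock, take the slow path (elided) */

The smp_load_acquire() now pairs directly with complexmode_enter()'s
lock/unlock handshake, so the standalone smp_mb() could be dropped.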
