Is the Release memory order in Arc::get_mut only necessary to prevent out-of-thin-air values? #61

xmh0511 opened this issue Nov 21, 2024 · 0 comments

xmh0511 commented Nov 21, 2024

The content that the question is about

https://marabos.nl/atomics/building-arc.html#optimizing-arc

    pub fn get_mut(arc: &mut Self) -> Option<&mut T> {
        // Acquire matches Weak::drop's Release decrement, to make sure any
        // upgraded pointers are visible in the next data_ref_count.load.
        if arc.data().alloc_ref_count.compare_exchange(
            1, usize::MAX, Acquire, Relaxed
        ).is_err() {
            return None;
        }
        let is_unique = arc.data().data_ref_count.load(Relaxed) == 1;
        // Release matches Acquire increment in `downgrade`, to make sure any
        // changes to the data_ref_count that come after `downgrade` don't
        // change the is_unique result above.
        arc.data().alloc_ref_count.store(1, Release);  // #1
        if !is_unique {
            return None;
        }
        // Acquire to match Arc::drop's Release decrement, to make sure nothing
        // else is accessing the data.
        fence(Acquire);
        unsafe { Some(&mut *arc.data().data.get()) }
    }

    pub fn downgrade(arc: &Self) -> Weak<T> {
        let mut n = arc.data().alloc_ref_count.load(Relaxed);
        loop {
            if n == usize::MAX {
                std::hint::spin_loop();
                n = arc.data().alloc_ref_count.load(Relaxed);
                continue;
            }
            assert!(n <= usize::MAX / 2);
            // Acquire synchronises with get_mut's release-store.
            if let Err(e) =
                arc.data()
                    .alloc_ref_count
                    .compare_exchange_weak(n, n + 1, Acquire, Relaxed)
            {
                n = e;
                continue;
            }
            return Weak { ptr: arc.ptr };
        }
    }

If we had used Relaxed for the store, it would have been possible for the preceding load to observe the result of a future Arc::drop for an Arc that can still be downgraded.
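
For reference, the guarantee established by the Release store at #1 and the Acquire compare-exchange in downgrade is the usual happens-before edge. Below is a minimal, self-contained sketch of that edge, independent of the Arc code above and using made-up DATA/READY variables:

    use std::sync::atomic::{AtomicBool, AtomicU32};
    use std::sync::atomic::Ordering::{Acquire, Relaxed, Release};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        thread::scope(|s| {
            s.spawn(|| {
                DATA.store(42, Relaxed);
                READY.store(true, Release); // release-store: publishes the write above
            });
            s.spawn(|| {
                // The acquire-load synchronizes with the release-store once it
                // reads true, so the write to DATA is visible below.
                while !READY.load(Acquire) {
                    std::hint::spin_loop();
                }
                assert_eq!(DATA.load(Relaxed), 42); // guaranteed by happens-before
            });
        });
    }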

The question

The Release memory order at #1 presumably prevents a data race in this example:

// thread 1:
let rf = get_mut(&mut my_arc).unwrap();
*rf = 1;

// thread 2:
let weak = downgrade(&another_arc);
drop(another_arc);
// upgrading `weak` to obtain another Arc here would be problematic
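
For concreteness, here is a runnable version of this scenario written against std's Arc, used purely as a stand-in for the book's implementation; my_arc and another_arc are clones of the same allocation:

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let mut my_arc = Arc::new(0i32);
        let another_arc = Arc::clone(&my_arc);

        let t = thread::spawn(move || {
            let weak = Arc::downgrade(&another_arc);
            drop(another_arc);
            // An Arc produced by this upgrade must never coexist with the
            // exclusive &mut handed out by a successful get_mut below.
            weak.upgrade().is_some()
        });

        // Succeeds only while no other Arc or upgradable Weak is alive.
        if let Some(rf) = Arc::get_mut(&mut my_arc) {
            *rf = 1;
        }

        let _upgrade_succeeded = t.join().unwrap();
    }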

Assuming the store at #1 uses the Relaxed memory order instead, this example can be simplified to the following equivalent (note that #1 through #4 below label lines of this snippet, not the code above):

// initial state:
alloc_ref_count = 0;
data_ref_count = 2;

// thread 1:
if data_ref_count.load(Relaxed) == 1 {  // #1
    alloc_ref_count.store(1, Relaxed);  // #2
}

// thread 2:
while alloc_ref_count.load(Relaxed) != 1 {}  // #3
data_ref_count.store(1, Relaxed);  // #4
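
A runnable transcription of this model follows; the spin at #3 is bounded here so the demo terminates, since on a real implementation the circular outcome should never materialize and #4 would otherwise never execute:

    use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};
    use std::thread;

    static ALLOC_REF_COUNT: AtomicUsize = AtomicUsize::new(0);
    static DATA_REF_COUNT: AtomicUsize = AtomicUsize::new(2);

    fn main() {
        thread::scope(|s| {
            // thread 1: models get_mut, with the Release store (labeled #2
            // in this model) weakened to Relaxed
            s.spawn(|| {
                if DATA_REF_COUNT.load(Relaxed) == 1 {      // #1
                    ALLOC_REF_COUNT.store(1, Relaxed);      // #2
                }
            });
            // thread 2: models downgrade's spin followed by Arc::drop
            s.spawn(|| {
                for _ in 0..1_000_000 {
                    if ALLOC_REF_COUNT.load(Relaxed) == 1 { // #3
                        DATA_REF_COUNT.store(1, Relaxed);   // #4
                        return;
                    }
                    std::hint::spin_loop();
                }
            });
        });
    }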

#4 is executed only if #3 reads the value stored by #2, and #2 is executed only if #1 reads the value stored by #4: a circular dependency chain. The C++ standard (https://eel.is/c++draft/atomics.order#8) explicitly recommends against such out-of-thin-air (OOTA) results:

Implementations should ensure that no “out-of-thin-air” values are computed that circularly depend on their own computation.

So, is the Release memory order in get_mut only necessary to prevent OOTA?
