Multiple readers, single writer lock in Boost

Date: 2010-11-17 10:29:45

Tags: c++ multithreading boost mutex

I am trying to implement the following pattern in a multithreaded scenario:

Get shared access to mutex
Read data structure
If necessary:
   Get exclusive access to mutex
   Update data structure
   Release exclusive lock
Release shared lock

Boost Threads has a shared_mutex class designed for the multiple-readers, single-writer model. There are several Stack Overflow questions about this class. However, I am not sure it fits the situation where any reader may become a writer. The documentation states:

  The UpgradeLockable concept is a refinement of the SharedLockable concept that allows for upgradable ownership as well as shared ownership and exclusive ownership. This is an extension to the multiple-reader / single-writer model provided by the SharedLockable concept: a single thread may have upgradable ownership at the same time as others have shared ownership.

From the word "single" I suspect that only one thread at a time may hold an upgradable lock; the other threads hold only shared locks, which cannot be upgraded to an exclusive lock.

Do you know whether boost::shared_lock is useful in this situation (where any reader may become a writer), or is there another way to achieve this?

3 Answers:

Answer 0 (score: 15)

Yes, you can do what you want, as explained in the accepted answer here. The call that upgrades to exclusive access will block until all readers have finished.

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::shared_mutex _access;
void reader()
{
  // get shared access
  boost::shared_lock<boost::shared_mutex> lock(_access);

  // now we have shared access
}

void writer()
{
  // get upgradable access
  boost::upgrade_lock<boost::shared_mutex> lock(_access);

  // get exclusive access
  boost::upgrade_to_unique_lock<boost::shared_mutex> uniqueLock(lock);
  // now we have exclusive access
}
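
Applied to the pattern in the question, a reader that might become a writer takes the upgrade lock up front, reads, and promotes only when an update is actually needed. Here is a minimal sketch along those lines, assuming a hypothetical map-based cache (the names cacheMutex, cache, and getOrCompute are illustrative, not from the original answer):

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>
#include <string>

boost::shared_mutex cacheMutex;
std::map<std::string, std::string> cache;  // hypothetical shared structure

std::string getOrCompute(const std::string& key)
{
  // Upgradable ownership coexists with shared readers, but only
  // one thread at a time may hold it.
  boost::upgrade_lock<boost::shared_mutex> lock(cacheMutex);

  std::map<std::string, std::string>::iterator it = cache.find(key);
  if (it != cache.end())
    return it->second;  // read-only path, no upgrade needed

  // Promote to exclusive access; this blocks until all shared
  // readers have released the mutex.
  boost::upgrade_to_unique_lock<boost::shared_mutex> uniqueLock(lock);
  cache[key] = "computed";  // update under exclusive access
  return cache[key];
}

The trade-off is that at most one thread may hold upgradable ownership, so threads that might write serialize against each other on the upgrade lock even when they end up only reading, which is exactly the limitation the next answer describes.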

Answer 1 (score: 5)

boost::shared_lock does not help in this situation (multiple readers, any of which can become a writer), because only one thread may own an upgradable lock. This is implied both by the documentation quoted in the question and by the code (thread\win32\shared_mutex.hpp): if a thread tries to acquire an upgrade lock while another thread holds one, it will wait for that thread.

I decided to use a regular lock for all readers and writers, which is acceptable in my case because the critical sections are short.
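
For reference, a minimal sketch of that fallback, assuming a single boost::mutex guarding the whole critical section (the names dataMutex and readAndMaybeUpdate are illustrative):

#include <boost/thread/mutex.hpp>

boost::mutex dataMutex;

void readAndMaybeUpdate()
{
  // One lock for readers and writers alike; acceptable when the
  // critical section is short.
  boost::mutex::scoped_lock lock(dataMutex);
  // ... read the data structure and update it in place if necessary ...
}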

Answer 2 (score: 4)

Do you know LightweightLock (LightweightLock_zip)? It does exactly what you want. I have been using it for a long time.

[Edit] Here is the source:


/////////////////////////////////////////////////////////////////////////////
//
//  Copyright (C) 1995-2002 Brad Wilson
//
//  This material is provided "as is", with absolutely no warranty
//  expressed or implied. Any use is at your own risk. Permission to
//  use or copy this software for any purpose is hereby granted without
//  fee, provided the above notices are retained on all copies.
//  Permission to modify the code and to distribute modified code is
//  granted, provided the above notices are retained, and a notice that
//  the code was modified is included with the above copyright notice.
//
/////////////////////////////////////////////////////////////////////////////
//
//  This lightweight lock class was adapted from samples and ideas that
//  were put across the ATL mailing list. It is a non-starving, kernel-
//  free lock that does not order writer requests. It is optimized for
//  use with resources that can take multiple simultaneous reads,
//  particularly when writing is only an occasional task.
//
//  Multiple readers may acquire the lock without any interference with
//  one another. As soon as a writer requests the lock, additional
//  readers will spin. When the pre-writer readers have all given up
//  control of the lock, the writer will obtain it. After the writer
//  has rescinded control, the additional readers will gain access
//  to the locked resource.
//
//  This class is very lightweight. It does not use any kernel objects.
//  It is designed for rapid access to resources without requiring
//  code to undergo process and ring changes. Because the "spin"
//  method for this lock is "Sleep(0)", it is a good idea to keep
//  the lock only long enough for short operations; otherwise, CPU
//  will be wasted spinning for the lock. You can change the spin
//  mechanism by #define'ing __LW_LOCK_SPIN before including this
//  header file.
//
//  VERY VERY IMPORTANT: If you have a lock open with read access and
//  attempt to get write access as well, you will deadlock! Always
//  rescind your read access before requesting write access (and,
//  of course, don't rely on any read information across this).
//
//  This lock works in a single process only. It cannot be used, as is,
//  for cross-process synchronization. To do that, you should convert
//  this lock to using a semaphore and mutex, or use shared memory to
//  avoid kernel objects.
//
//  POTENTIAL FUTURE UPGRADES:
//
//  You may consider writing a completely different "debug" version of
//  this class that sacrifices performance for safety, by catching
//  potential deadlock situations, potential "unlock from the wrong
//  thread" situations, etc. Also, of course, it's virtually mandatory
//  that you should consider testing on an SMP box.
//
///////////////////////////////////////////////////////////////////////////

#pragma once

#ifndef _INC_CRTDBG
#include <crtdbg.h>
#endif

#ifndef _WINDOWS_
#include <windows.h>
#endif

#ifndef __LW_LOCK_SPIN
#define __LW_LOCK_SPIN Sleep(0)
#endif


    class LightweightLock
    {
    //  Interface

    public:
        //  Constructor

        LightweightLock()
        {
            m_ReaderCount = 0;
            m_WriterCount = 0;
        }

        //  Destructor

        ~LightweightLock()
        {
            _ASSERTE( m_ReaderCount == 0 );
            _ASSERTE( m_WriterCount == 0 );
        }

        //  Reader lock acquisition and release

        void LockForReading()
        {
            while( 1 )
            {
                //  If there's a writer already, spin without unnecessarily
                //  interlocking the CPUs

                if( m_WriterCount != 0 )
                {
                    __LW_LOCK_SPIN;
                    continue;
                }

                //  Add to the readers list

                InterlockedIncrement((long*) &m_ReaderCount );

                //  Check for writers again (we may have been pre-empted). If
                //  there are no writers writing or waiting, then we're done.

                if( m_WriterCount == 0 )
                    break;

                //  Remove from the readers list, spin, try again

                InterlockedDecrement((long*) &m_ReaderCount );
                __LW_LOCK_SPIN;
            }
        }

        void UnlockForReading()
        {
            InterlockedDecrement((long*) &m_ReaderCount );
        }

        //  Writer lock acquisition and release

        void LockForWriting()
        {
            //  See if we can become the writer (expensive, because it inter-
            //  locks the CPUs, so writing should be an infrequent process)

            while( InterlockedExchange((long*) &m_WriterCount, 1 ) == 1 )
            {
                __LW_LOCK_SPIN;
            }

            //  Now we're the writer, but there may be outstanding readers.
            //  Spin until there aren't any more; new readers will wait now
            //  that we're the writer.

            while( m_ReaderCount != 0 )
            {
                __LW_LOCK_SPIN;
            }
        }

        void UnlockForWriting()
        {
            m_WriterCount = 0;
        }

        long GetReaderCount() { return m_ReaderCount; }
        long GetWriterCount() { return m_WriterCount; }

    //  Implementation

    private:
        long volatile m_ReaderCount;
        long volatile m_WriterCount;
    };
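
A short usage sketch (the shared variable and functions are hypothetical), illustrating the caveat from the header comment: a thread must release its read access before requesting write access, or it deadlocks against itself:

LightweightLock g_lock;
long g_value;  // hypothetical shared data

long ReadValue()
{
    g_lock.LockForReading();
    long value = g_value;
    g_lock.UnlockForReading();
    return value;
}

void WriteValue( long value )
{
    // Never call this while this thread still holds read access.
    g_lock.LockForWriting();
    g_value = value;
    g_lock.UnlockForWriting();
}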