Browsing by Author "Higham, Lisa"
Item Open Access: Ambiguous memory consistency models (1998). Verwaal, Nathaly Maria; Higham, Lisa.

Item Open Access: BOUNDS FOR MUTUAL EXCLUSION WITH ONLY PROCESSOR CONSISTENCY (1999-12-15). Higham, Lisa; Kawash, Jalal.
Most weak memory consistency models are incapable of supporting a solution to mutual exclusion using only read and write operations. Processor consistency, Goodman's version (PC-G), is an exception. Ahamad et al. [1] showed that Peterson's mutual exclusion algorithm is correct for PC-G, but Lamport's bakery algorithm is not. In this paper, we derive a lower bound on the number and type (single- or multi-writer) of variables that a mutual exclusion algorithm must use in order to be correct for PC-G. We show that any such solution for n processes must use at least one multi-writer and n single-writers. This lower bound is tight when n = 2, and is tight when n >= 2 for solutions that do not provide fairness. We show that Burns' algorithm is an unfair solution for mutual exclusion in PC-G that achieves our bound. However, five other known algorithms that use the same number and type of variables are incorrect for PC-G. A corollary of this investigation is that, in contrast to sequential consistency, multi-writers cannot be implemented from single-writers in PC-G.

Item Open Access: Capturing Register and Control Dependence in Memory Consistency Models with Applications to the Itanium Architecture (2006-07-11). Higham, Lisa; Jackson, LillAnne; Kawash, Jalal.
A complete framework for modelling memory consistency that includes register and control dependencies is presented. It allows us to determine whether or not a given computation could have arisen from a given program running on a given multiprocessor architecture. The framework is used to provide an exact description of the computations of (a subset of) the Itanium instruction set on an Itanium multiprocessor architecture.
We show that capturing register and control dependencies is crucial: a producer/consumer problem is solvable without using strong synchronization primitives on Itanium multiprocessors, but is impossible without exploiting these dependencies. Keywords: multiprocessor memory consistency, register and control dependency, Itanium, process coordination.

Item Open Access: Comparing dynamic software transactional memory and locking implementations of XML documents (2008). Kamaluddeen, Nuha; Higham, Lisa.

Item Open Access: Complete framework for memory consistency with applications to the Itanium architecture (2007). Jackson, LillAnne Elaine; Higham, Lisa.

Item Open Access: CRITICAL SECTIONS AND PRODUCER/CONSUMER QUEUES IN WEAK MEMORY SYSTEMS (1997-06-01). Higham, Lisa; Kawash, Jalal.
In machines with a weak memory consistency model, the ordering constraints on memory accesses are few. To program these machines properly, their architectures additionally provide certain powerful, explicit synchronization instructions. We show that although a solution to the critical section problem (also referred to as the mutual exclusion problem) is impossible without such powerful synchronization, certain versions of the producer/consumer problem can be solved even in the weakest systems without any such powerful instructions. These instructions are expensive, so avoiding them is desirable for better performance.

Item Open Access: DEFINING AND COMPARING MEMORY CONSISTENCY MODELS (1997-06-01). Higham, Lisa; Kawash, Jalal; Verwaal, Nathaly.
Because multiprocessors implement sophisticated memory structures to enhance performance, processes can have inconsistent views of memory, which may result in unexpected program outcomes. A memory consistency model is a set of guarantees that describes constraints on the outcome of sequences of interleaved and simultaneous operations in a multiprocessor.
In this paper, we present a unifying framework to describe, understand, and compare memory consistency models. The framework is used to redefine and compare several widely used memory consistency models.

Item Open Access: Dining philosophers and other process coordination problems: algorithms, tools and stabilization (2003). Wang, Lixiao; Higham, Lisa.

Item Open Access: Dynamic and Self-stabilizing Distributed Matching (2003-01-28). Chattopadhyay, Subhendu; Higham, Lisa; Seyffarth, Karen.
Self-stabilization is a unified model of fault tolerance. A self-stabilizing system can recover from an arbitrary transient fault without re-initialization. Self-stabilization is a particularly valuable attribute of distributed systems, since they are typically prone to various faults and dynamic changes. In several distributed applications, pairing of processors connected in a network can be viewed as a matching of the underlying graph of the network. A self-stabilizing matching algorithm can be used to build a fault-tolerant pairing of clients and servers connected in a network. The first contribution of this report is an efficient, dynamic, and self-stabilizing maximal matching algorithm for arbitrary anonymous networks. The algorithm implements a locally distinct label generation technique that can be used by other applications. The second contribution of this report is a dynamic and self-stabilizing maximum matching algorithm for arbitrary bipartite networks. This is the first distributed maximum matching algorithm for networks containing cycles.

Item Open Access: Dynamic and self-stabilizing distributed matching (2003). Chattopadhyay, Subhendu; Higham, Lisa.

Item Open Access: Efficient Shared Memory Algorithms for Bounding Space (2018-04-26). Aghazadeh, Zahra; Woelfel, Philipp; Higham, Lisa; Raynal, Michel; Yanushkevich, Svetlana; Mohassel, Payman; Li, Zongpeng.
Multi-core architectures are now the state of the art in computer systems.
To exploit the benefits of parallelism, multiple processors have to cooperate and communicate concurrently by executing operations on shared objects. Dealing with the inherent asynchrony of processes is challenging in shared memory algorithms. To overcome this difficulty and to achieve high efficiency, algorithm designers often assume unbounded space. This work focuses on designing time-efficient shared memory algorithms while avoiding unbounded space. One example is making shared objects writable: many standard primitives (for instance, compare-and-swap or fetch-and-add) do not provide a Write() operation that unconditionally changes the object's state to the one provided as a parameter. Adding Write() operations without sacrificing efficiency is challenging if space is limited. We provide a space-efficient solution for making synchronization primitives writable with optimal step complexity. A special case of making an object writable is augmenting non-resettable objects with Reset() operations. We show how our general transformation can be improved to achieve optimal-space implementations of long-lived test-and-set objects with time-efficient Reset() operations. Another example concerns the ABA problem: even though a process retrieves the same value twice in a row from a shared object, the value of the object may still have changed multiple times in between. We investigate the time and space complexity of detecting ABAs in shared memory algorithms for systems with bounded base objects. To that end, we propose a new primitive called an ABA-detecting register, and we give an efficient implementation of this type using an asymptotically optimal number of bounded registers. Finally, we deal with a more general unbounded-space problem: many applications employ the tagging technique, where shared objects get augmented with additional values, called tags.
Unbounded tags are used in those applications mainly because bounding them is too complicated and error-prone. We introduce a new primitive that provides an abstraction for avoiding unbounded tags, and we propose optimally time-efficient implementations of this primitive from bounded objects. In addition to straightforward applications that use tags directly, our implementations can often also be used for memory reclamation.

Item Open Access: Evaluating links in network information flows (2009). Ramanujam, Parthasarathy; Higham, Lisa; Li, Zongpeng.

Item Open Access: Exploiting Interference in Wireless Networks (2015-01-23). Mollanoori Shamsi, Mohsen; Higham, Lisa.
Traditionally, interference in wireless networks is considered a destructive phenomenon and treated as random noise. If multiple packets are transmitted simultaneously on the same channel, the assumption is that these packets collide and therefore all of the overlapping packets need to be retransmitted. As a consequence, to reduce interference, MAC protocols are typically designed to schedule a single transmission at a time on a channel. In contrast, modern decoding techniques such as successive interference cancellation (SIC) and physical-layer network coding (PNC) aim to extract information from overlapping signals by exploiting the additive nature of electromagnetic waves. The resulting structure at the physical layer transforms interference into a potential advantage. Higher-layer protocols, however, need to be redesigned to make optimal use of this structure. The goal of this thesis is to develop new higher-layer protocols, or adapt existing ones, to make optimal use of the advanced decoding techniques available at the physical layer, with special focus on the MAC layer. The main body of the thesis designs protocols over a SIC-capable physical layer, but one chapter assumes the physical layer is also capable of PNC.
We study the effect of SIC on several types of networks: networks with centralized scheduling toward particular goals (such as maximum-throughput scheduling and proportional-fair scheduling), random-access (ALOHA-like) networks, and networks with linear coding capability.

Item Open Access: Formal Models and Implementations of Distributed Shared Memory (2009). Cheng Hum Yuen, Steven William; Higham, Lisa; Kawash, Jalal.

Item Open Access: Impact of Instruction Re-Ordering on the Correctness of Shared-Memory Programs (2005-08-26). Higham, Lisa; Kawash, Jalal.
Sequential consistency is an intuitive consistency model that simplifies reasoning about concurrent multiprocessor programs. Most implementations of high-performance multiprocessors, however, use mechanisms that allow instructions to execute out of order, resulting in consistency models that are weaker than sequential consistency and further complicating the job of programmers. This paper investigates all possible combinations of re-ordering of read and write instructions and their effects on the correctness of programs that are designed for sequential consistency. It shows that, under certain combinations of re-orderings, any program that accesses shared memory through only reads and writes, and that is correct assuming sequential consistency, can be transformed to a new program that uses no explicit synchronization yet remains correct in spite of the instruction re-ordering. Under other combinations of re-ordering, such transformations do not exist, and even solutions to the mutual exclusion problem are impossible without resorting to explicit synchronization.

Item Open Access: Implementing Sequentially Consistent Programs on Processor Consistent Platforms (2004-02-17). Higham, Lisa; Kawash, Jalal.
Designers of distributed algorithms typically assume strong memory consistency guarantees, but system implementations provide weaker guarantees for better performance and scalability.
This motivates the study of how to implement programs designed for sequential consistency on platforms with weaker consistency models. Typically, such implementations are impossible using only read and write operations to shared variables. For example, SPARC's Total Store Order machines (and their even weaker Partial Store Order and Relaxed Memory Order machines), the PowerPC, the Alpha, Java, and most variants of processor consistency all require strong and expensive built-in hardware synchronization primitives to implement mutual exclusion. One variant of processor consistency, originally proposed by Goodman and called PC-G here, is an exception: it is known that PC-G provides just enough consistency to implement mutual exclusion using only reads and writes. This paper extends the study of Goodman's version of processor consistency from specific problems (such as mutual exclusion) to arbitrary ones. That is, we investigate the existence of compilers that convert programs using shared read/write variables with sequentially consistent memory semantics into programs using read/write variables with PC-G consistency semantics.
We first provide a simple program transformation, and prove that it compiles any 2-process program with only single-writer variables from sequential consistency to PC-G consistency. We next prove that no similar simple compiler exists for even a very restricted class of 2-process programs.
Even though our program transformation is not a general compiler for 3 or more processes, it does correctly transform some specific n-process programs from sequential consistency to PC-G consistency. In particular, for the special case of the (necessarily randomized) Test&Set algorithm of Tromp and Vitányi, our transformation extends to any number of processes. Thus, one notable outcome is an implementation of Test&Set on PC-G that uses only reads and writes of shared variables. This is the first expected wait-free implementation of Test&Set on any weak memory model, and it illustrates the use of randomization with a weak memory model.
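Several of the items above hinge on whether mutual exclusion is achievable from reads and writes alone under a given consistency model. As a concrete point of reference, here is Peterson's two-process algorithm (the algorithm Ahamad et al. showed correct for PC-G) sketched in Java. This is a textbook rendition, not code from any of the listed works; the volatile/atomic accesses restore the sequentially consistent orderings that the algorithm's correctness argument assumes, which a JVM's relaxed semantics would otherwise not guarantee.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class Peterson {
    // flag[i] == 1 means "process i wants the critical section";
    // AtomicIntegerArray gives volatile access to each element.
    private final AtomicIntegerArray flag = new AtomicIntegerArray(2);
    private volatile int victim; // the process that yields when both want in

    public void lock(int i) {
        int j = 1 - i;
        flag.set(i, 1);  // announce interest
        victim = i;      // defer to the other process
        while (flag.get(j) == 1 && victim == i) { /* spin */ }
    }

    public void unlock(int i) {
        flag.set(i, 0);
    }

    static int counter; // plain variable, protected only by the lock above

    public static void main(String[] args) throws InterruptedException {
        counter = 0;
        Peterson mutex = new Peterson();
        final int n = 50_000;
        Thread[] threads = new Thread[2];
        for (int id = 0; id < 2; id++) {
            final int i = id;
            threads[i] = new Thread(() -> {
                for (int k = 0; k < n; k++) {
                    mutex.lock(i);
                    counter++;   // critical section
                    mutex.unlock(i);
                }
            });
            threads[i].start();
        }
        threads[0].join();
        threads[1].join();
        System.out.println(counter); // 100000 when mutual exclusion holds
    }
}
```

The lower-bound result above is visible in this sketch: `victim` is a multi-writer variable and `flag[0]`, `flag[1]` are single-writers, matching the "one multi-writer plus n single-writers" pattern.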
Item Open Access: JAVA: MEMORY CONSISTENCY AND PROCESS COORDINATION (1998-04-01). Higham, Lisa; Kawash, Jalal.
In Java, some memory updates are necessarily visible to some threads but never to others. A definition of Java memory consistency must take this fact into consideration to capture the semantics of non-terminating systems, such as a Java operating system. This paper presents a programmer-centered formal definition of Java memory behavior that captures those semantics. Our definition is employed to prove that it is impossible to provide fundamental process coordination in Java, such as critical sections and producer/consumer coordination, without the use of the synchronized and volatile constructs. However, we show that a weaker form of synchronization suffices to solve some of these problems in Java.

Item Open Access: Limitation and capabilities of weak memory consistency systems (2000). Kawash, Jalal Y.; Higham, Lisa.

Item Open Access: MAINTAINING B-TREES ON AN EREW PRAM (1991-08-25). Higham, Lisa; Schenk, Eric.
Efficient and practical algorithms for maintaining general B-trees on an EREW PRAM are presented. Given a B-tree of order b with m distinct records, the search (respectively, insert and delete) problem for n input keys is solved on an n-processor EREW PRAM in O(log n + b log_b m) (respectively, O(b(log n + log_b m)) and O(b^2(log_b n + log_b m))) time.

Item Open Access: Meeting times of random walks on graphs (1993). Warpechowska-Gruca, Jolanta; Higham, Lisa.
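The JAVA item above proves that producer/consumer coordination is impossible in Java without the synchronized or volatile constructs. The following minimal single-producer/single-consumer handoff sketches why volatile suffices for this particular problem; it is an illustrative example, not code from the paper. The volatile write to `ready` happens-before the consumer's volatile read that observes it, which makes the plain write to `data` visible as well.

```java
public class Handoff {
    private int data;                       // plain write, published via `ready`
    private volatile boolean ready = false; // volatile: creates happens-before

    public void produce(int value) {
        data = value; // ordered before the volatile write below
        ready = true; // release: makes `data` visible to the consumer
    }

    public int consume() {
        while (!ready) { /* spin until the producer's write becomes visible */ }
        return data;  // guaranteed to be the produced value, per the JMM
    }

    public static void main(String[] args) throws InterruptedException {
        Handoff h = new Handoff();
        Thread producer = new Thread(() -> h.produce(42));
        producer.start();
        System.out.println(h.consume()); // prints 42
        producer.join();
    }
}
```

Removing volatile from `ready` would make the spin loop in `consume()` legally able to run forever, which is exactly the kind of impossibility the paper formalizes.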