Contention Management For Shared Resources In Multi-Core Processors


Soplex, Sphinx and Namd

The author presents a model with two memory domains, each containing two cores. Threads running on cores in the same domain compete for the same shared resources, which degrades their performance compared with a contention-free environment. Four applications, Sphinx, Gamess, Soplex and Namd, run simultaneously on an Intel Quad-Core Xeon system. The applications were run several times under three schedules, each schedule pairing them differently in the two memory domains, so that across the three pairings every application shared a domain with every other application:

a) Soplex paired with Sphinx in one domain, while Gamess and Namd ran in the other domain.
b) Sphinx and Gamess shared a domain, while Soplex and Namd shared the other.
c) Sphinx and Namd shared a domain, while Soplex and Gamess shared the other.
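
To see why exactly three schedules arise, the short Python sketch below enumerates the distinct ways of splitting the four applications across two memory domains of two cores each. Only the application names come from the text; the code itself is purely illustrative.

from itertools import combinations

apps = ["Sphinx", "Gamess", "Soplex", "Namd"]

# Each schedule splits the four applications into two memory domains
# of two cores each. Choosing the pair that shares one domain fixes
# the pair in the other; the set of frozensets removes the duplicate
# obtained by swapping the two domains.
schedules = set()
for pair in combinations(apps, 2):
    domain_a = frozenset(pair)
    domain_b = frozenset(a for a in apps if a not in domain_a)
    schedules.add(frozenset({domain_a, domain_b}))

for schedule in schedules:
    print([sorted(domain) for domain in schedule])
# Prints exactly three distinct schedules, matching pairings a), b) and c) above.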

The example shows the best and the worst performance of each application. Under the best schedule, the workload performed 20% better than under the worst schedule, which shows that applications should be assigned to memory domains according to their best possible schedule.

The author further describes a thread scheduler that mitigates resource contention on multicore processors and can be implemented in a modern operating system, using a scheduling method that is easy to apply online. The method assumes that each thread works only on its own data, because the threads belong to different applications. Threads can benefit from running in the same domain if they share data, since they can then access resources cooperatively; the author, however, focuses on managing resource contention (Blagodurov, Zhuravlev, Dashti, & Fedorova).
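
The text does not spell out the scheduling algorithm itself, but a common contention-management heuristic of this kind separates memory-intensive threads into different domains. The Python sketch below is a minimal illustration of that idea, assuming per-thread last-level-cache miss rates are available from hardware counters; the function name, the miss-rate input and the sample numbers are assumptions made for illustration, not the authors' implementation.

def pair_threads_by_intensity(miss_rates):
    """Spread memory-intensive threads across two-core memory domains.

    miss_rates: dict mapping thread name -> observed LLC miss rate.
    Returns a list of (thread, thread) pairs, one pair per domain.
    Illustrative heuristic only: sort threads by miss rate, then pair
    the most intensive remaining thread with the least intensive one.
    """
    ordered = sorted(miss_rates, key=miss_rates.get, reverse=True)
    pairs = []
    while len(ordered) >= 2:
        heavy = ordered.pop(0)    # most memory-intensive thread left
        light = ordered.pop(-1)   # least memory-intensive thread left
        pairs.append((heavy, light))
    return pairs

# Hypothetical miss rates (misses per 1000 instructions), for illustration only.
sample = {"soplex": 25.0, "sphinx": 18.0, "gamess": 1.2, "namd": 0.8}
print(pair_threads_by_intensity(sample))   # [('soplex', 'namd'), ('sphinx', 'gamess')]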

The Current and Future Trends

As the number of cores increases, the performance of running applications also improves. Chip multiprocessors run multithreaded and multiprogrammed workloads while avoiding a bottleneck at the off-chip interconnect. The sharing behavior of a workload may involve inter-thread communication or no communication at all, and the chip multiprocessor is designed to make the most of this sharing behavior. In a multiprocessor chip design, the last-level cache is shared across the cores, with the latency of access being a function of distance on the chip. Sharing avoids replicas in the last-level cache and allows on-chip cache space to be accessed from any core. Because of the large number of cores, the pressure on the on-chip cache increases, since a larger amount of data must be accessed. In the future, multi-core processors will run single-threaded, multithreaded and multiprogrammed workloads that differ in their sharing behavior and memory access patterns; multithreaded workloads in particular communicate frequently between threads (Chandra, Guo, Kim, & Solihin, 2005).
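
To make the cache-pressure point concrete, the small sketch below computes the fair share of a shared last-level cache per core as the core count grows; the 8 MB cache size is a hypothetical figure chosen only for illustration.

# Hypothetical shared last-level cache size, in megabytes.
LLC_MB = 8

# As more cores share the same last-level cache, the fair share available
# to each core shrinks, which is one source of the growing pressure on
# on-chip cache space described above.
for cores in (2, 4, 8, 16, 32):
    print(f"{cores:2d} cores -> {LLC_MB / cores:.2f} MB per core")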

The Estimated Best Schedule

To discover the best schedule for threads, the author gives the example of a system that consists of two pairs of cores, each pair sharing a cache. The approach considers all possible permutations on the given structure; the permutations differ from one another in how the threads are paired within each memory domain. The threads to be considered are 'X', 'Y', 'Z' & ...
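
The section breaks off here, but the idea of examining every permutation can be sketched as follows. This is a minimal sketch, assuming the combined degradation of each possible pair of co-scheduled threads has already been measured and is supplied by the caller; no measurements are taken from the text, and the function name is an assumption.

from itertools import combinations

def best_schedule(threads, degradation):
    """Pick the schedule with the lowest total performance degradation.

    threads: list of four thread names, e.g. the 'X', 'Y', 'Z', ... above.
    degradation: dict mapping a frozenset of two co-scheduled threads to
        their combined measured degradation (lower is better), supplied
        by the caller from prior measurements.
    """
    best, best_cost = None, float("inf")
    # Enumerate every split of the threads into two two-core domains;
    # each schedule appears twice (domains swapped), which does not
    # affect which one achieves the minimum cost.
    for pair in combinations(threads, 2):
        domain_a = frozenset(pair)
        domain_b = frozenset(t for t in threads if t not in domain_a)
        cost = degradation[domain_a] + degradation[domain_b]
        if cost < best_cost:
            best, best_cost = (domain_a, domain_b), cost
    return best, best_cost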