Publisher:

Boston : Morgan Kaufmann, 2011.

Call Number:

005.275 P116I 2011

Pages:

xix, 370 pages : illustrations ; 25 cm.

Subject:

Computer Science

Summary:
Author Peter Pacheco uses a tutorial approach to show students how to develop effective parallel programs with MPI, Pthreads, and OpenMP. The first undergraduate text to directly address compiling and running parallel programs on the new multi-core and cluster architectures, An Introduction to Parallel Programming explains how to design, debug, and evaluate the performance of distributed and shared-memory programs.
Publisher:

Boston, MA : Elsevier, 2017.

Call Number:

005.74 C518B 2017

Pages:

870 pages.

Subject:

Computer Science

Summary:
Big Mechanisms in Systems Biology: Big Data Mining, Network Modeling, and Genome-Wide Data Identification explains the big mechanisms of systems biology through system identification and big data mining methods, using models of biological systems. Systems biology is currently undergoing revolutionary changes in response to the integration of powerful technologies. For readers faced with a large volume of available literature, complicated mechanisms, limited prior knowledge, few courses on these topics, and dense causal and mechanistic language, this book is an ideal resource. It addresses system immunity, regulation, infection, aging, evolution, and carcinogenesis, which are complicated biological systems with inconsistent findings in existing resources. These inconsistencies may reflect the underlying biology's time-varying systems and signal transduction events, which are often context-dependent; this poses a significant problem for mechanistic modeling, since it is not clear which genes/proteins to include in models or experimental measurements. The book is a valuable resource for bioinformaticians and members of several areas of the biomedical field who are interested in an in-depth understanding of how to process and apply large amounts of biological data to improve research.
Publisher:

Hoboken, NJ : J. Wiley & Sons, 2014

Call Number:

005.43 S582O 2014

Pages:

xvii, 856 pages : illustrations ; 26 cm.

Subject:

Computer Science

Summary:
By staying current, remaining relevant, and adapting to emerging course needs, Operating System Concepts by Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne has defined the operating systems course through nine editions. This second edition of the Essentials version is based on the recent ninth edition of the original text. Operating System Concepts Essentials comprises a subset of chapters from the ninth edition, for professors who want a shorter text and do not cover all the topics of the full edition. The new second edition of Essentials will be available as an ebook at a very attractive price for students. The ebook will have live links for the bibliography, cross-references between sections and chapters where appropriate, and new chapter review questions. A two-color printed version is also available.
Publisher:

Cambridge, Massachusetts : The MIT Press, 2015

Call Number:

004.35 P964 2015

Pages:

xxv, 458 pages : illustrations ; 23 cm.

Subject:

Computer Science

Summary:
With the coming of the parallel computing era, computer scientists have turned their attention to designing programming models that are suited for high-performance parallel computing and supercomputing systems. Programming parallel systems is complicated by the fact that multiple processing units are simultaneously computing and moving data. This book offers an overview of some of the most prominent parallel programming models used in high-performance computing and supercomputing systems today.
Publisher:

Cambridge, Massachusetts : The MIT Press, 2014

Call Number:

005.275 G876U 2014

Pages:

xxiv, 308 pages ; 23 cm.

Subject:

Computer Science

Summary:
This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code. The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data.
Publisher:

Cambridge, Massachusetts : The MIT Press, 2014

Call Number:

005.711 G876U 2014

Pages:

xxii, 364 pages ; 23 cm.

Subject:

Computer Science

Summary:
This book offers a practical guide to the advanced features of the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. It covers new features added in MPI-3, the latest version of the MPI standard, and updates from MPI-2. Like its companion volume, Using MPI, the book takes an informal, example-driven, tutorial approach. The material in each chapter is organized according to the complexity of the programs used as examples, starting with the simplest example and moving to more complex ones. Using Advanced MPI covers major changes in MPI-3, including changes to remote memory access and one-sided communication that simplify semantics and enable better performance on modern hardware; new features such as nonblocking and neighborhood collectives for greater scalability on large systems; and minor updates to parallel I/O and dynamic processes. It also covers support for hybrid shared-memory/message-passing programming; MPI_Message, which aids in certain types of multithreaded programming; features that handle very large data; an interface that allows the programmer and the tool developer to access performance data; and a new binding of MPI to Fortran.