Data

Recent Submissions

  • Item
    Full-Spectrum Prediction of Peptides Tandem Mass Spectra using Deep Neural Network
    (2019-10-20) Liu, Kaiyuan; Tang, Haixu
    Datasets for 'Full-Spectrum Prediction of Peptides Tandem Mass Spectra using Deep Neural Network', part 2: prediction examples.
  • Item
    Full-Spectrum Prediction of Peptides Tandem Mass Spectra using Deep Neural Network
    (2019-09-14) Liu, Kaiyuan; Tang, Haixu
    Datasets for 'Full-Spectrum Prediction of Peptides Tandem Mass Spectra using Deep Neural Network'
  • Item
    Bundled VM artifact to accompany the paper 'SC-Haskell: Sequential Consistency in Languages That Minimize Mutable Shared Heap'
    (Sheridan Communications on behalf of ACM. Proceedings of the 22nd Annual ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17), 2017-02) Vollmer, Michael; Scott, Ryan G.; Musuvathi, Madanlal; Newton, Ryan R.
    A core, but often neglected, aspect of a programming language's design is its memory (consistency) model. Sequential consistency (SC) is the most intuitive memory model for programmers, as it guarantees sequential composition of instructions and provides a simple abstraction of shared memory as a single global store with atomic reads and writes. Unfortunately, SC is widely considered impractical due to its associated performance overheads. Perhaps contrary to popular opinion, this paper demonstrates that SC is achievable with acceptable performance overheads for mainstream languages that minimize mutable shared heap. In particular, we modify the Glasgow Haskell Compiler to insert fences on all writes to shared mutable memory accessed in non-functional parts of the program. For a benchmark suite containing 1,279 programs, SC adds a geometric-mean overhead of less than 0.4% on an x86 machine. The efficiency of SC arises primarily from the isolation the Haskell type system provides between purely functional and thread-local imperative computations on the one hand, and imperative computations on the global heap on the other. We show how new programming idioms can further reduce the SC overhead; these create a virtuous cycle of less overhead and even stronger semantic guarantees (static data-race freedom).
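
To make the distinction in the abstract concrete, here is a minimal Haskell sketch (illustrative only, not taken from the paper or its VM artifact) contrasting the two kinds of state it describes: writes to an IORef on the shared global heap, which the paper's modified GHC would compile with fences, and STRef state that the type system keeps thread-local, which needs none. All function names here are hypothetical.

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (replicateM_)
import Control.Monad.ST (runST)
import Data.IORef (atomicModifyIORef', newIORef, readIORef)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Shared mutable heap: an IORef visible to two threads. Each write here is
-- the kind of access SC-Haskell's modified GHC would fence.
sharedCounter :: IO Int
sharedCounter = do
  ref <- newIORef (0 :: Int)
  let bump = replicateM_ 1000 (atomicModifyIORef' ref (\n -> (n + 1, ())))
  _ <- forkIO bump          -- a second thread racing on the same IORef
  bump
  threadDelay 100000        -- crude synchronization, for the sketch only
  readIORef ref

-- Thread-local state: the ST type guarantees this STRef cannot escape or be
-- shared between threads, so no fences are needed; this isolation is what the
-- abstract credits for SC's low overhead.
localSum :: [Int] -> Int
localSum xs = runST $ do
  acc <- newSTRef (0 :: Int)
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = do
  n <- sharedCounter
  print (n, localSum [1 .. 100])

Note that atomicModifyIORef' is used in the shared case so the increments are not lost to a race; a plain writeIORef would still be a fenced write under the paper's scheme, but would not make the read-modify-write atomic.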