pipes-core
Deprecated in favor of pipes.

This library offers an abstraction similar in scope to iteratees/enumerators/enumeratees, but with different characteristics and naming conventions. This package is a fork of the original pipes package by Gabriel Gonzalez. See https://github.com/pcapriotti/pipes-core/wiki/pipes-core-vs-pipes for a comparison between the two packages.

Differences with traditional iteratees:

- Simpler semantics: there is only one data type (Pipe), two basic primitives (await and yield), and only one way to compose Pipes (>+>). In fact, (>+>) is just convenient syntax for the composition operator in Category. Most pipes can be implemented using just the Monad instance and composition.
- Different naming conventions: Enumeratees are called Pipes, Enumerators are Producers, and Iteratees are Consumers. Producers and Consumers are just type synonyms for Pipes with either the input or output end closed.
- Pipes form a Category: composition is associative, and there is an identity Pipe.
- "Vertical" concatenation works on every Pipe: (>>) concatenates Pipes. Since everything is a Pipe, you can use it to concatenate Producers, Consumers, and even intermediate Pipe stages.
- Vertical concatenation can be combined with composition to create elaborate combinators, without the need to execute pipes in "passes" or to resume partially executed pipes.

Check out Control.Pipe for a thorough introduction (in the spirit of the iterIO library), and Control.Pipe.Combinators for some basic combinators and Pipe examples.
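To make the naming concrete, here is a small sketch of a three-stage pipeline built from the primitives named above (await, yield, and (>+>)). The runner runPipe, the exact type signatures, and the use of lift via Pipe's monad-transformer instance are assumptions about Control.Pipe's API rather than verified code; consult the module documentation for the precise names.

  import Control.Monad (forever)
  import Control.Monad.Trans.Class (lift)
  import Control.Pipe

  -- Producer: a Pipe whose input end is closed.
  numbers :: Monad m => Pipe () Int m ()
  numbers = mapM_ yield [1 .. 10]

  -- Intermediate stage: await a value, transform it, yield it downstream.
  double :: Monad m => Pipe Int Int m r
  double = forever $ await >>= yield . (* 2)

  -- Consumer: a Pipe whose output end is closed (left polymorphic here).
  printer :: Pipe Int b IO r
  printer = forever $ await >>= lift . print

  -- Compose the three stages with (>+>) and run the whole pipeline.
  main :: IO ()
  main = runPipe (numbers >+> double >+> printer)

Because all three stages are ordinary Pipes, any of them could also be extended with "vertical" concatenation, e.g. numbers >> numbers to emit the list twice before the downstream stages see end of input.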
antlrc
ANTLR is an LL(*) parser generator that supports semantic predicates, syntax predicates and backtracking. antlrc provides a Haskell interface to the ANTLR C runtime.

ANTLR generates the lexer and/or parser C code, which can call Haskell code for things such as semantic predicates that look up entries in the symbol table, creating symbol table entries, type checking, and creating abstract syntax trees. The C source code for the lexer and/or parser is generated from the ANTLR grammar file, which by convention has a .g filename extension. The generated C files can be compiled as C or C++. The main entry point to the program can be written in C or C++, which calls the generated parser and lexer. The parser can make calls into Haskell to build the AST and symbol table, and to implement disambiguating semantic predicates if necessary (for context-sensitive languages).

The ANTLR parser generator is written in Java. It is necessary to use the same ANTLR parser generator version as the ANTLR C runtime version: antlrc is tested with ANTLR 3.2 and libantlr3c 3.2.

In addition to creating the GrammarLexer.c and GrammarParser.c files, the ANTLR parser generator creates a Grammar.tokens file, which contains a list of lexer token identifier numbers and any associated name that is specified in the tokens section of the Grammar.g file. The antlrcmkenums tool is run on the input Grammar.tokens file and generates a GrammarTokens.h C/C++ header file containing an enum with members for those tokens that have user-specified names. This enum is then processed by c2hs to create a Haskell enum for the token identifiers (see the sketch after the documentation links below).

Examples are provided on github: https://github.com/markwright/antlrc-examples

Documentation for the ANTLR C runtime library is at: http://www.antlr.org/wiki/display/ANTLR3/ANTLR3+Code+Generation+-+C

Documentation for the ANTLR parser generator is at: http://www.antlr.org/wiki/display/ANTLR3/ANTLR+v3+documentation
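To illustrate the last step of that pipeline, here is a hypothetical GrammarTokens.chs sketch showing how c2hs can turn the generated C enum into a Haskell enumeration. The header and enum names (GrammarTokens.h, GrammarTokens, GrammarToken) are placeholders, not names taken from antlrc or its examples; the actual names depend on your grammar and on what antlrcmkenums emits.

  -- GrammarTokens.chs (hypothetical): processed by c2hs, not by GHC directly.
  module GrammarTokens where

  #include "GrammarTokens.h"

  -- Map the C enum of token identifiers onto a Haskell enumeration,
  -- converting C_STYLE enumerator names into CamelCase constructors.
  {#enum GrammarTokens as GrammarToken {underscoreToCase}
      deriving (Eq, Show) #}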
ghcjs-websockets
Documentation online at http://mstksg.github.io/ghcjs-websockets/JavaScript-WebSockets.html

Deprecated in favor of ghcjs-base's native websockets.

ghcjs-websockets aims to provide a clean, idiomatic, efficient, low-level, out-of-your-way, bare-bones, concurrency-aware interface with minimal abstractions over the Javascript Websockets API http://www.w3.org/TR/websockets/, inspired by common Haskell idioms found in libraries like io-streams http://hackage.haskell.org/package/io-streams and the server-side websockets http://hackage.haskell.org/package/websockets library, targeting compilation to Javascript with ghcjs.

The interface abstracts websockets as simple IO/file handles, with additional access to the natively typed (text vs binary) nature of the Javascript Websockets API. There are also convenience functions to directly decode serialized data (serialized with binary http://hackage.haskell.org/package/binary) sent through channels.

The library is mostly intended to be a low-level FFI library, with the hope that other, more advanced libraries may build on the low-level FFI bindings in order to provide more advanced and powerful abstractions. Most design decisions were made with the intent of keeping things as simple as possible so that future libraries can abstract over it.

Most of the necessary functionality is hopefully in JavaScript.WebSockets; more of the low-level API is exposed in JavaScript.WebSockets.Internal if you need it for library construction. See the JavaScript.WebSockets module for detailed usage instructions and examples. Some examples:
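The following is a minimal sketch of the kind of usage the module documentation describes: an echo client that treats the connection like a simple handle, receiving text frames and sending them back. The function names used here (withUrl, receiveText, sendText from JavaScript.WebSockets) are assumptions quoted from memory and should be checked against the linked documentation.

  {-# LANGUAGE OverloadedStrings #-}

  import Control.Monad (forever)
  import Data.Text (unpack)
  import JavaScript.WebSockets

  -- Open a connection, then echo every incoming text frame back to
  -- the server, logging it to the console along the way.
  main :: IO ()
  main = withUrl "ws://my-server.com" $ \conn ->
      forever $ do
          t <- receiveText conn
          putStrLn (unpack t)
          sendText conn t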
linux-perf
This library is for parsing, representing in Haskell, and pretty printing the data file output of the Linux perf command. The perf command provides performance profiling information for applications running under the Linux operating system. This information includes hardware performance counters and kernel tracepoints.

Modern CPUs can provide information about the runtime behaviour of software through so-called hardware performance counters http://en.wikipedia.org/wiki/Hardware_performance_counter. Recent versions of the Linux kernel (since 2.6.31) provide a generic interface to low-level events for running processes. This includes access to hardware counters but also a wide array of software events such as page faults, scheduling activity and system calls. A userspace tool called perf is built on top of the kernel interface, which provides a convenient way to record and view events for running processes.

The perf tool has many sub-commands which do a variety of things, but in general it has two main purposes: recording events and displaying events.

The perf record command records information about performance events in a file called (by default) perf.data. It is a binary file format which is basically a memory dump of the data structures used to record event information. The file has two main parts:

- A header, which describes the layout of information in the file (section sizes, etcetera) and common information about events in the second part of the file (an encoding of event types and their names).
- The payload of the file, which is a sequence of event records.

Each event record has a header which says what general type of event it is, plus information about the size of its body. There are nine types of event:

- PERF_RECORD_MMAP: memory map event.
- PERF_RECORD_LOST: an unknown event.
- PERF_RECORD_COMM: maps a command name string to a process and thread ID.
- PERF_RECORD_EXIT: process exit.
- PERF_RECORD_THROTTLE:
- PERF_RECORD_UNTHROTTLE:
- PERF_RECORD_FORK: process creation.
- PERF_RECORD_READ:
- PERF_RECORD_SAMPLE: a sample of an actual hardware counter or a software event.

The PERF_RECORD_SAMPLE events (samples) are the most interesting ones in terms of program profiling. The other events seem to be mostly useful for keeping track of process technicalities. Samples are timestamped with an unsigned 64 bit word, which records elapsed nanoseconds since some point in time (system running time, based on the kernel scheduler clock). Samples themselves have a type, which is defined in the file header and linked to the sample by an integer identifier.

Below is an example program which reads a perf.data file and prints out the number of events that it contains.
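The original description promises such an example; the following is a minimal sketch of what it might look like. The reader function and record names used here (readPerfData in Profiling.Linux.Perf, perfData_events on the PerfData record) are assumptions based on this description and have not been verified against the package; consult the Profiling.Linux.Perf haddocks for the actual API.

  module Main where

  import System.Environment (getArgs)
  import Profiling.Linux.Perf (readPerfData)          -- assumed reader function
  import Profiling.Linux.Perf.Types (PerfData (..))   -- assumed types module

  -- Read the perf.data file named on the command line and print
  -- how many event records its payload contains.
  main :: IO ()
  main = do
    args <- getArgs
    case args of
      [file] -> do
        perfData <- readPerfData file
        print (length (perfData_events perfData))     -- assumed field name
      _ -> putStrLn "Usage: count-events <perf.data file>"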