Re: BASIN meeting today

From: bchar <bchar@cs.drexel.edu>
Date: Thu Sep 22 2005 - 14:46:56 EDT

1. Multi-threaded programming in python:
http://ldp.paradoxical.co.uk/LDP/LGNET/107/pai.html
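
A minimal sketch of what Python's standard threading module gives you:
two threads each sum half of a list, and the main thread joins them and
combines the partial results. (Caveat: CPython's global interpreter lock
limits the speedup for CPU-bound work like this; threads pay off mostly
for I/O-bound tasks.)

```python
import threading

def partial_sum(values, results, index):
    # Each worker computes the sum of its own slice and stores it
    # in a shared results list at its own index (no lock needed,
    # since the slots are disjoint).
    results[index] = sum(values)

data = list(range(100))
results = [0, 0]
threads = [
    threading.Thread(target=partial_sum, args=(data[:50], results, 0)),
    threading.Thread(target=partial_sum, args=(data[50:], results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(results)
print(total)  # 4950
```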

2. Parallel lisp and "futures":
http://www.cs.indiana.edu/~tanaka/GEB/fmvParLisp/
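
The "future" idea (kick off a computation, keep a handle, and claim the
value later) can be sketched in Python; concurrent.futures, a later
standard-library addition, makes the pattern direct:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Stand-in for an expensive computation.
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns immediately with a Future; the caller keeps
    # going and claims each value later with .result(), which blocks
    # only if that particular computation hasn't finished yet.
    futures = [pool.submit(slow_square, n) for n in range(5)]
    squares = [f.result() for f in futures]

print(squares)  # [0, 1, 4, 9, 16]
```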

3. Parallel matlab: http://supertech.lcs.mit.edu/~cly/survey.html See
in particular the section on "backend".

4. Thread support in mpich2:
http://www-unix.mcs.anl.gov/mpi/mpich2/downloads/mpich2-doc-install.pdf.
From what I gather from a brief scan, with MPI-2 the way to get
threads is to use an independent package such as POSIX threads. The
state of the implementation seems to support only simple things.

 From http://www.csl.mtu.edu/cs6091/www/NewUPCproj.htm: "In a
thread-compliant implementation, an MPI process is a process that may be
multi-threaded. Each thread can issue MPI calls; however, threads are
not separately addressable: a rank in a send or receive call identifies
a process, not a thread. A message sent to a process can be received by
any thread in this process." (MPI-2 spec) The MPICH2 implementation has
new features, such as limited thread safety (single, funneled,
serialized, multiple) and a perverse version of one-sided communication.
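
The "funneled" thread level can be sketched in plain Python: only one
designated thread is allowed to make communication calls, and the other
threads hand it work through a queue. Here fake_mpi_send is just a
stand-in to keep the example self-contained; a real program would make
an actual MPI call (e.g. via an MPI binding) at that point.

```python
import queue
import threading

def fake_mpi_send(dest, payload, log):
    # Stand-in for an MPI send; records what would have been sent.
    log.append((dest, payload))

def comm_thread(q, log):
    # Under MPI_THREAD_FUNNELED, only one thread may make MPI calls.
    # This thread drains the queue and performs all communication.
    while True:
        item = q.get()
        if item is None:  # shutdown sentinel
            break
        fake_mpi_send(*item, log)

q = queue.Queue()
log = []
t = threading.Thread(target=comm_thread, args=(q, log))
t.start()

# Worker threads (here just the main thread) enqueue send requests
# instead of calling the communication layer directly.
for rank in range(3):
    q.put((rank, f"hello {rank}"))
q.put(None)
t.join()

print(log)  # [(0, 'hello 0'), (1, 'hello 1'), (2, 'hello 2')]
```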

5. Linda is an approach for adding parallelism to a language (C,
Fortran, etc.): it lets one write programs that spawn other
computations and go on to do further work before coming back to
retrieve the answer. About 15 years ago I wrote a version of Maple
that had Linda operations in it; at the Maple command line one could
spawn Maple computations on other processes/processors and continue
using the Maple process connected to the user. Results are put into
"tuple space", and there are functions to do tuple-space queries that
return false if the tuple is not there yet. The notion of tuple space
has persisted over the years, making it into JavaSpaces and Jini, and
into distributed Ruby
(http://www.devsource.com/article2/0,1759,1778698,00.asp). See
http://www.lindaspaces.com/downloads/lindatoday.pdf for blatant Linda
propaganda.
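
A toy tuple space in Python makes the operations concrete: out()
deposits a tuple, and the non-blocking query predicates return False
when no matching tuple has arrived yet. The method names (out/rdp/inp)
follow the classic Linda naming; the class itself is only an
illustration, not any real Linda implementation.

```python
import threading

class TupleSpace:
    # Toy tuple space: out() deposits a tuple; rdp() reads a matching
    # tuple without removing it; inp() removes one. Both predicates
    # return False when nothing matches, like Linda's non-blocking ops.
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def _match(self, pattern, tup):
        # None in the pattern acts as a wildcard field.
        if len(pattern) != len(tup):
            return False
        return all(p is None or p == v for p, v in zip(pattern, tup))

    def rdp(self, pattern):
        with self._lock:
            for tup in self._tuples:
                if self._match(pattern, tup):
                    return tup
        return False

    def inp(self, pattern):
        with self._lock:
            for i, tup in enumerate(self._tuples):
                if self._match(pattern, tup):
                    return self._tuples.pop(i)
        return False

ts = TupleSpace()
print(ts.rdp(("result", None)))  # False: answer not there yet
ts.out(("result", 42))
print(ts.rdp(("result", None)))  # ('result', 42)
print(ts.inp(("result", None)))  # ('result', 42), and removes it
print(ts.rdp(("result", None)))  # False again
```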

Doing things this way is sort of antagonistic to MPI/message passing
style of parallel programming, but one could assign different roles to
the two kinds of computations. I thought Linda was great for spawning
subcomputations and writing master programs to queue and adaptively
manage many different subcomputations at once, but I didn't really care
if MPI, Linda, or something else was used to write the underlying
parallel math code. Others have thought about combining MPI and
tuple spaces: http://www.ncni.net/fellowships/NCSU_byrd.pdf, but I
don't see any nice, handy implementation out there that we could snap
up and immediately install in BASIN.
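
That master/worker division of labor can be sketched with a plain work
queue: the master enqueues subcomputations, a pool of worker threads
drains the queue, and the master collects answers as they come back.
The squaring "subcomputation" here is just a placeholder for real
parallel math code.

```python
import queue
import threading

def worker(tasks, results):
    # Each worker repeatedly pulls a subcomputation off the queue,
    # runs it, and posts the answer; the master just keeps the queue
    # fed, so idle workers adaptively pick up whatever is next.
    while True:
        item = tasks.get()
        if item is None:  # one shutdown sentinel per worker
            break
        job_id, n = item
        results.put((job_id, n * n))  # placeholder computation

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(3)]
for w in workers:
    w.start()

# Master: queue up six subcomputations, then shut the workers down.
for job_id in range(6):
    tasks.put((job_id, job_id + 1))
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()

answers = {}
while not results.empty():
    job_id, value = results.get()
    answers[job_id] = value
print(sorted(answers.items()))
# [(0, 1), (1, 4), (2, 9), (3, 16), (4, 25), (5, 36)]
```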