
Networked Institutes

If our future is truly to be found in networked and distributed scientific institutes, we must start to solve some basic problems. A place of research must conform to the interactions between researchers: we cannot force them into unnatural modes of working simply because we wish to use a technology that does not support them appropriately. It is the job of the institute, and in particular of its research and system management, to ensure that it is positioned to provide needed services quickly to its user base, which, in this case, is a group of researchers.

To ensure that the institute can truly be an institute, and respond to the needs of its members, it must be able to provide flexible solutions to difficult problems. This is true even for an old-fashioned, non-distributed institute. In these days of rapid change, an inflexible science facility quickly finds itself bereft of leading-edge research: younger scientists go elsewhere to work, and the remaining scientists find themselves without the tools needed to compete in the modern world. Flexibility is what allows a research centre to attract young researchers and provide them with an appropriate work environment.

If we want this flexibility in any new research facility, we must be willing to provide communication and computation facilities that can adapt to the work habits of its members. If they wish to use certain specific computers, in certain specific ways, we must be able to provide the software for them to do so. In particular, when scientific collaboration, which is obviously the key reason for creating these facilities, becomes our most important function, we must be able to provide tools that work for collaborating researchers using different computational platforms.

There are two primary ways to provide links between researchers using different hardware and software: one can port new, similar software to each of the different machines, or one can provide access to central machines running that software. The first solution is often restrictive, because multiplatform software development is difficult. The second goes against the entire concept of truly distributed research: it effectively forces users onto one kind of computer, slowing down their research and pulling them away from a platform they are happy with.

Obviously, multiplatform computing will remain a problem in the future, but one solution is to use executable content as much as possible and provide a common interface to similar software on many platforms. It is even possible to keep central computing facilities and use, say, Java to provide a transparent link to them, hiding the differences so that users feel comfortable; a minimal sketch of such a client appears below. With executable content, we can write a program once and then make it available to every machine capable of running third-generation web software. Finally, we avoid the awful situation in which one particular type of computer dominates at an institute and users of other machines must often be given up on. This is obviously a central point for any facility that is truly trying to position itself for the twenty-first century.
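
As an illustration only, not something drawn from the original text, the following is a minimal Java sketch of such a client. The host name institute.example.org, the port 9000, and the line-based RUN command are all hypothetical assumptions; the point is simply that the same compiled bytecode runs unchanged on any researcher's machine with a Java runtime, while the central facility hides the platform differences.

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.io.PrintWriter;
  import java.net.Socket;

  // Minimal, hypothetical client for a central computing facility.
  // The host, port, and protocol are illustrative assumptions, not
  // part of any real institute's interface.
  public class InstituteClient {
      public static void main(String[] args) throws Exception {
          String host = args.length > 0 ? args[0] : "institute.example.org";
          int port = args.length > 1 ? Integer.parseInt(args[1]) : 9000;

          try (Socket socket = new Socket(host, port);
               PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
               BufferedReader in = new BufferedReader(
                       new InputStreamReader(socket.getInputStream()))) {

              // Ask the (hypothetical) facility to run a job and echo
              // whatever it sends back, line by line.
              out.println("RUN sample-simulation");
              String line;
              while ((line = in.readLine()) != null) {
                  System.out.println(line);
              }
          }
      }
  }

Compiled once, the same class file can be delivered as executable content to users of any operating system, which is precisely the write-once property the paragraph above relies on.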

Stephen Braham
Mon Nov 27 16:48:20 AST 1995