Computer Science

Nicolas Roard

Research Interests

  • Distributed Visualisation
  • Agent Systems
  • Autonomic Computing

Teaching Duties

  • CS-228 Operating Systems: Lab Classes
  • CS-171 Introduction to Computing II: Lab Classes
  • CS-161 Introduction to Computing: Lab Classes

Other Interests

  • Check my home page

Contact Information

(+44) (0)1792 205678 ext. 4566
nicolas at roard . com
Nicolas Roard, Department of Computer Science, University of Wales Swansea, Singleton Park, Swansea, SA2 8PP, UK.

Current research

I am working on distributed visualization systems and architectures -- that is, how to take advantage of multiple computers (clusters, etc.) to improve visualization systems, for example when dealing with large datasets.

Specifically, I am experimenting with a reflexive architecture for visualization that I wrote, using volumetric datasets. I am a strong advocate of dynamic systems and reflexive features in general, and believe that cluster applications have a lot to gain from this kind of approach. Visualization provides a very interesting test case for a distributed system, as it imposes harder requirements -- low latency, real-time behaviour, interactivity. Volumetric datasets are by nature large and usually slow to render, and as such are good candidates for distribution.

Research documents

Broadcast Visualization (MIP rendering of CThead)

Here are some old videos showing the visualisation system in action. The system renders the CT head volumetric dataset, displaying the results in a Java applet and on a PDA. The PDA side is implemented in Squeak.
  • The first video shows how a PDA can control the viewpoint: PDA as controller
  • The second video shows the PDA used also as a visualisation client: Visualisation on the PDA
  • The third video shows an example of an added visualisation strategy -- here, progressive rendering -- on a visualisation pipeline: Progressive Rendering

The image on the left shows my current system doing a MIP rendering of the CThead dataset as a 512x512 image. The system is mostly written in Java (here even the renderer is, although I usually use C++ renderers), using a lightweight UDP communication system. The image is computed on a cluster of machines, then sent through a UDP tunnel to a local machine, which then broadcasts it on the local network. The Nokia tablet is connected wirelessly through the laptop. The camera position is computed on one of the local machines (following a reproducible scenario) and sent to the cluster.
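The core of a MIP (maximum intensity projection) renderer is simple: each output pixel keeps the brightest voxel met along its ray through the volume. A minimal Java sketch of an axis-aligned projection along z is given below; the class and method names are illustrative only, not the actual system's API, and a real renderer would cast rays for arbitrary camera positions rather than project along an axis.

```java
public class MipSketch {
    // Project a volume indexed as volume[z][y][x] along the z axis:
    // each output pixel holds the maximum intensity along its ray.
    static int[][] mip(int[][][] volume) {
        int depth = volume.length;
        int height = volume[0].length;
        int width = volume[0][0].length;
        int[][] image = new int[height][width];
        for (int z = 0; z < depth; z++)
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                    if (volume[z][y][x] > image[y][x])
                        image[y][x] = volume[z][y][x];
        return image;
    }

    public static void main(String[] args) {
        // Tiny synthetic 2x2x2 volume instead of a real CT dataset.
        int[][][] v = {
            {{10, 20}, {30, 40}},
            {{50,  5}, { 7, 60}}
        };
        int[][] img = mip(v);
        System.out.println(img[0][0] + " " + img[0][1] + " "
                         + img[1][0] + " " + img[1][1]);
        // prints "50 20 30 60"
    }
}
```

Since each pixel's maximum is independent, the image splits naturally into tiles that different cluster nodes can render in parallel, which is what makes MIP a good fit for the distributed pipeline described above.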

Several articles for Linux Magazine France: