
HC 2004

ARTHUR: A Collaborative Augmented Environment for Architectural Design and Urban Planning

  1. Wolfgang Broll Fraunhofer Institute for Applied Information Technology (FIT)
  2. Irma Lindt Fraunhofer Institute for Applied Information Technology (FIT)
  3. Jan Ohlenburg Fraunhofer Institute for Applied Information Technology (FIT)
  4. Michael Wittkämper Fraunhofer Institute for Applied Information Technology (FIT)
  5. Chunrong Yuan Fraunhofer Institute for Applied Information Technology (FIT)
  6. Thomas Novotny Fraunhofer Institute for Applied Information Technology (FIT)
  7. Ava Fatah gen. Schieck The Bartlett, University College London
  8. Chiron Mottram The Bartlett, University College London
  9. Andreas Strothmann Linie 4 Architekten


Projects in the area of architectural design and urban planning typically engage several architects as well as experts from other professions. While the design and review meetings thus often involve a large number of cooperating participants, the actual design is still done by the individuals in the time in between those meetings using desktop PCs and CAD applications. A real collaborative approach to architectural design and urban planning is often limited to early paper-based sketches. In order to overcome these limitations, we designed and realized the ARTHUR system, an Augmented Reality (AR) enhanced round table to support complex design and planning decisions for architects. While AR has been applied to this area earlier, our approach does not try to replace the use of CAD systems but rather integrates them seamlessly into the collaborative AR environment. The approach is enhanced by intuitive interaction mechanisms that can be easily configured for different application scenarios.

  1. submitted: 2004-10-01,
  2. accepted: 2004-11-19,
  3. published: 2004-12-13


1. Introduction

Architectural design and urban planning - at least for sophisticated projects - have always involved highly cooperative tasks. Individual phases within a project often alternate between close cooperative situations, for instance during design and review meetings, and individual work carried out by the participants or third parties. Between these meetings, the people involved of course contribute to a common goal, but each focuses on individual parts or aspects of it. During the design and review meetings, problems are discussed and solutions or alternatives are proposed. However, the actual preparation of particular solutions is once more performed by the individuals (leaving the final decision to one of the following meetings). Real collaboration is often limited to the creation of very early design sketches. From an architect's point of view, it would be desirable to have an additional support tool that improves cooperation in a way that enables real collaboration within the meetings. This would in turn allow for much faster design and review cycles.

In this paper we will present our approach to an AR system supporting the different phases of architectural design and urban planning. The goal of our approach was to provide an environment and tools to support the collaboration between the experts involved in these particular meetings, without radically altering or even replacing established working procedures, or accepted tools and mechanisms. Rather, we tried to enhance the meeting situations by integrating them into the work of the individual contributors. This approach is reflected by the use of intuitive interaction mechanisms, which allow even untrained users to benefit from the enhancements provided by the AR environment. Additionally, existing tools such as CAD systems and simulation programs already used by the people involved were integrated to provide a rather seamless transition between their individual daily work and the collaborative work at the round table meetings.

Our paper is structured as follows: in section 2 we will discuss related work, before providing a brief overview of the ARTHUR system in section 3. In section 4 we will describe the major application areas and how the system has been tailored to support them. In section 5 we will present the feedback we received from user tests, followed by the conclusion and a look at future work in section 6.

2. Related Work

Recently, several approaches to deploy AR in the area of architectural design and urban planning have been made. For construction sites, for instance, it has been proposed that an AR system might provide users with an "X-ray vision" inside a building, allowing them to see, for instance, where the pipes, electric ducting, and structural supports are situated inside walls and above ceilings [ FWK96, WFMMK96 ]. Such systems visualize hidden features of a building and are well suited to support maintenance and repair. AR is also useful to get a realistic impression of existing plans. Thomas et al. [ TPG99 ] developed, for example, a system for an outdoor visualization of construction plans. Designs are exported from a CAD application and displayed in their physical outdoor context using the TINMITH2 system.

An approach that goes beyond the visualization of spatial designs is the support of collaborative design and planning tasks. Common meetings are enhanced with AR technology to allow for a joint view and collaborative manipulation of complex spatial problems. The following subsections summarize AR systems for the collaborative design of products and production lines, as well as for architecture and urban planning.

An early prototype for collaborative planning is BUILD-IT [ RFK97 ]. BUILD-IT supports engineers in designing assembly lines and building plants. The technical infrastructure is based upon a table top interaction area, enhanced by a projection of a 2D computer scene onto the table top. Additionally, a video camera is used to track small, specialized bricks that can be used as "universal interaction handlers". A second, vertical projection screen provides a 3D view of the virtual scene.

MagicMeeting [ RWB02 ] supports product review meetings by augmenting a real meeting location. Instead of real mock-ups, virtual 3D models are used, which may be loaded into the environment from common desktop applications or from Personal Digital Assistants (PDAs). The MagicMeeting system explores several interaction techniques, such as the MagicBook metaphor [ BKP01 ], annotations, and a clipping plane tool. The 3D models are linked to physical placeholder objects, realizing a tangible interface.

AR-Planning Tool [ GFM02 ] supports the collaborative planning of production lines. Machines are modelled as virtual building blocks and can be positioned by the user with a visually tracked paddle. The system checks the validity of the planned production line, using a database with metadata for each machine. The user wears a video-augmented HMD to see the virtual machines.

The Luminous Table [ IBJU02 ], developed by the Tangible Media Group, integrates sketches, physical models, and computational simulations into a single workspace. 2D drawings and 3D physical models are augmented with a 2D video projection to simulate sunlight shadows, wind patterns, and traffic. The physical objects are tracked with cameras.

ARVIKA [ Fri02 ] realized AR applications for development, production and servicing. One of the applications developed within ARVIKA allows for collaborative plant design. Paper-based floor plans are augmented with virtual objects to get a better impression of the current planning. Placeholder objects are used to position virtual objects; menus can be used to insert or delete objects.

AR systems for collaborative design and planning typically support the spatial composition of larger designs from existing building blocks (compare BUILD-IT). They integrate planning rules (compare AR Planning Tool) and sophisticated interaction metaphors (compare MagicMeeting), and they have mature concepts for integrating physical and digital workspaces (compare Luminous Table and ARVIKA).

Still, they are limited with regard to intuitive interaction mechanisms and functionality. ARTHUR tries to preserve the natural communication and collaboration between meeting participants. We use optical augmentation and wireless computer-vision-based trackers to allow for natural 3D collaboration. Virtual objects are displayed using stereoscopic visualization to seamlessly integrate them into the physical environment.

Another important issue is the amount of modifications which may be applied to the original model using the facilities of the AR system. In ARTHUR geometry is not just imported from modelling software, but can be directly created within the AR environment. Additionally, ARTHUR integrates a full professional CAD system to support more advanced functionality, such as 3D sketching and extrusions.

3. The ARTHUR System

The ARTHUR system is based on several individual components, including the MORGAN AR framework, AR displays, computer vision (CV) based input mechanisms, as well as a Graphical User Interface configuration. The major system components are discussed in more detail in the following subsections. For further information on the individual components of the ARTHUR system see also [ MSL04 ] and [ GMS03 ].

3.1. The MORGAN Framework

Our AR framework MORGAN consists of three major parts: the 3D visualization component, the distribution and communication framework, and the developers' application interface [ OHL04 ].

The 3D visualization component allows the user to see the ARTHUR scenarios in 3D using head-mounted displays or other output devices. Rendering can be either stereoscopic (quad-buffered or dual screen) or monoscopic. Augmentation within ARTHUR is usually done using optical see-through augmentation, i.e. the virtual image optically superimposes the real environment. The 3D visualization component additionally supports video see-through augmentation, in which the image from a head-mounted camera is overlaid with the virtual scene components. This technique is used for screen-based or projection-based presentations as well as for non-see-through head-mounted displays.

The 3D visualization component is based on a component-based scene graph architecture. While an optimized internal scene graph is used to perform the actual rendering, external scene graphs are attached to support and preserve individual native scene graph structures. This architecture allows us to use, for instance, VRML'97 for the description of the 3D user interface elements, as well as for objects created within ARTHUR, while e.g. CAD objects may use their own individual external scene graph (see also section 4.3 ). The scene graph has been enhanced in order to specifically support AR (phantom objects, video and image backgrounds) and fundamental AR/VR interaction techniques (highlighting of objects, universal picking, and collision detection). In the overall system, there exists one visualization component for each individual user; thus, rendering is performed locally. In order to achieve this, the scene graphs are replicated among the individual visualization components and kept synchronized upon changes.
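The replication scheme described above can be sketched in a few lines. The following is an illustrative Python sketch, not MORGAN code - the class and method names are invented - in which a change applied to one user's scene graph is pushed to the other users' replicas:

```python
class SceneNode:
    """A minimal scene-graph node: an id and a transform."""
    def __init__(self, node_id, transform=(0.0, 0.0, 0.0)):
        self.node_id = node_id
        self.transform = transform

class ReplicatedSceneGraph:
    """One instance per user; local changes are broadcast to all replicas."""
    def __init__(self):
        self.nodes = {}
        self.replicas = []

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def set_transform(self, node_id, transform):
        # Apply the change locally, then push it to every replica.
        self.nodes[node_id].transform = transform
        for replica in self.replicas:
            replica.apply_update(node_id, transform)

    def apply_update(self, node_id, transform):
        # Remote update: apply without re-broadcasting.
        if node_id in self.nodes:
            self.nodes[node_id].transform = transform

# One graph per visualization component, kept synchronized upon changes.
master = ReplicatedSceneGraph()
user_view = ReplicatedSceneGraph()
for graph in (master, user_view):
    graph.add_node(SceneNode("building_1"))
master.replicas.append(user_view)
master.set_transform("building_1", (10.0, 0.0, 5.0))
```

Because every user renders locally from a full replica, only small change notifications cross the network, not rendered frames.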

Our AR framework MORGAN provides the distribution and communication mechanisms required to connect the input devices, such as head tracking (e.g. InterSense IS900 or InertiaCube2) and computer vision input (placeholder tracking, pointer and finger tracking, gesture recognition, alternative head tracking), to other system components, such as the 3D stereo visualization components (see Figure 1 ).

The MORGAN API is a C++ based interface to the AR framework. It provides application programmers with an interface to the input devices connected to the system (tracking, object and gesture recognition), as well as to standard devices, such as mouse and keyboard. Information on current scene graph objects can be queried; objects may be created, modified, replaced or deleted. Additionally, the application programmer can access more advanced scene graph operations required to realize user interface operations, such as picking of objects along a ray, or collision detection between objects.
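Picking objects along a ray, as exposed by the API, can be illustrated with a standard slab test against axis-aligned bounding boxes. This is a hedged Python sketch of the technique only, not the actual C++ interface; the function and object names are invented:

```python
def pick_along_ray(origin, direction, boxes):
    """Return the ids of axis-aligned boxes hit by the ray, nearest first.
    boxes maps an object id to a (min_corner, max_corner) pair."""
    hits = []
    for box_id, (lo, hi) in boxes.items():
        tmin, tmax = 0.0, float("inf")
        hit = True
        for o, d, l, h in zip(origin, direction, lo, hi):
            if abs(d) < 1e-9:
                # Ray parallel to this slab: must start inside it.
                if o < l or o > h:
                    hit = False
                    break
            else:
                t1, t2 = (l - o) / d, (h - o) / d
                if t1 > t2:
                    t1, t2 = t2, t1
                tmin, tmax = max(tmin, t1), min(tmax, t2)
                if tmin > tmax:
                    hit = False
                    break
        if hit:
            hits.append((tmin, box_id))
    return [box_id for _, box_id in sorted(hits)]

boxes = {"table": ((-1, 0, -1), (1, 0.8, 1)),
         "tower": ((2, 0, 2), (3, 5, 3))}
# A ray from roughly eye height toward the tower:
picked = pick_along_ray((0, 1, 0), (1, 0.5, 1), boxes)
```

Sorting the hits by entry distance makes the nearest object the natural selection target, which is the usual behavior for ray picking in a 3D user interface.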

3.4. Graphical User Interface Configuration

A graphical language (GRAIL) was developed to allow users to configure the relationships between the input mechanisms (the 3DOF placeholder objects (PHOs), the 5DOF pointers, and the command gestures) and the virtual objects. GRAIL also provides the user with the ability to link the ARTHUR system to external applications, such as pedestrian and environmental analysis applications and project planning and cost estimating applications, using scripting commands. The GRAIL application resides on top of the MORGAN API (see section 3.1 ) and acts as a tool-building tool for developing AR user interfaces that define the properties and characteristics of the overall interaction environment. It allows users to create a range of tools by graphically defining the relationship between input mechanisms and virtual objects (see Figure 4 ). For instance, a two-finger gesture could be used as a tool for creating virtual boxes in the AR environment.
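Conceptually, such a tool is a mapping from an input mechanism to an action on the virtual scene. The sketch below illustrates this idea in Python; the real system defines these links graphically rather than in code, and all names here are invented:

```python
class ToolBindings:
    """Maps a named input mechanism to an action on the virtual scene."""
    def __init__(self):
        self._bindings = {}

    def bind(self, input_name, action):
        self._bindings[input_name] = action

    def handle(self, input_name, **event):
        # Dispatch an input event to its bound action, if any.
        action = self._bindings.get(input_name)
        return action(**event) if action else None

scene = []

def create_box(position, size=1.0):
    """Action: add a virtual box to the shared scene."""
    box = {"type": "box", "position": position, "size": size}
    scene.append(box)
    return box

tools = ToolBindings()
# Bind a two-finger gesture to box creation, as in the example above.
tools.bind("two_finger_gesture", create_box)
tools.handle("two_finger_gesture", position=(0.2, 0.0, 0.4))
```

Because the binding table is data rather than code, the same mechanism can be reconfigured per application scenario, which mirrors the configurability described above.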

4. Application Scenarios

In this section we will describe three application scenarios, which are typical for the use of the ARTHUR environment. The first scenario represents a typical design or review meeting situation. In the second scenario the presentation of complex data enhanced by simulations is demonstrated. The third scenario finally demonstrates the close integration with existing CAD software.

The task of creating a system such as ARTHUR demands a high degree of communication and collaboration with potential users, in this case trained architects. To ensure the success of this undertaking, the technique of scenario-based design [ RC02 ] was adopted for developing the system. The three scenarios were designed collaboratively by the application partners and designers, in order to provide feedback on both the user and the technical level.

4.1. Supporting Architectural Design and Review Meetings

In this scenario, an urban context model of the City of London served as a testing ground for both architectural and urban design. The scenario is based on a real-world example, namely the recently completed Swiss Re high-rise building. Issues involved in designing and planning this building that can be supported by the ARTHUR environment included the decision about the building area, the basic shape of the building, the adaptation of this shape with respect to the environment (i.e. surrounding buildings), the overall review of the final model, and the final presentation.

For the first task, the model of London was displayed on the table-top, allowing the users to find a suitable building space. This included the possibility to remove existing buildings in order to create an appropriate freed-up space within the model. Users were then free to introduce new basic 3D objects and manipulate them in the manner of 3D sketching. While the final basic shape ("the cigar") was created outside the ARTHUR environment, the system allowed users to manipulate this shape interactively by using placeholder objects on the table or the 3D wand. Once the final shape was decided upon, additional fine tuning regarding the size and exact location within the desired space was performed. For the final review, the rather coarse design model was replaced by a more refined model of the actual building (see Figure 5 ).

4.2. Integration with Simulation Software

Urban planning decisions typically require a thorough consideration of different alternatives, involving aspects such as the overall design, affordability, or the impact on the local environment. While costs can be easily calculated, design decisions are more complex. They are often based on real models, and their impact needs to be investigated by studies or simulations. The decision process itself, however, often lacks a comprehensive mechanism to combine all relevant information. In our approach, we integrate the various aspects into the ARTHUR system, facilitating more efficient decision making. We will demonstrate the approach with a pedestrian simulation within a cityscape scenario.

We have implemented spatial agents that respond dynamically to the changes in the locations of objects on the table. The movement rules for these agents are based on space syntax theory. Space syntax deals with the configurational properties of environments. Spatial agents use vision to assess the configuration, and move towards open space by a stochastic process: i.e. by choosing a destination at random from the available space, and walking towards it.

In this way, they are configurational explorers. The rules are: walk 3 steps, look around and choose a new destination, walk 3 steps, and so on. If their field of view is set to 170° (approximately human vision), the agents start to move, on aggregate, in a human-like manner. In an experiment within a 1.5 km square area of the City of London, there is a correlation coefficient of R²=0.67 between agents and actual pedestrian numbers measured at 78 "gates" randomly located within the area [ Tur03, MSL04 ]. Thus, while the movement of an individual agent will typically not closely resemble that of a human counterpart, the aggregate movement of a large number of agents is quite insightful and allows users to draw conclusions regarding the position of new buildings.
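The walk/look-around cycle can be sketched on a small occupancy grid. This illustrative Python sketch follows the stated rules (a random visible destination, a 170° field of view, three steps at a time) but simplifies heavily - in particular it omits proper line-of-sight tests - and is not the ARTHUR agent engine:

```python
import math
import random

def visible_open_cells(grid, pos, heading, fov_deg=170.0):
    """Open cells within the agent's field of view (occlusion by
    buildings is ignored here for brevity)."""
    visible = []
    for y, row in enumerate(grid):
        for x, is_open in enumerate(row):
            if not is_open or (x, y) == pos:
                continue
            angle = math.degrees(math.atan2(y - pos[1], x - pos[0]))
            diff = (angle - heading + 180.0) % 360.0 - 180.0
            if abs(diff) <= fov_deg / 2.0:
                visible.append((x, y))
    return visible

def step_agent(grid, pos, heading, rng, n_steps=3):
    """Choose a random visible destination, then walk up to n_steps
    toward it; stop early when blocked by a building."""
    options = visible_open_cells(grid, pos, heading)
    if not options:
        return pos, heading
    dest = rng.choice(options)
    heading = math.degrees(math.atan2(dest[1] - pos[1], dest[0] - pos[0]))
    for _ in range(n_steps):
        if pos == dest:
            break
        dx = (dest[0] > pos[0]) - (dest[0] < pos[0])
        dy = (dest[1] > pos[1]) - (dest[1] < pos[1])
        nxt = (pos[0] + dx, pos[1] + dy)
        if not grid[nxt[1]][nxt[0]]:
            break  # blocked; re-orient at the next look-around
        pos = nxt
    return pos, heading

# 1 = open space, 0 = building footprint.
grid = [[1, 1, 1, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1]]
rng = random.Random(42)
pos, heading = (0, 0), 0.0
for _ in range(5):
    pos, heading = step_agent(grid, pos, heading, rng)
```

Individual trajectories produced this way are noisy, but, as the text notes, it is the aggregate flow of many such agents that correlates with observed pedestrian movement.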

The system allows placement of a specified number of agents within the cityscape scenario. The MORGAN API allows the agent simulation engine to talk to the augmented reality application. A simple object model feeds the new agents' locations to the front end, whilst the front end feeds current building locations to the simulation engine. Agent animations are then executed locally at the front end, using a dead-reckoning algorithm. The actual walking speed is determined by the distance to the new position. As soon as a new position update is received, the new direction is locally re-calculated for the individual agent, and the new travel path is started. All agents share a single walking animation in order to keep the visualization scalable to even several hundred agents simultaneously. It is possible to set up the agents' starting locations and directions, for instance, for simulating the effect of transportation terminals within the system. Buildings are usually attached to a PHO in order to be moved (e.g. the cathedral, see Figure 6 and Figure 7 ). The agent simulation program will prevent agents from choosing a path leading through dynamic objects (see Figure 8 ). Currently, the representation of these dynamic objects within the agent simulation is limited to the bounding boxes of these objects.
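The front-end dead reckoning can be sketched as follows. This is an illustrative Python sketch under the assumptions stated above - the speed is derived from the distance to the newly received position, and the direction is recomputed on each update - with invented class and method names:

```python
class DeadReckonedAgent:
    """Front-end proxy for one agent: walks toward the last position
    received from the simulation engine, at a speed chosen so the gap
    is covered within roughly one update interval."""
    def __init__(self, position, update_interval=1.0):
        self.position = list(position)
        self.target = list(position)
        self.update_interval = update_interval
        self.speed = 0.0

    def receive_update(self, new_target):
        # New position from the engine: re-aim, and derive the walking
        # speed from the distance to the new position.
        self.target = list(new_target)
        gap = [t - p for t, p in zip(self.target, self.position)]
        self.speed = sum(g * g for g in gap) ** 0.5 / self.update_interval

    def animate(self, dt):
        # Called once per rendered frame; moves toward the target
        # without overshooting it.
        remaining = [t - p for t, p in zip(self.target, self.position)]
        dist = sum(r * r for r in remaining) ** 0.5
        if dist < 1e-9:
            return
        step = min(self.speed * dt, dist)
        self.position = [p + r / dist * step
                         for p, r in zip(self.position, remaining)]

agent = DeadReckonedAgent((0.0, 0.0))
agent.receive_update((4.0, 3.0))  # engine update: 5 units away
for _ in range(10):               # ten frames of 0.1 s each
    agent.animate(0.1)
```

Running the animation locally in this way keeps the simulation engine's update rate decoupled from the rendering frame rate, which is what makes several hundred agents feasible.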

4.3. Seamless Integration with a CAD System

Architectural design without the possibility to make changes to the virtual model does not provide any major advantage compared to the use of real static models, since the results of a review have to be integrated into the virtual model individually using CAD software. Frequently, the designers actually carrying out this integration task do not even take part in the review process.

So far, CAD system integration into AR or VR has usually meant exporting the CAD data into a specific 3D graphics format, such as VRML'97. This method has several drawbacks, including the loss of geometric detail or precision and the loss of object semantics. The major drawback, however, is that an interactive modification of the original model is not possible: either it results in multiple back-and-forth conversions between the individual formats, or it requires all changes to be repeated within the original CAD system.

To overcome these limitations, a development prototype of the forthcoming version of the MicroStation CAD system has been integrated into the ARTHUR system. This allows users to model their objects in 3D above the table-top (see Figure 10 ) without the need to switch back to the 2D CAD desktop (see Figure 9 ). Users may create or interact with objects either by using the 3D pointer or finger gestures. Virtual menus located above the augmented round table allow different operations to be performed directly within the CAD software, e.g. creating 3D geometry (such as drawing a sphere, a torus, or a B-spline), changing colors, extruding surfaces, and moving, copying or deleting objects. Basically, all operations available within the CAD software can be integrated into the ARTHUR system. To this end, 3D menus providing appropriate entries representing the commands or user interface elements of the CAD system have to be supplied to the ARTHUR system (see Figure 11 ). This step is rather simple, as it only requires the specification of an appropriate user interface description file using XML (see Figure 12 ). By selecting or executing the 3D user interface elements, the appropriate commands, in conjunction with the relevant 3D input data, are sent to the CAD software.
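The idea of an XML-based menu description can be illustrated with a small sketch. Both the file format and the command strings below are invented for illustration - the actual ARTHUR description format and the CAD system's command set are not reproduced here:

```python
import xml.etree.ElementTree as ET

# Hypothetical menu description; each entry maps a 3D menu label
# to a command string forwarded to the CAD system.
MENU_XML = """
<menu name="create">
  <entry label="Sphere"  command="PLACE SPHERE"/>
  <entry label="Torus"   command="PLACE TORUS"/>
  <entry label="Extrude" command="EXTRUDE SURFACE"/>
</menu>
"""

def load_menu(xml_text):
    """Parse a menu description into its name and a
    label -> command mapping."""
    root = ET.fromstring(xml_text)
    entries = {e.get("label"): e.get("command")
               for e in root.findall("entry")}
    return root.get("name"), entries

def select_entry(menu, label, send_command):
    """Selecting a 3D menu item forwards its command to the CAD system."""
    return send_command(menu[label])

sent = []
name, menu = load_menu(MENU_XML)
select_entry(menu, "Sphere", sent.append)
```

Because the menus are plain data, adding a further CAD operation to the AR environment amounts to editing the description file rather than changing any code, which matches the "rather simple" integration step described above.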

Only a single instance of the CAD system is executed at a time, ensuring a consistent CAD model. Each user's input is sent to the CAD integration component, which ensures the synchronization of the views of the individual users. Using the update mechanism of the CAD system, the component distributes the results of the interaction to the attached visualization components. Since the CAD data is not converted, all semantic information is preserved. This is also true for behavior attached to objects, such as kinematics.

User tests revealed that experienced CAD users were able to use the integrated system even without training. The main reasons were the similarity of the menu structure and its look and feel, and the natural and intuitive 3D input facilities provided by the ARTHUR input mechanisms.

5. User Experience

Although CAD systems have been widely used by architects, their potential has never truly exceeded that of a powerful drawing and visualization tool. Design itself is still predominantly achieved by means of hand drawings and physical sketch models.

ARTHUR provides designers with a new instrument that links digital 3D models to interaction mechanisms similar to those of the real world. In contrast to existing, highly complex, and often confusing CAD interfaces that stand in the way of design creativity, ARTHUR offers a simple and intuitive interface for design creation.

Furthermore, it enables designers to truly enter into a collaborative form of design that goes beyond taking turns or creating individually, and that is thus far not provided by any other design tool. Indeed, one of the most interesting aspects of the ARTHUR development lies in what it seems to reveal about the way designers collaborate.

However, it was also noticed that such a system will probably not replace existing CAD or desktop-based 3D modelling software in the near future. For several tasks, especially where high precision is required, traditional user interfaces still provide significant advantages regarding quality and the time needed.

Two different kinds of behavior were noticeable in our user tests of collaboration. In the first, one member of the team would take charge of the process and direct actions. This is common in design teams in most architects' offices. In the second, collaborators began to play games, particularly when users were faced with simulated pedestrian movement. We believe that creating architectural forms and working on a task collaboratively became a game that users enjoyed, and, as a result, this enhanced the level of collaboration [ PMgS04, gSPM04 ]. "The actual making of decisions about forms in space - had a strong and inevitable social dimension, and as such was influenced by the way in which involved parties interacted" [ HG88 ]. In addition, the agents' appearance on the design table encouraged the users to understand structures within space as a dynamic experience rather than a static one (through agents moving between spaces), and the interaction with the agents became an integral part of the designers' conversation with the emerging design on the design table [ gSPM04 ].

Our findings to date indicate that the ARTHUR system provides new opportunities, but also challenges, for human-computer interaction in design, whilst staying true to the core requirements of architectural form creation based on visual perception, spatial relations [ Aic91 ] and social behavior.


[ Aic91] Otl Aicher, Analog und Digital , Ernst & Sohn: 1991, isbn 3-433-02176-7.

[ Fri02] Wolfgang Friedrich, arvika - Augmented Reality for Development, Production and Service , Proceedings of the ieee/acm International Symposium on Mixed and Augmented Reality ismar 2002, 2002, pp. 3—4, ieee Computer Society, isbn 0-7695-1781-1.

[ FWK96] Steven K. Feiner, Anthony C. Webster, Theodore Krueger, Blair MacIntyre, and Edward J. Keller, Architectural Anatomy, Presence: Teleoperators and Virtual Environments 1995, 4 (3), 318—325, issn 1054-7460.

[ GFM02] Jürgen Gausemeier, Jürgen Fründ, and Carsten Matysczok, AR-Planning Tool - Designing Flexible Manufacturing Systems with Augmented Reality , Proceedings of the Eighth Eurographics Workshop on Virtual Environments, 2002, 19—25, Barcelona,Spain, Eurographics Association.

[ gSPM04] Ava Fatah gen. Schieck, Alan Penn, and Chiron Mottram, Interactive Space Creation Through Play , Proceedings of the 8th International Conference: Information Visualization iv04, 2004.

[ HG88] N. John Habraken, and Mark D. Gross, Concept design games, Design Studies, 9 (1988), no. 3, 150—158.

[ IBJU02] Hiroshi Ishii, Eran Ben-Joseph, John Underkoffler, Luke Yeung, Dan Chak, Zahra Kanji, and Ben Piper, Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation , Proceedings of the ieee/acm International Symposium on Mixed and Augmented Reality ismar 2002, 2002, 203—211, ieee Computer Society, isbn 0-7695-1781-1.

[ PMgS04] Alan Penn, Chiron Mottram, Ava Fatah gen. Schieck, Michael Wittkämper, Moritz Störring, Odd Romell, Andreas Strothmann, and Francis Aish, Augmented Reality meeting table: a novel multi-user interface for architectural design , Recent Advances in Design and Decision Support Systems in Architecture and Urban Planning, Jos P. van Leeuwen, and H. Timmermans (eds.), 2004, Kluwer Academic Publishers, 213—231, isbn 1-4020-2409-6.

[ RC02] Mary Beth Rosson, and John M. Carroll, Scenario-based usability engineering , Proceedings of the Symposium on Designing Interactive Systems, 2002, p. 413, isbn 1-58113-515-7.

[ TPG99] Bruce Thomas, Wayne Piekarski, and Bernard K. Gunther, Using Augmented Reality to Visualise Architecture Designs in Outdoor Environment, International Journal of Design Computing: Special Issue on Design Computing on the Net (dcnet'99), 2, (1999), issn 1329-7147.

[ Tur03] Alan Turner, Analysing the Visual Dynamics of Spatial Morphology, Environment and Planning B: Planning and Design, 30, (2003), no. 5, 657—676.

Additional Material


Type Video
Filesize 72Mb
Length 2:40 min
Language English
Videocodec DivX5.0
Audiocodec PCM Audio, no Codec required
Resolution 720x576

This video demonstrates the integration of a commercial CAD system into the ARTHUR environment.

Arthur Video 1

Type Video
Filesize 45Mb
Length 1:38 min
Language English
Videocodec DivX5.0
Audiocodec PCM Audio, no Codec required
Resolution 720x576

Documentary video of the Augmented Round-Table for Architecture and Urban Planning that demonstrates the use of interactive simulation of pedestrian movement.

Arthur Video 2




Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.