Issue 4.2007

EuroITV 2006

Methods and Applications in Interactive Broadcasting

  1. Konstantinos Chorianopoulos, Bauhaus University Weimar
  2. George Lekakos, Athens University of Economics & Business (AUEB)

Abstract

Interactive TV technology has been addressed in many previous works, but research on interactive content broadcasting and on supporting the production process remains sparse. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, the support for user-contributed content has the potential to revolutionize the hierarchical TV production process by introducing the viewer as part of the content delivery chain.

Published: 2007-07-31

1.  Introduction

In this special section, we investigate the opportunities offered by interactive TV (ITV) systems, with a particular focus on applications and methods in interactive broadcasting. Interactive broadcasting is defined as the production process that prepares the audiovisual assets and the computer programs for an interactive TV end-user experience. Contemporary ITV research has focused on application development for ITV systems and has treated the audiovisual assets separately, but the articles in this special section showcase alternative applications and approaches for the widely deployed ITV systems. ITV systems encompass applications that run on video and multimedia servers, advanced set-top boxes (STBs), home media computers, and mobile terminals. Still, the term ITV has been a buzzword with as many supporters as opponents. One explanation is that interactivity has been used to describe a technological feature of the media as much as it has been used to characterize a way of using the media [WCGH99].

Before we start the presentation of interactive broadcasting methods and applications, we should define explicitly what interactive TV is. The answer depends on who is asked: 1) an engineer would assume digital broadcast and a return channel, 2) a content producer would refer to interactive graphics and dynamic editing, 3) a media professional would describe new content formats such as betting, interactive storytelling, and play-along quiz games, and 4) a sociologist's definition would focus on the interaction between people about TV shows. While none of the above definitions agrees with the others, all of them are valid. However, in this article, we focus on the technical and content production aspects, which are represented by the first two definitions. Moreover, we assume that ITV applications are not limited to the traditional linear video capturing, editing, and broadcast delivery. In this context, we examine alternative and complementary production and distribution methods, such as virtual TV studios, augmented reality, cross media distribution, and user-contributed content. In the rest of this article, we explore methods and applications in interactive broadcasting. First, we present a background on interactive broadcasting, then we analyze the state of the art and, finally, we conclude with directions for further research in this important area.

2.  Interactive Broadcasting

ITV research has focused on application development for ITV systems and has treated the audiovisual content separately. However, the articles in the special section showcase alternative applications and emerging approaches over widely deployed ITV systems. In the past, ITV technology has been employed for broadcasting, but the focus was on the low level, such as Asynchronous Transfer Mode (ATM), bi-directional cable [Fur96], internet multicasting [FWI98], and video compression [FWI99]. At a higher level, emerging interactive broadcasting research proposes an integrated approach to the development of the audiovisual assets and the computer programs. For example, augmented reality studio and advanced STB technology provide real-time rendering of interactive computer graphics with video feeds. Moreover, the availability of multiple TV distribution networks and user terminals emphasizes the role of cross media publishing, where ITV productions are deployed over TV sets, mobile phones, and personal computers. Finally, the consideration of the end-user as an active participant in the production is based on the technological convergence of the web and TV.

One significant advance in ITV technology is Augmented Reality (AR), which blends real scenes with interactive computer graphics that are controlled either at the TV studio or at end-user terminals. In contrast to traditional virtual reality (VR), the real environment is not completely suppressed, but plays a dominant role. Rather than immersing a person into a completely synthetic world, AR attempts to embed and seamlessly integrate dynamic computer graphics into the captured video feed of the real environment [Pos01]. Thus, the user interfaces of both the television production team and of the viewer are changed dramatically with AR technology. This research direction complements previous work on user interface programming toolkits that smooth out the learning curve of the development process and encapsulate the details of the implementation [CS04]. An additional application of AR in ITV is studio technology: for example, providing information to moderators, actors, and participants during a live broadcast or a recording, such as step sequences marked on the TV studio floor.
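
The blending described above comes down to per-pixel compositing of a rendered graphics layer over the captured video. As a minimal illustration (a generic sketch of the "over" operator, not taken from any of the cited systems), using raw (R, G, B) pixels:

```python
# Illustrative sketch: per-pixel alpha compositing, the basic operation an
# AR overlay performs on a captured video frame. Pixel values are (R, G, B)
# tuples in 0..255; alpha is 0.0 (pure video) .. 1.0 (pure graphics).

def composite(video_px, graphics_px, alpha):
    """Blend a computer-graphics pixel over a video pixel (over operator)."""
    return tuple(round(alpha * g + (1.0 - alpha) * v)
                 for g, v in zip(graphics_px, video_px))

def overlay_frame(frame, graphics, matte):
    """Composite a graphics layer over a video frame using a per-pixel matte."""
    return [[composite(frame[y][x], graphics[y][x], matte[y][x])
             for x in range(len(frame[0]))]
            for y in range(len(frame))]
```

A per-pixel matte (rather than one global alpha) is what lets virtual objects appear behind real ones: the matte is simply zero wherever the real scene should win.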

The concept of a more immersive television experience has been a theme of previous research. Inhabited TV [CBG00] involves the development of collaborative virtual environments within which viewers can interact. This extends the TV experience by enabling social interaction among participants and by offering them new forms of control over narrative structure and greater interaction with content. In an inhabited TV application, the television becomes part of a group interaction within the virtual online world as well as in the living room. In this situation, the television becomes not only a social actor, but also a place to be [Ada92].

3.  State-of-the-Art

Here, we examine the contribution of each article in the special section. The contributions are categorized along the following dimensions: 1) an application domain that frames the practical implications of the respective contribution, 2) research issues that have not been addressed by previous research, and 3) the methodology that the respective authors employed to approach the research issue. Given the broad range of interactive broadcasting research, it is appropriate to consider a broad range of research themes, issues, and methodologies. Accordingly, the selection of articles in this special section is representative of the diversity of research in the ITV field.

3.1.  Application Domains

In the past, broadcast content was mainly edited and enhanced during post-production, but the introduction of AR technology in the studio addresses several contemporary problems in real-time (during the recording). Many practical issues are addressed by AR technology in the TV studio, such as: 1) dynamic re-illumination of studio settings and actors without physical modification of the lighting equipment, 2) marker-based in-shot tracking of studio cameras without visible markers, 3) dynamic presentation of unrecorded direction, moderation, and other information anywhere within the studio, and 4) integration of imperceptible coded patterns that support continuous online calibration, camera tracking, and acquisition of scene depth. Bimber et al. investigate a unified and flexible solution for the above: the application of digital light projection for studio illumination, either exclusively or in combination with analog lighting. In addition, digital illumination opens new opportunities for novel television productions.

Olaizola et al. discuss the concept of integrating a real scene with spatially rendered objects at the end-user terminal. They describe the implementation of a sports game (the traditional Basque ball game 'pelota') for AR ITV, where interactivity and AR techniques give the viewer control over the final video feed. A virtual 3D rendering of the real scene allows observation of the sports game from any angle, while the viewer may switch to a real camera point of view at any time. The authors argue that this option makes it possible to render points of view that would be impossible with real cameras. For instance, a camera could be placed behind the ball, or on top of one player.

The proliferation of multimedia content and the growth of digital video libraries have motivated research and development of video retrieval systems [Vor00]. Video search technology has traditionally been conceived as a way to enable efficient access to large video data collections. The growth of bandwidth and the availability of diverse end-user multimedia terminals (e.g. mobile phones) have raised the issue of end-user cross media video search. Pastra and Piperidis elaborate on technologies crossing the boundaries between traditional broadcast and the internet, and between traditional television and computers. The cross media characteristics of this application domain broaden the scope of developing video-search technologies.

User interface (UI) design and development is a multidisciplinary practice, in which different fields complement each other. In the context of TV, graphic and industrial designers prepare the visual layout and the wire-frames as a mock-up. Next, the software engineers employ the prototype as a guide for the implementation of a functional UI. Crossing the chasm between UI conceptual design and actual implementation is not a trivial process, especially when the UI has to scale dynamically between different screen sizes. Indeed, the main concern in the work by Rauterberg et al. is how to improve the production process by closing the gap between industrial designers and software engineers of TV-based UIs.

The introduction of hybrid broadband wireless networks raises the demand for access to the same high-quality interactive services on a diverse range of terminal devices. The development and maintenance of these services require significant resources when they are implemented with traditional application development methods. Tsekleves and Cosmas propose a methodological framework that encompasses a number of tools and aims to reduce the production cost. In doing so, they contribute to an evolving research field [Spi03] that promises to fulfill the media industry's vision of COPE ('Create Once, Publish Everywhere').

The wide availability of broadband access to the home and the growth of home networking have positioned internet protocols as a complementary solution to digital broadcast systems. In this domain, advanced set-top boxes could become an entry point to information society services, rather than only a terminal for digital TV broadcasts. Redondo et al. investigate the role of residential gateways, which they consider a bridge between the networked home and the broadcast infrastructure. Although there is no wide consensus about their functionalities, they argue that residential gateways could be extended to interoperate with multimedia and entertainment features, like those provided by digital TV terminals.

Cesar et al. propose a paradigm that is new for television, but has been very successful on the web: end-user content enrichment. Contemporary web technologies such as style-sheet customization, social tagging, and user-contributed content have raised the expectations of end-users, who have become accustomed to participating more actively in content production. Traditional multimedia and web authoring tools do not fit the television paradigm. For this purpose, Cesar et al. propose a simple editing interface for use in a relaxed setting. The interface can range from a simple remote control to a tablet computer. The remote control communicates with the set-top box, which stores content enrichments locally. This intermediate file is based on the World Wide Web Consortium's (W3C) Synchronized Multimedia Integration Language (SMIL). The enriched content can be viewed and shared directly within co-located or distributed user groups.

Despite the gradual adoption of interactive digital TV systems, there are very few examples of new genres that combine audiovisual content with interactive elements. In fact, most contemporary ITV systems offer interactive overlays that are loosely related to the broadcast content, or imitate desktop computer applications on the TV [Cho04]. Wages et al. describe an interactive broadcasting framework for the production of live events, which facilitates interactivity and the potential for end-user personalization. In doing so, they describe new professions for future TV production workflows, namely the 'video composer' and the 'live video conductor'.


Table 1.  Practical application domains addressed by the respective papers in the special section

Paper                  Application Domain
Bimber et al.          Augmented reality TV studio
Cesar et al.           End-user content enrichment
Olaizola et al.        Augmented reality sports
Pastra and Piperidis   Cross media video distribution
Rauterberg et al.      Co-operation between industrial designers and software engineers
Redondo et al.         Home networking
Tsekleves and Cosmas   Digital TV application development
Wages et al.           Broadcasting of live events

3.2.  Research Issues

Synthetic re-illumination of environments and actors has been an active topic in computer graphics and computer vision. There are methods that re-illuminate recorded content and methods that re-illuminate a scene with controlled lighting. Bimber et al. investigate the latter category. In particular, they propose projector-based illumination techniques that re-illuminate a physical environment synthetically by employing a number of video projectors. Thereby, the projectors illuminate the real environment directly and on a per-pixel basis, rather than indirectly by projecting onto diffuse screens that scatter light into the environment. The main research issue is that images have to be computed for each projector with the following objectives: first, they must neutralize the physical illumination effects that are caused by each projector as a real point-light source; second, they have to produce the defined virtual lighting situation synthetically.
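
The two objectives can be illustrated with a simplified per-pixel model. The variable names and the linear light model below are my assumptions, not the authors' formulation: each projector pixel value is chosen so that, given the light arriving from other sources and the surface response to this projector, the surface reaches the desired virtual radiance.

```python
# Hedged sketch of per-pixel projector compensation (hypothetical names):
# desired: target radiance of the virtual lighting situation (0..1)
# ambient: light reaching the surface point from other sources (0..1)
# gain:    radiance the surface returns when this projector pixel is fully on

def compensation_value(desired, ambient, gain):
    """Projector input that makes the surface appear 'desired' bright."""
    if gain <= 0.0:
        return 0.0                      # surface unreachable by this projector
    value = (desired - ambient) / gain  # invert the projector's contribution
    return min(1.0, max(0.0, value))    # clip to the projector's dynamic range

def compensation_image(desired, ambient, gain):
    """Apply the per-pixel rule over whole images (nested lists)."""
    return [[compensation_value(d, a, g)
             for d, a, g in zip(dr, ar, gr)]
            for dr, ar, gr in zip(desired, ambient, gain)]
```

The clipping step is where the practical limits show up: a virtual lighting design brighter than the projector can deliver, or darker than the ambient floor, cannot be reproduced at that pixel.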

The ability to perform real-time video mixing between a real scene and a virtual one introduces many creative possibilities for TV producers. In addition, dynamic video mixing between computer graphics and broadcast video could be performed at end-user terminals. Studio effects have been used to place real characters in virtual worlds (e.g. weather news). For example, chroma-key environments (e.g. a blue background) have typically been used to segment the real objects and insert them into a virtual scene, which is controlled by a computer system. Chroma-keying provides a feasible way for segmentation, but it can only be used in controlled environments. Olaizola et al. focus on the main issues in augmented reality ITV, which are tracking, segmentation, 3D registering, and rendering.
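
As a minimal sketch of the chroma-keying baseline described above (the key colour and threshold are arbitrary illustrative values, not from the article):

```python
# Illustrative chroma-key matte: a pixel counts as 'background' when it is
# close to the key colour (e.g. the blue of a studio backdrop), which is why
# the technique only works in controlled environments.

def chroma_matte(pixel, key=(0, 0, 255), threshold=120):
    """Return 1.0 (foreground) or 0.0 (background) for one (R, G, B) pixel."""
    distance = sum((p - k) ** 2 for p, k in zip(pixel, key)) ** 0.5
    return 1.0 if distance > threshold else 0.0

def segment(frame, key=(0, 0, 255), threshold=120):
    """Per-pixel matte for a frame given as nested (R, G, B) lists."""
    return [[chroma_matte(px, key, threshold) for px in row] for row in frame]
```

Uncontrolled scenes break both assumptions at once: nothing guarantees a uniform key colour behind the subject, and foreground objects may themselves contain the key colour. This is the gap that the tracking and segmentation work of Olaizola et al. targets.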

Pastra and Piperidis investigate the research literature on video search, which, in the past, has been elaborated in the context of digital video libraries. Moreover, they consider the technological challenges of distributed video resources (e.g. video uploading on web sites) and hybrid end-user terminals (e.g. ITV, multimedia phones). They focus on the latter, i.e., the development of cross-media decision mechanisms, drawing examples from the retrieval of video for the end-user. They argue that efficient video search holds a key to the usability of the new "pervasive digital video" technologies.

The distribution of audiovisual content to diverse end-user devices and the development of DTV standards for mobile devices have increased the scale ratio between alternative TV screens. Besides dynamic video compression that suits the technological context of end-user terminals, an important issue is the scaling of the UI. Software engineers are concerned with the translation of a UI specification into an implementation for TV products with different screen properties. The software-engineering research issue is how to apply automatic layout and scaling in order to speed up and improve the production process. However, the question is whether a UI design lends itself to such automatic layout and scaling. For this purpose, Rauterberg et al. analyze a prototype UI design done by industrial designers.
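
A toy version of such automatic scaling, assuming a simple proportional model with a legibility guard (an illustration of the problem, not Rauterberg et al.'s actual method):

```python
# Hedged sketch: proportionally map a UI layout, given as (x, y, width, height)
# rectangles, from a reference screen to a target screen. A minimum-size guard
# keeps widgets legible on small displays, at the cost of breaking strict
# proportionality, which is exactly where automatic scaling gets hard.

def scale_rect(rect, src, dst, min_size=8):
    """Map one rectangle from src=(w, h) to dst=(w, h) screen coordinates."""
    sx, sy = dst[0] / src[0], dst[1] / src[1]
    x, y, w, h = rect
    return (round(x * sx), round(y * sy),
            max(min_size, round(w * sx)),
            max(min_size, round(h * sy)))

def scale_layout(rects, src, dst):
    """Scale every rectangle of a layout between the two screens."""
    return [scale_rect(r, src, dst) for r in rects]
```

The open question in the article is precisely whether a designer's layout survives this kind of mechanical transformation, or whether the design intent (alignment, grouping, emphasis) needs explicit annotation.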

In addition to the adaptation of ITV content and of the UI, described above, there is also a need to author an ITV application once and then translate it into as many alternative formats as possible. Tsekleves and Cosmas propose the semi-automatic translation of simulations and rapid prototypes created in popular desktop multimedia authoring packages, such as Macromedia Director, into services ready for broadcast. This approach reduces the production cost, because it integrates prototyping and application generation from a single description. According to Tsekleves and Cosmas, an application generation tool provides an integrated way to cope with the increasing variety of screen sizes and middleware systems.

The interoperability between a broadcast terminal and a home networking one depends on the interoperability between the respective middleware standards. For this purpose, Redondo et al. explore the relationship between DTV applications and the controllers of domestic appliances. In order to ensure interoperability, they work on standard middleware: MHP for the STBs and OSGi for home networking. However, they report that the interoperability between the two standards is not as straightforward as expected. First, contemporary MHP specifications do not consider any networking functionality. Moreover, each middleware has a different approach: MHP is function-oriented, while OSGi is service-oriented. Thus, the main research issue is to develop an abstraction layer that will allow the interoperability of these standards.
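
The abstraction-layer idea can be sketched with a small adapter. The real stacks are Java (MHP Xlets, OSGi bundles); the Python classes and method names below are purely hypothetical stand-ins for the pattern, not Redondo et al.'s API:

```python
# Hypothetical sketch of an abstraction layer between a broadcast application
# and home-network services: the application calls one neutral interface and
# never sees which middleware actually provides the service.

class OSGiLampService:
    """Stand-in for a device service registered in an OSGi gateway."""
    def __init__(self):
        self.on = False

    def switch(self, state):
        self.on = state

class HomeDeviceBridge:
    """Neutral layer that an MHP-side application would call."""
    def __init__(self):
        self._services = {}

    def register(self, name, service):
        self._services[name] = service

    def invoke(self, name, *args):
        return self._services[name].switch(*args)

bridge = HomeDeviceBridge()
lamp = OSGiLampService()
bridge.register("living-room-lamp", lamp)
bridge.invoke("living-room-lamp", True)  # broadcast app switches the lamp on
```

The registry-style `invoke` mirrors OSGi's service-oriented view; wrapping it behind a fixed bridge interface is what makes it callable from the function-oriented MHP side.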

Cesar et al. propose an extension to the television-watching paradigm that permits an end-user to enrich broadcast content. In this way, the viewer takes an active role with direct control over content consumption, creation, and sharing. A key difference with the web paradigm is that the ITV user remains a viewer who participates in an ongoing process of incremental content editing [KKV02], for example virtual edits that allow the order of presentation to be changed. Nevertheless, they point out that, outside desktop and web computing, there are few technical solutions to support end-user content enrichment for digital TV. They have coined the term 'authoring from the sofa' to define this paradigm. 'Authoring from the sofa' includes three activities: intra-program selection (selection of the content to be enriched), enrichment authoring (the content enrichment process), and sharing (post-enrichment distribution).
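
Since the enrichment layer is SMIL-based, a minimal example of such an intermediate document can be generated with standard tools. The element structure below is plain SMIL; the exact schema used by Cesar et al. may well differ:

```python
# Sketch of the enrichment-layer idea: the broadcast asset stays untouched,
# and the user's 'virtual edits' live in a separate SMIL document that only
# points into the original video by time range.
import xml.etree.ElementTree as ET

def enrichment_document(video_uri, clips):
    """Build a SMIL <seq> that plays selected regions of the original video."""
    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    seq = ET.SubElement(body, "seq")
    for begin, end in clips:  # a virtual edit: reorder without re-encoding
        ET.SubElement(seq, "video", src=video_uri,
                      clipBegin=f"{begin}s", clipEnd=f"{end}s")
    return ET.tostring(smil, encoding="unicode")

# Play seconds 30-45 first, then the opening 10 seconds of the same asset.
doc = enrichment_document("broadcast.mpg", [(30, 45), (0, 10)])
```

Because the document is pure metadata, it is cheap to store on a set-top box and cheap to share: the receiving peer only needs the same broadcast asset to replay the enriched version.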

An audiovisual landscape that consists of hybrid distribution mechanisms (broadcast, internet streaming) and diverse user terminals raises the issue of personalization versus the need to keep TV a shared-experience medium. Wages et al. propose an interactive broadcasting framework that allows personalization of broadcast live events. Their approach is to offer unedited camera streams and at the same time provide the prerequisites for the establishment of 'virtual personalized channels' [Cho04]. They propose that the viewer could either follow the default version or navigate live media events. The viewer could either 'lean back' on the sofa to be guided by the Video Composer, or 'lean forward' and take control over the navigation through the video streams (Video Conductor).
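
The lean-back versus lean-forward choice can be reduced to a per-slot stream selection, sketched below. The stream names and data shapes are my own illustration, not the authors' format:

```python
# Hedged sketch of a 'virtual personalised channel': several unedited camera
# streams are broadcast together with the default cut; a lean-back viewer
# follows the default, while a lean-forward viewer overrides individual slots.

def play_out(default_cut, override=None):
    """Pick one camera per time slot: the viewer's choice, else the default."""
    override = override or {}
    return [override.get(slot, camera)
            for slot, camera in enumerate(default_cut)]

default_cut = ["cam1", "cam1", "cam3", "cam2"]    # the live conductor's edit
lean_back = play_out(default_cut)                 # viewer just watches
lean_forward = play_out(default_cut, {2: "cam4"}) # viewer grabs one slot
```

The same structure keeps TV a shared experience: every viewer receives the identical set of streams and the identical default cut, and personalization happens only at the selection step in the terminal.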


Table 2.  Research issues addressed by the respective papers in the special section

Paper                  Research Issue
Bimber et al.          Digital light projection
Cesar et al.           Multimedia standards for TV content annotation
Olaizola et al.        Video mixing of real and virtual scenes at the STB
Pastra and Piperidis   Video search for end-users
Rauterberg et al.      Automatic scaling of UI
Redondo et al.         Interoperability between MHP and OSGi
Tsekleves and Cosmas   Cross media application adaptation
Wages et al.           Personalization of media streams

3.3.  Methodology

Bimber et al. employ video projectors for digital illumination in TV studios. Video projectors allow a spatial and temporal modulation of light that can be computer controlled and synchronized with the recording process of studio cameras. The key concept in Bimber et al. is that multiple projectors are used, either alone or in combination with analog light sources, for illuminating the entire TV studio environment or parts of it. They explain that, spatially, projectors represent point light sources. Their capability of spatially modulating the light allows for creating almost arbitrary shading effects synthetically. Besides spatial modulation, a temporal modulation of the projected light enables displaying different portions of the illumination time-sequentially. Bimber et al. acknowledge that there are alternative approaches, but the proposed digital light projection offers the potential of a unified solution for many practical problems in TV studios.

Olaizola et al. observe that AR systems demand increased hardware capabilities, which make reception and rendering difficult on commercial set-top boxes. For this reason, they implemented the components of the 3D sub-system on a PC-based system, where these limitations were overcome by employing widely available Java 3D libraries. These libraries are not part of the MHP specification, although the authors suggest that they could be easily added to the standard in the future.

Pastra and Piperidis describe distribution and device technologies that go beyond traditional broadcast and TV, and in particular the implications for video search services. In addition, they present the characteristics of video search services and those of the more advanced video-search techniques developed in state-of-the-art prototypes. Moreover, they discuss the effects of shifting business models from centralized digital libraries to distributed digital video storage.

In the context of dynamic UI scaling between diverse end-user terminals, the approach of Rauterberg et al. is to start by analyzing a UI design done by industrial designers. In doing so, they address one of the major issues in UI research and practice: how to make a deliberate design move from UI specification to software implementation. In particular, two studies are performed in order to bridge the gap between the interaction designers of a TV-based UI and the software engineers who need to convert this design into a functional UI. The methodological research question was which annotation method industrial designers would prefer and whether it could satisfy the technical requirements of the software engineering process.

In addition to UI scaling, there is a need for application adaptation to different terminals. Instead of developing the same application over and over, and instead of employing specialized tools, Tsekleves and Cosmas propose to exploit the familiarity of multimedia authors with the popular tool Macromedia Director. The translation from a high-fidelity Macromedia Director prototype to an ITV application could be performed with the use of metadata. Tsekleves and Cosmas first examined a number of possible metadata languages for describing an ITV application. Then, they provide an in-depth description of the operation of two tools, in order to offer insight into how they can be used to simplify and streamline the process of creating digital TV applications for mobile terminals. Finally, they present case studies of converged broadcast and telecommunication service components.

In order to overcome the different development mentalities of OSGi and MHP, Redondo et al. introduce an abstraction layer that takes care of the interoperability between MHP and OSGi. Their concept supports MHP-OSGi communication in both directions: 1) an MHP application can use OSGi services, and 2) an OSGi device can take advantage of MHP functionality. In their current work, they focus on the mapping from MHP to OSGi and describe the implementation experience.

Cesar et al. propose an architecture, which is based on a model that allows the original content to remain unaltered. In particular, they define an intermediate content enhancement layer that is based on the W3C's SMIL language. Using a pen-based enhancement interface, end-users can manipulate content and save the annotations in a digital set-top box. They also describe scenarios of use, provide examples of how the system handles content enhancement, and outline a reference implementation for creating and viewing enhancements.

Wages et al. propose the enhancement of audiovisual assets with semantic metadata, which could transform the media distribution value chain. For example, instead of broadcasting just a single edited video stream chosen by a producer, several camera streams are broadcast. Complementing the work by Cesar et al., they suggest transforming the one-way production into a loop by feeding end-user annotations back to the on-site producers. Finally, they provide a preliminary example of the proposed interactive broadcasting methodology in the case of a documentary drama.


Table 3.  Methodology employed to approach the research issues for the respective papers in the special section

Paper                  Methodology
Bimber et al.          Spatial and temporal light modulation
Cesar et al.           Scenario, system architecture
Olaizola et al.        Set-top box middleware requirements
Pastra and Piperidis   Business model analysis
Rauterberg et al.      Case study with industrial designers
Redondo et al.         Scenario, system architecture
Tsekleves and Cosmas   Metadata for dynamic application scaling
Wages et al.           Semantic metadata for content streams

4.  Research Agenda

The articles in the special section provide a broad overview of the research in interactive broadcasting. In addition, the articles provide a good balance between the technical and the creative aspects of this research field. Indeed, ITV systems have raised the issue of a closer integration between art and technology. One possible explanation for the delayed introduction of interactivity in TV broadcasts is the lack of co-operation between the creative professions (TV producers) and the technical ones (engineers). As a remedy, the articles in the special section provide an integrated view of the multidisciplinary challenges in interactive broadcasting. Still, there is need for further research in several of the issues addressed here (Table 4).


Table 4.  Directions for further research in the field of interactive broadcasting

Research Area        Further Research
Augmented reality    UI toolkit; control of augmented TV channel at user terminal; novel TV formats and genres
Cross media          Ubiquitous video search; ITV content distribution in a home network; UI scaling; application portability; asset management and metadata
Content enrichment   Social annotations; sharing

In brief, in AR ITV, there is a need for high-level UI toolkits that support the production process and allow the control of the final scene (a real scene integrated with a virtual one) at the end-user terminal. The availability of ITV producer-friendly AR platforms, lighting techniques, and authoring tools could motivate the development of novel TV formats and genres. Further research in cross media should consider video search from diverse user terminals through hybrid distribution networks, and how to make the end-user experience seamless across the different access methods. Moreover, there is a need for audiovisual content interoperability across the diverse distribution networks that reach the home. In addition, cross media developers should provide methods and tools for the automatic translation of ITV applications between heterogeneous devices and networks. Finally, interactive broadcasting research has to emphasize the importance of the end-user as part of the content production process and empower viewers with easy-to-use content editing tools. Overall, the interactive broadcasting applications and methods presented in this article frame an emerging area for research and practice.

Bibliography

[Ada92] Paul C. Adams: Television as Gathering Place. Annals of the Association of American Geographers 82 (1992), no. 1, 117–135.

[CBG00] Mike Craven, Steve Benford, Chris Greenhalgh, John Wyver, Claire-Janine Brazier, Amanda Oldroyd, and Tim Regan: Ages of avatar: community building for inhabited television. Proceedings of the Third International Conference on Collaborative Virtual Environments (CVE'00), 2000, pp. 189–194, ISBN 1-58113-303-0.

[CS04] Konstantinos Chorianopoulos and Diomidis Spinellis: User interface development for interactive television: extending a commercial DTV platform to the virtual channel API. Computers and Graphics 28 (2004), no. 2, 157–166, ISSN 0097-8493.

[Fur96] Borko Furht: Interactive television systems. Proceedings of the 1996 ACM Symposium on Applied Computing, 1996, pp. 7–11, ISBN 0-89791-820-7.

[FWI98] Borko Furht, Raymond Westwater, and Jeffrey Ice: Multimedia Broadcasting over the Internet: Part I. IEEE MultiMedia 5 (1998), no. 4, 78–82, ISSN 1070-986X.

[FWI99] Borko Furht, Raymond Westwater, and Jeffrey Ice: Multimedia Broadcasting over the Internet: Part II - Video Compression. IEEE MultiMedia 6 (1999), no. 1, 85–89, ISSN 1070-986X.

[KKV02] Clare-Marie Karat, John Karat, John Vergo, Claudio S. Pinhanez, Doug Riecken, and Thomas Cofino: That's Entertainment! Designing Streaming, Multimedia Web Experiences. International Journal of Human-Computer Interaction 14 (2002), no. 3-4, 369–384, ISSN 1532-7590.

[Pos01] Ronald Pose: Steerable Interactive Television: Virtual Reality Technology Changes User Interfaces of Viewers and of Program Producers. Second Australasian User Interface Conference (AUIC'01), 2001, pp. 77–84, ISBN 0-7695-0969-X.

[Spi03] Diomidis Spinellis: Cross-media service delivery. Volume 740 of The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, Boston, MA, 2003, ISBN 1-402-07480-8.

[Vor00] P. Vorderer: Media entertainment: the psychology of its appeal. Interactive entertainment and beyond, pp. 21–36, Erlbaum, London, UK, 2000, ISBN 0-8058-3324-2.

[WCGH99] Howard Wactlar, Mike Christel, Yihong Gong, and Alex Hauptmann: Lessons Learned from Building a Terabyte Digital Video Library. IEEE Computer 32 (1999), no. 2, 66–73, ISSN 0018-9162.

License

Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.
