Implementing Mobile TV, Second Edition: ATSC Mobile DTV, MediaFLO, DVB-H/SH, DMB, WiMAX, 3G Systems, and Rich Media Applications (Focal Press Media Technology Professional Series)





Implementing Mobile TV
ATSC Mobile DTV, MediaFLO, DVB-H/SH, DMB, WiMAX, 3G Systems, and Rich Media Applications

Amitabh Kumar

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Focal Press is an imprint of Elsevier

Focal Press is an imprint of Elsevier
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK

© 2010 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies, and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

All information presented in this book is based on the best efforts of the author and is believed to be accurate at the time of writing. It should be recognized that mobile TV is still an emerging technology, and many facets of the technology, including standards, regulatory treatment, spectrum, and applications, may undergo changes. Neither the author nor the publisher makes any warranty of any kind, expressed or implied, with regard to the accuracy or completeness of the information contained herein, or the documentation or intended uses of any product or service described herein. Neither the author nor the publisher shall be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing or use of this information in any manner whatsoever.

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

Library of Congress Cataloging-in-Publication Data
Application submitted

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

ISBN: 978-0-240-81287-8

For information on all Focal Press publications visit our website at www.elsevierdirect.com

10 11 12 13    5 4 3 2 1

Printed in the United States of America

This book is dedicated to my father.

I hope that posterity will judge me kindly, not only as to the things which I have explained, but also as to those which I have intentionally omitted so as to leave to others the pleasure of discovery.
René Descartes, La Géométrie (1637)


Contents

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents.
H. P. Lovecraft, "The Call of Cthulhu" (1926)

Mobile TV: A Prologue
Introduction to the Second Edition

Part I: Overview of Technologies

Chapter 1: About Mobile TV
1.1 The Beginning
1.2 Mobile TV: A New Reality
1.3 What Else Is Different in Mobile TV?
1.4 Standards for Mobile TV
1.5 New Growth Areas with Mobile TV
1.6 What Type of Opportunity Does Mobile TV Present?
1.7 What Handset Types Does Mobile TV Work On?
1.8 Is Mobile TV Really Important?

Chapter 2: Introduction to Digital Multimedia
2.1 Introduction
2.2 Picture
2.3 Image Compression
2.4 Video
2.5 Analog TV Signal Formats
2.6 Digital TV Formats
2.7 Video Bit Rate Reduction
2.8 Compression Standards
2.9 The AVS-M Video Coding Standard (China)
2.10 Video Files
2.11 File Containers and Wrappers
2.12 Audio Coding
2.13 Audio Compression
2.14 Streaming
2.15 Streaming Players and Servers
2.16 Summary and File Formats

Chapter 3: Introduction to Streaming and Mobile Multimedia
3.1 What Is Mobile Multimedia?
3.2 How Do Mobile Devices Access Multimedia?
3.3 File Formats for Mobile Multimedia
3.4 3GPP Mobile Media Formats
3.5 Internet Video
3.6 Flash Lite™
3.7 DivX Mobile
3.8 Rich Media: Synchronized Multimedia Integration Language (SMIL)
3.9 Delivering Multimedia Content
3.10 Graphics and Animations in the Mobile Environment
3.11 Mobile Multimedia Applications
3.12 Summary of File Formats Used in Mobile Multimedia

Chapter 4: Overview of Cellular Mobile Networks
4.1 Introduction
4.2 Cellular Mobile Services: A Brief History
4.3 CDMA Technologies
4.4 3G Networks
4.5 3G Technologies: CDMA and GSM
4.6 4G Technologies
4.7 Data and Multimedia over Mobile Networks
4.8 Multimedia and Data over 3G Networks
4.9 Mobile Networks: A Few Country-Specific Examples

Chapter 5: Overview of Technologies for Mobile TV
5.1 Why New Technologies for Mobile TV?
5.2 What Does a Mobile TV Service Require?
5.3 Mobile TV Using 3G Technologies
5.4 Terrestrial TV Technology Overview
5.5 Mobile TV Using Terrestrial Broadcasting Networks
5.6 Comparison of Mobile TV Services
5.7 Outlook for Mobile TV Services

Part II: Technologies for Mobile TV and Multimedia Broadcasting

Chapter 6: Mobile TV Using 3G Technologies
6.1 Introduction
6.2 The Beginning: Streaming on Mobile Devices
6.3 Overview of Cellular Network Capabilities for Carrying Mobile TV
6.4 Understanding a 3G Streaming Service
6.5 Mobile TV Streaming Using 3GPP Standards: Packet-Switched Streaming Service
6.6 Broadcasting to 3GPP Networks
6.7 Examples of Streaming Platforms
6.8 Practical Implementation of Video Services over 3G Networks
6.9 Operator-Specific Issues in 3GPP Streaming Services
6.10 Multimedia Broadcast and Multicast Service (MBMS)
6.11 Mobile TV Services Based on CDMA Networks
6.12 Other Multimedia Services over 3G Networks
6.13 Wi-Fi Mobile TV Delivery Extensions

Chapter 7: Mobile TV Services in the ATSC Framework
7.1 Introduction: Digital Broadcasting to Handhelds and Mobile Devices
7.2 Why ATSC Mobile DTV?
7.3 The Open Mobile Video Coalition (OMVC)
7.4 Technology of ATSC Mobile DTV
7.5 The ATSC Mobile DTV Standard
7.6 ATSC Frame Structure with Mobile Channels
7.7 Content Types, Encoding, and Capacity
7.8 Multiplexing of M/H Channels
7.9 Upgrading Transmitters for Mobile Services
7.10 ATSC Mobile DTV Transmission
7.11 ATSC Transmitter Networks
7.12 Receivers and Handheld Units
7.13 Data Transmission on ATSC Mobile DTV
7.14 Electronic Service Guide (ESG)
7.15 ATSC Mobile DTV Pilot Projects and Commercial Launches
7.16 Example of an ATSC Mobile DTV Transmission System for Mobile TV

Chapter 8: Mobile TV Using DVB-H Technologies
8.1 Introduction: Digital Video Broadcasting to Handhelds
8.2 Why DVB-H?
8.3 How Does DVB-H Work?
8.4 Technology of DVB-H
8.5 DVB-H Higher Layer Protocols
8.6 Network Architecture
8.7 DVB-H Transmission
8.8 Transmitter Networks
8.9 Terminals and Handheld Units
8.10 DVB-H Implementation Profiles
8.11 Electronic Service Guide in DVB-H
8.12 Content Security
8.13 DVB-H Commercial Services
8.14 Example of a DVB-H Transmission System for Mobile TV

Chapter 9: Mobile TV Using DVB-SH Technologies
9.1 Satellite Mobile TV with a Terrestrial Component
9.2 The DVB-SH Standard
9.3 Characteristics of Satellites for Mobile Broadcasting
9.4 Ground Transmitters for DVB-SH
9.5 Receiver Characteristics
9.6 The ICO DVB-SH System (MIM)
9.7 DVB-SH System for Europe
9.8 Future Systems Using DVB-SH Technology

Chapter 10: DMB and China Multimedia Mobile Broadcasting (CMMB)
10.1 Introduction to DMB Services
10.2 A Brief Overview of DAB Services
10.3 How Is the DAB Structure Modified for DMB Services?
10.4 Satellite and Terrestrial DMB Services
10.5 DMB Services in Korea
10.6 DMB Services Ground Segment
10.7 S-DMB System Specifications
10.8 DMB Trials and Service Launches
10.9 China Multimedia Mobile Broadcasting (CMMB)
10.10 The DTMB Standard

Chapter 11: Mobile TV Using MediaFLO™ Technology
11.1 Introduction to MediaFLO
11.2 How Does MediaFLO Work?
11.3 MediaFLO Technology Overview
11.4 System Capacities and Content Types
11.5 MediaFLO Transmission
11.6 MediaFLO Transmitter Networks
11.7 Terminals and Handheld Units
11.8 MediaFLO Electronic Service Guide
11.9 MediaFLO Commercial Networks
11.10 Example of a MediaFLO System for Mobile TV: Verizon Wireless

Chapter 12: Mobile TV Using WiMAX
12.1 A Brief Overview of WiMAX Technology
12.2 Why Is Mobile WiMAX Suited for Mobile TV?
12.3 WiMAX-Based Mobile TV Basics
12.4 WiMAX Devices and Handsets
12.5 Examples of Mobile TV Services Based on WiMAX

Chapter 13: Spectrum for Mobile TV Services
13.1 Introduction
13.2 An Overview of Spectrum Bands
13.3 Mobile TV Spectrum
13.4 Country-Specific Allocation and Policies
13.5 Spectrum for MediaFLO Services
13.6 Spectrum Allocation for Wireless Broadband Services

Part III: Multimedia Handsets and Related Technologies

Chapter 14: Chipsets for Mobile TV and Multimedia Applications
14.1 Introduction: Multimedia Mobile Phone Functionalities
14.2 Functional Requirements of Mobile TV Chipsets
14.3 Chipsets and Reference Designs
14.4 Chipsets for ATSC Mobile DTV
14.5 Chipsets for 3G Mobile TV
14.6 Chipsets for DVB-H Technologies
14.7 Eureka 147 DAB Chipset
14.8 Chipsets for DMB Technologies
14.9 Industry Trends
14.10 Outlook for Advanced Chipsets

Chapter 15: Operating Systems and Software for Mobile TV and Multimedia Phones
15.1 Do I Need to Worry About the Software Structure on Mobile Phones?
15.2 Application Clients
15.3 An Introduction to the Software Structure on Mobile Phones
15.4 Common Operating Systems for Mobile Devices
15.5 Middleware in Mobile Phones
15.6 Application Software Functionalities for Mobile Multimedia
15.7 Applications for Mobile Phones

Chapter 16: Handsets for Mobile TV and Multimedia Services
16.1 Introduction: Do You Have a Target Audience Out There?
16.2 Mobile Receiver Devices
16.3 Handset Features for a Rich Multimedia Experience
16.4 Handsets for 3G Services
16.5 Handsets for Terrestrial Broadcast Services
16.6 Handsets for Satellite Technologies with a Terrestrial Component
16.7 Handsets for CMMB
16.8 Phones for WiMAX and WiBro Technologies
16.9 Portable Navigation Devices (PNDs)
16.10 Can Handsets Be Upgraded with the Latest Technology?
16.11 Summary

Chapter 17: Mobile TV and Multimedia Services Interoperability
17.1 Introduction
17.2 Organizations for Advancement of Interoperability in Mobile TV
17.3 Interoperability in Mobile TV
17.4 Interoperability in Terrestrial Mobile TV Networks
17.5 Interoperability in 3G-Based Mobile TV Services
17.6 Interoperability in Mobile TV Provided via the Internet: IP Networks
17.7 Interoperability of Multimedia Services
17.8 Summary

Part IV: Content and Services on Mobile TV and Multimedia Networks

Chapter 18: Mobile TV and Multimedia Services Worldwide
18.1 Introduction
18.2 China
18.3 Japan
18.4 Germany
18.5 Italy
18.6 Netherlands
18.7 The United States
18.8 Hong Kong
18.9 India
18.10 Summary

Chapter 19: Content and Revenue Models for Mobile TV
19.1 Introduction
19.2 Mobile TV Content
19.3 Interactive Services
19.4 Delivery Platforms
19.5 Preparing Content for Mobile Delivery
19.6 Content Authoring Tools
19.7 Mobile TV as a Business Proposition
19.8 Summary: Focus on Content Development and Delivery Platforms

Chapter 20: Interactivity and Mobile TV
20.1 Introduction: Why Interactivity in Broadcast Mobile TV?
20.2 Making Mobile TV Interactive
20.3 3G Networks
20.4 Broadcast Networks and Interactivity
20.5 Summary

Chapter 21: Content Security for Mobile TV
21.1 Introduction: Pay TV Content Security
21.2 Security in Mobile Broadcast Networks
21.3 Conditional Access Systems for Mobile TV
21.4 Examples of Mobile CA Systems
21.5 Digital Rights Management (DRM) and OMA
21.6 Content Security and Mobile TV Standards
21.7 Multimedia Applications: High-Capacity SIMs and Removable Media
21.8 Examples of Mobile Broadcast Content Security
21.9 Models for Selection of Content Security

Chapter 22: Mobile TV: The Future
22.1 Some Initial Happenings in the Industry
22.2 Where Does Mobile TV Stand Today?
22.3 Challenges for Mobile TV and Multimedia Services in the Future
22.4 Leading Indicators for Growth in Mobile TV Services
22.5 Summary

Glossary
Index


Mobile TV: A Prologue

The economists are generally right in their predictions, but generally a good deal out in their dates.
Sidney Webb, The Observer, "Sayings of the Week," February 25, 1924

When mobile TV was first launched in 2005, it was perceived as one of the most important happenings that would shape the mobile industry in the coming years. But events were to prove otherwise to the disappointment, and to an extent, the surprise of a very large industry. In fact the situation in 2008 was such that many virtually wrote off mobile TV. It was only in 2009 that a dramatic turnaround in fortunes began, with mobile TV in 2010 set to reach a critical mass for a very large ecosystem of viewers, operators, handset and chip manufacturers and software developers. The reasons in hindsight are not difficult to understand, and it is also not that the industry did not valiantly struggle to overcome these. The problem is that there were too many issues. First was the issue of mobile operators and broadcasters going different ways in leveraging their own networks to provide mobile TV. This led to the use of 3G unicast streaming by mobile operators and terrestrial transmission by the broadcasters based on replication of TV programs with little or no interactivity and a handful of receivers available that could actually receive them. Second was the use and multiple standards that split networks even within the same country, as was the case in Germany with DVB-H and DMB networks, both of which eventually closed down. In addition, the regulators were not helpful with spectrum issues, which held up launches in large parts of Europe and Asia. Third, the operators did not seem to get the model right. They attempted to offer the service as pay TV, which restricted the market and the handsets available. This is evident from the success of free to air DMB-T services in Korea and ISDB-T in Japan. Korea had over 20 million users of its free ISDB-T service, while Japan had over 60 million phones sold that had tuners for its 1-Seg ISDB-T services, which are aired free. A majority of multimedia handsets in these markets now come with the mobile TV tuners and decoders built in. 
In contrast, the users of pay mobile TV in any market did not reach even a fraction of this number. The only exceptions were the 3G-based services such as MobiTV (over 6 million customers), which do not need special handsets. However, even these networks did not make a breakthrough, as operators in most markets levied high data-usage charges for bandwidth, which was at a premium.


Mobile TV-A Prologue

The 3G quality was also restricted for various reasons, such as low encoding resolution, the usage environment, and the limitations of unicast streaming. It was not a surprise that the initial years left bruised operators and foreclosed networks, even while the major product vendors touted successful trials in each country. In the United States, the broadcast systems based on ATSC DTV had no mobile extension until as late as 2009. The initial launches of DVB-H by Modeo and Hi-Wire were closed down, as it was impractical to build entirely new infrastructure. MediaFLO, which operated on its own spectrum and provided services through AT&T and Verizon Wireless, also garnered less than half a million users in the first year of its launch, due to the requirement of a separate FLO-enabled handset and the availability of the service in limited markets only. The situation changed only in 2009, when additional spectrum became available after the digital transition. The success story of AT&T was being written with the iPhone, a device that did not support mobile TV. Mobile TV was not a priority with the major operators: AT&T, Verizon Wireless, or T-Mobile. In Europe, the European Union (EU) took the bold step of declaring DVB-H the standard to be followed across Europe. Despite this apparent advantage, mobile TV continued to face heavy challenges. In Germany, DVB-H met the same fate as in the United States, with the operator "3" returning its DVB-H license to the regulator. In the United Kingdom, no spectrum was made available for DVB-H, while in France and Spain, commercial launches were delayed. With the exception of Italy, the pioneer of mobile TV in Europe, no country could get even a million users, with pay mobile TV offerings requiring special handsets and conditional access systems. The users could opt for either a substandard phone that offered mobile TV or one that burnt a hole in their pockets.
Phones already in use by large segments of customers stayed outside the domain addressed by the mobile operators. The model of set-top boxes as applied to mobile TV was not working. In Asia, China and India were delayed in their regulatory processes, which would have enabled the provision of mobile TV to large communities. Smaller countries did launch mobile TV, but these launches were prodded by the vendors and looked more like "me-too" efforts than successful mobile TV offerings. China came out of the time warp only in 2009, with SARFT driving terrestrial mobile TV with the CMMB standard. In order to address the split markets, new operators ventured forth with satellite-based mobile TV. In 2008, it appeared to be a panacea for all the ills of mobile TV. China, going into the 2008 Olympics, had signed a deal with CMBsat, a subsidiary of EchoStar, for a high-powered S-band satellite providing services over China. However, its regulators failed to give the necessary permissions for the satellite to be placed in orbit. On April 18, 2008, the ICO G1 satellite was launched, all set to provide mobile TV services for the U.S. market. In early 2009, the W2A satellite was launched to provide high-powered DVB-SH mobile TV services for Europe through Solaris, which had won the license. However, all was to go
wrong with this industry as early as 2009. The CMBsat satellite was delayed, while the W2A mobile broadcasting payload failed after its launch in early 2009. By May 2009, ICO North America had filed for bankruptcy under Chapter 11, despite having an operational satellite in orbit and an operational network on the ground. The successes of Japan and Korea, again, appeared not to be repeatable elsewhere. The quest for business models was unending. No single model, whether subscription, advertising, or sponsored content, seemed to work, as there were too few handsets except in Korea and Japan. Mobile networks did embrace multimedia, but in ways that were not predicted by analysts and research reports. Mobile devices came with such large memories (upwards of 16 GB) that a connection to online music services was unnecessary. On-device storage of videos and music became the norm. Where video was concerned, it was YouTube and Google Video that emerged as the winners, apart from social networking sites. But in an industry with more than 4 billion mobile users, the initial fallacies in embarking on mobile TV were quickly understood. ATSC has now come out with its mobile handheld standard, ATSC Mobile DTV (formerly ATSC M/H), which can enable thousands of transmitters across the United States, at a relatively low cost, to broadcast simultaneously to mobile phones. Despite apparently different mobile TV standards, the underlying technologies have converged to a set of uniform standards, such as IP Datacasting (IPDC), the Open Mobile Alliance's Electronic Service Guide (ESG), smartcard profiles (SCP) for content protection, and multistandard universal chipsets that can tune in to any type of transmission. After a dawn-to-dusk cycle, the sun is again rising on the horizon for mobile TV, and with renewed intensity. The use of video content on mobile phones is entering a new phase, with customers increasingly wanting video access on their mobile phones.
The number of 3G users has ballooned, as have the smartphones needed for multimedia. Equipment vendors now make multistandard transmission equipment as well as receivers, so the diversity of standards is not such a major issue at the end of the day. Spectrum has begun to become available after WRC-07 and the digital transition, which was completed in 2009. The launch of CMMB in China has led to a massive uptake of mobile TV. According to an In-Stat report on China[1] released in 2006, the number of mobile TV users in China was predicted to grow at a compound annual growth rate of over 315% in the following five years. It is now estimated that by 2012, more than 20% of users will be using mobile TV. The scales will be tilted by the increasing use of free-to-air broadcast networks, including ATSC Mobile DTV in the United States, and the spread of mobile TV to user communities in China and India. There are likely to be four major streams for the growth of mobile TV. The first will continue to be the mobile operators, where improved quality will be offered through upgrades to 3GPP standards and the use of MBMS. These operators will also embrace LTE by 2012. The second stream remains that of broadcasters, which are scaling up operations as spectrum and standards issues get resolved. The third stream is that of wireless broadband, including mobile WiMAX, a technology that has weathered many a storm and is now here to stay, with more than 500,000 users being added per quarter, and the "broadband for all" plans on the horizon in the United States. The fourth category of providers is that of satellite-based mobile TV providers with a terrestrial component. This book is a second journey into the exciting world of mobile TV and multimedia, with new operators, technologies, and business models.

[1] Mobile TV in China, Anty Zheng, Research Director, In-Stat China (http://www.instat.com.cn/index.php/archives/672)

Introduction to the Second Edition

The trouble with doing something right the first time is that nobody appreciates how difficult it was.
Walt West

This book is exclusively dedicated to mobile TV, the killer application of the twenty-first century, riding on the success of 3G mobile networks, the transition to digital TV, and wireless broadband. A lot has changed since mobile TV initially appeared in 2005. 3G networks have achieved a critical mass of over 500 million users. There have been breakthroughs in terrestrial broadcasting of mobile TV across countries, potentially addressing a billion additional users in 2010 alone. Today it presents an opportunity that is unparalleled in history: an opportunity for service providers, content producers, application developers, handset vendors, and users alike to target high-revenue-generating applications. This revised edition is about the new opportunity. It provides a comprehensive overview of the entire landscape, answers all your questions, and provides all the tools you need to be a meaningful player in the new markets.

About This Book

Even though mobile TV is slated to grow exponentially in the very near future, concise information on the subject remains scattered. It is true that many of the technologies have only recently emerged from trials, but the basic bedrock on which such services will be based is now firmly in place. Hardly a week passes today without the announcement of a new commercial launch of mobile TV somewhere in the world. The standards for the services have the status of recommendations of the ATSC, DVB, ETSI, ITU, and 3GPP. The implementation is swift and multifronted, in the form of the technology itself as well as every other form: handsets, applications, chipsets, software, operating systems, spectrum, transmission technologies, and even content writing for mobile TV. The book provides a comprehensive introduction to the technological framework in which such services are being provided, with extensive clarity on how one type of service, for example, a mobile TV service based on 3G (MobiTV™, AT&T®), differs from a DMB service in Korea, CMMB in China, or ISDB-T in Japan. Will it be possible to use one handset for
all these services? What types of services can be expected on mobile networks? What are the techniques used for digital rights management on these networks? What spectrum will they use? What limitations do they have? What quality of viewing can they offer? What type of content will make such networks work, and how will it make money? Mobile multimedia has brought about a profound change in the industry. Handsets are now designed to deliver multimedia rather than just voice. They support large 3-inch WVGA screens, stereo speakers, A2DP Bluetooth, media players, and 16 GB flash memories. Their software is empowered to deliver content tailored for cellphones, with rich animations. It is a different world, with smaller screens and lower data rates to carry the information, but a much more challenging delivery environment. It deals with media formats, players, and browsers that are unique to the mobile domain. It also deals with technologies that not only deliver content but also provide mechanisms for its payment and for user interactivity. The growth of mobile TV brings challenges for everyone. Users now have a very powerful device in their hands that can do much more than connect calls or play music. Are they ready to use such services? The operators are aggressively launching services. Are the content providers ready for them? Is the content secure? What type of advertising will work on such networks? What are the technology options for operators, service providers, and customers? Are the regulatory authorities ready to enable the environment for mobile TV? What spectrum will be available for such services? What are the limitations of services based on each individual technology? The book addresses all these questions.

About the Second Edition

The technology and markets for mobile TV have changed dramatically in the very recent past. In July 2009, the ATSC Mobile DTV transmitters went on the air, signifying a new era in the United States, where most local stations will have a mobile simulcast based on the newly recognized ATSC Mobile DTV standards. CMMB, a mobile TV standard for China, had spread to about 200 cities by the end of 2009, and 3G is now enabled in China and India. MediaFLO technology has had a new lease on life, with additional spectrum having been released in the United States with the DTV transition, and with its recognition as an approved technology for mobile TV in Japan, the largest mobile TV market in the world and a bastion of ISDB technologies. This revised second edition is a completely rewritten volume that updates technologies, services, and media formats and presents all information in a practical framework. Four new chapters have been added on ATSC Mobile DTV, MediaFLO technologies, WiMAX, and DVB-SH, while information on others, such as CMMB, has also been added in detail. The book is divided into four parts:

Part I: Overview of Technologies
Part II: Technologies for Mobile TV and Multimedia Broadcasting


Part III: Multimedia Handsets and Related Technologies
Part IV: Content and Services on Mobile TV and Multimedia Networks

Part I begins by laying down the fundamentals that go into mobile multimedia networks, such as those that deliver mobile TV. Though digital multimedia is discussed in brief, the key focus is on mobile multimedia. Part I also gives an overview of mobile networks worldwide, as well as an overview of technologies for mobile TV. The need to carry mobile TV and rich media applications has led to 3G networks evolving rapidly to add higher data-carrying capabilities with HSDPA, EV-DO, and LTE. This book seeks to piece together the technologies of video, audio, data, and networks that make mobile TV possible, and presents an integrated view of the interfaces, services, and applications that will frontline the developments of mobile TV in the coming years. These are discussed in the chapters "Overview of Mobile Networks" (Chapter 4) and "Overview of Technologies for Mobile TV" (Chapter 5). In Part II, the book discusses each of the mobile TV technologies in detail, including those based on 3G, ATSC Mobile DTV, MediaFLO, DMB and CMMB, DVB-H, and WiMAX, with one chapter devoted to each service. The technology-specific chapters dwell on all aspects of the services, ranging from standards, protocols, transmission, ESG, and broadcast characteristics to examples of networks where these are implemented. The rollout of mobile TV is also closely linked to the availability of spectrum as a resource. One chapter (Chapter 13) is devoted to spectrum for mobile TV services and the manner of rollout in various countries. This chapter presents the information in a holistic manner, including the impact of the digital dividend after the digital transition and the WRC-07 harmonized allocations. Interoperability issues between networks, and roaming, have proved to be very important in the past, and will be more so in the future.
Interoperability for mobile TV and multimedia networks is discussed in a separate chapter (Chapter 17). Mobile TV has spawned many new industries, and fast-paced developments are happening in operating systems for mobile devices, application software, chipsets, and the handsets themselves. The industry is aware that past growth has been possible due to increasing volumes and continuously falling prices. The revenues that can be derived from the networks will depend on understanding the optimum multimedia formats and delivery modes, the smartphones and feature phones available in the market, and how they can be addressed. The new handsets and user devices represent frontline developments in every area of technology, ranging from satellite and terrestrial tuners to multimode devices such as portable navigation devices and personal media players. Part III of the book is exclusively dedicated to presenting the new devices and what drives them. We discuss the chipsets, operating systems, and handsets for multimedia in Chapters 14, 15, and 16.


Finally, Part IV of the book, devoted to content, presents a series of interlinked chapters on the content types that can be delivered, along with their preparation tools, user interactivity, and content security. Although mobile TV will undoubtedly have its share of live TV channels, a host of new content best suited for viewing on small screens is already appearing and will be the key to the usage and growth of mobile TV services. The mobile environment needs specifically designed content that is compelling to watch. Content for mobile TV, already a specialized business, will be more so in the coming years. Along with the content, the delivery platforms for such content are equally important. This book discusses the emerging trends and prerequisites in this regard. Mobile networks have emerged as important vehicles for the delivery of content. However, such delivery needs to be secure, and the license holders need to be able to exercise rights over how the content is used. Content security technologies common across the industry, such as OMA BCAST and smartcard profiles, are discussed.

Intended Audience

The book is primarily intended to give a coherent view of the world of mobile TV and multimedia applications on mobile networks. It offers an insight into the maze of technologies, processes, and dimensions involved in providing mobile TV services. The book, while technical, does not contain the formulae or mathematical calculations that go into the design of networks. It has been planned to benefit all those in the mobile industry, such as professionals, engineers, and managers, as well as students and the academic community. The mobile industry directly or indirectly comes into contact with every individual, and extensive work is being done to further the capabilities of the networks. The book is intended to help all those who are in any manner connected with mobile networks and multimedia, as they need a complete picture of what is happening in the field and how they can be a part of the momentum. It helps users, content providers, and operators, as well as those who are planning such services, understand the dimensions of the new medium, which is the best possible integration of communication, broadcasting, and multimedia technologies. An understanding of the basic technologies and related developments in the field prepares the ground for an easy introduction to the complex world of mobile TV, which will be essential for success in the coming years.

How to Read This Book

Any of the four parts of the book can be read independently, with the other parts used as a reference to the technologies or networks in use. However, as mobile TV and multimedia networks are characterized by their own file formats, encoding technologies, and content delivery mechanisms, it is useful to read through the book in sequence if time permits. Readers will find some repetition of content in some chapters, which was
necessary to present the matter in a self-contained format without excessive references to other sections or chapters.

Acknowledgments

The information in a book of this nature is based on the work of numerous standards bodies, industry organizations, and operators who have deployed the technology in their networks. These include the OMA, DVB, ATSC, ETSI, ITU, 3GPP, CDG, GSMA, and many others. I would like to thank Paul Temme, Senior Acquisitions Editor at Focal Press, who not only encouraged me to write this extensively revised edition but also provided valuable suggestions. I would also like to thank Anais Wheeler, who managed the production of the book in the most friendly and efficient manner. Finally, I would like to thank the many readers who provided valuable input after the first edition, which has made the second much more practical and better aligned with readers as well as the industry.

Amitabh Kumar [email protected]


PART 1

Overview of Technologies

Do we really need another way to rot our brains? Yes, yes we do—and live TV on our phones is just the ticket.
Danny Dumas in Gadget Lab, Wired (www.wired.com/gadgetlab/2008/05/review-att-mobi/)

CHAPTER 1

About Mobile TV

Television? No good will come of this device. The word is half Greek and half Latin.
C. P. Scott, journalist (http://en.wikiquote.org/wiki/Television)

Are you one of those who are fascinated by the idea of being able to deliver content to mobile devices? Or by the new mobile Flash Player, which lets you watch amazing streaming videos from thousands of sites? Do you record movies using a Handycam®, then edit them and post them on your website? Have you seen a game of baseball on MobiTV or VCAST? Or are you a content producer, broadcaster, or network operator who is at the other end of the line, feeding content to millions of users? Are you intrigued by P2P networks and the way they deliver video and audio? You have lots in common, then, with many others who are deep into the world of handling audio, video, and pictures on mobile networks. Join me on a practical journey into the realm of mobile TV, which has emerged as the most effective way to deliver high-quality interactive content—and get paid for it. Over 6 million users subscribe to just one mobile TV service (MobiTV). Millions more are connected to other networks, some based on 3G streaming, while others use terrestrial broadcast, much like digital TV for the big screens.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00001-1



1.1 The Beginning

For the first time in the history of the Emmy awards, a new category was created for 2006: for original production of content designed for the new platforms, including PCs and the mobile world of cellphones, PDAs, Palm devices, iPods® and iPhones®, and the platform of mobile TV. Seventy-four entries were received, more than in any other Emmy awards category. The entries included "24 Mobisodes" from leading Hollywood studio 20th Century Fox. In October 2006, the content industry's biggest event, MIPCOM 2006, described mobile TV as the most significant wireless trend for the mobile industry in the coming years. The excitement in the industry was not unfounded, as the events that unfolded were to reveal. Every major international event since has been telecast live on mobile TV, from the Olympics to President Obama's election. However, a real breakthrough had still eluded the industry. It is at the turn of the year 2010 that the long-awaited breakthrough is finally in sight: the transition to 3G networks, which were nascent in 2006 but had grown to over 500 million users at the close of 2009. The first quarter of 2009 added 50 million active 3G subscribers (as reported by Maravedis®), indicating that 3G is now adding at least 200 million users a year. Subscriber numbers will double in the next two years, with countries in Europe and Asia, including India and China, enabling 3G networks and creating a pool of nearly a billion customers with mobile multimedia devices. This is the new audience, not counting millions of terrestrial mobile TV receivers. They are ready to receive mobile TV, multimedia, and advertising, and to generate interactive content. They will buy more than a billion smartphones in the next three years. An equally powerful sequence of events is being staged in the field of terrestrial broadcasting.
The digital transition has finally been completed in the United States, releasing the newly auctioned spectrum to players such as MediaFLO, which has triggered its nationwide rollout. The broadcast industry also got its act together and agreed on different but regionally harmonized standards, such as ATSC Mobile DTV (formerly ATSC M/H) for the United States, DVB-H for Europe (and parts of Asia), DMB for Korea, and ISDB for Japan. China was a surprise: out of multiple standards such as DMB and DTMB, a single implementation of mobile TV, CMMB, rose like a phoenix from the flames of the summer Olympics in 2008. Before 2009 was finished, over 200 cities and provincial markets were live with CMMB. Today China is the fastest-growing market for mobile TV in the world, taking even seasoned industry observers by surprise. India is next, with 400 million users waiting for 3G and a countrywide rollout of mobile TV before the sun rises on the Commonwealth Games in October 2010. There is now no single area where the focus of delivery is greater than that of mobile receivers. These receivers are not mobile phones alone. Far from it: they include standalone receivers, navigation devices, personal media players, and car receivers. The production
of content and applications for the tiny screens of mobile TVs and navigation devices has indeed unleashed the imagination of the industry, with the production of short-form programs and original content designed to be effective even within the limited span of time available for viewing.

1.2 Mobile TV: A New Reality

Mobile TV is now a reality. The technology, though new, has been proven. It is inconceivable that a major future entertainment, sports, news, or other national or international event will not be available on the mobile TV medium. Operators have started gearing up their networks to add mobile TV services, or have rolled out entirely new networks. There are over 4 billion mobile users around the globe, with over 500 million smartphones capable of handling mobile multimedia. The growth in the markets is expected to be exponential, aided by the falling price of handsets and better harmonization of standards. The price of chipsets for mobile TV has already fallen below $10, opening the way for advanced handsets to be inexpensively available. The price points of chipsets such as mobile TV receivers are expected to fall below $5 in the next year.

1.2.1 What is Mobile TV?

Mobile TV is the transmission of TV programs or video to a wide range of wireless devices, from mobile TV–capable cellphones to PDAs and mobile receivers usable in every conceivable mode of transport. The programs can be transmitted in a broadcast mode to every viewer in a coverage area, or be unicast so as to be delivered to a user on demand. They can also be multicast to a group of users. The broadcast transmissions can be via the terrestrial medium, just as analog or digital TV is delivered to our homes, or via high-powered satellites transmitting directly to mobile devices. The transmissions can also use the Internet as the delivery mechanism.

1.2.2 How is Mobile TV Different from Ordinary Terrestrial or Satellite TV?

Mobile phones constitute an entirely different domain. The phones come with screens that are tiny in comparison to a standard TV. They are limited in power consumption, as preserving the battery and talk time is of paramount importance; every component in the cellphone is designed with features that conserve power. The processors in cellphones, though powerful even in comparison to PCs of just a few years back, cannot be harnessed to run complicated encoding or decoding tasks or format and frame rate conversions. Cellphones are connected via 3G cellular networks, which can support high data rates for multimedia but are not designed to handle the 4–5 Mbps needed for a standard-definition TV channel. Hence, though there are cellphones that can receive ordinary TV telecasts, they are not really ideal for such use.
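The bit-rate gap can be put in rough numbers. The figures in this sketch are illustrative assumptions (a 4.5 Mbps SD MPEG-2 channel and a 300 kbps mobile H.264 stream), not values prescribed by any standard:

```python
# Rough bit-rate arithmetic for SD TV vs. mobile TV streams.
# Both rates are assumptions: SD MPEG-2 at the midpoint of the
# 4-5 Mbps range, and a mobile H.264 stream at 300 kbps.

SD_CHANNEL_BPS = 4_500_000     # assumed SD MPEG-2 channel rate
MOBILE_STREAM_BPS = 300_000    # assumed mobile H.264 stream rate

def mobile_streams_per_sd_channel(sd_bps: int = SD_CHANNEL_BPS,
                                  mobile_bps: int = MOBILE_STREAM_BPS) -> int:
    """Number of mobile streams that fit in one SD channel's bit budget."""
    return sd_bps // mobile_bps

print(mobile_streams_per_sd_channel())  # 15
```

On these assumed figures, a dozen or more mobile channels fit in the bit budget of a single standard-definition channel, which is one reason mobile TV multiplexes can carry many programs in modest spectrum.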


Mobile TV is a technology that has been specifically designed to fit the mobile world of limited bandwidth, limited power, and small screens, yet add new features such as interactivity via the cellular network. Taking advantage of the small screen size, mobile TV reduces the number of pixels that need to be transmitted to roughly 1/16 of a standard-definition TV picture. Digital TV today is based on the use of MPEG-2 compression, mainly because this was the best compression available in the 1990s, when widespread cable- and satellite-delivered TV became common. Mobile TV uses more efficient compression algorithms, such as MPEG-4, Flash Lite, or H.264, for compressing video and audio, often with simple visual profiles. Compressing voice efficiently has been the hallmark of cellular networks, using audio coding such as AMR or QCELP. For mobile TV, we need high-fidelity stereo, and audio coding using Advanced Audio Coding (AAC), based on MPEG-2 or MPEG-4, has become the norm. In the 3G world, which is characterized by the need to use bandwidth efficiently to accommodate thousands of users in a cell area, file formats based on cellular industry standards such as 3GPP (Third-Generation Partnership Project) are commonly used. Based on transmission conditions, cellular networks may also reduce the frame rate or render frames with a lower number of bytes per frame. However, reducing the bit rates needed to deliver video is not the only characteristic of mobile TV services. The broadcast technologies have been specially modified to enable the receivers to save power. Terrestrial broadcast mobile TV services, such as DVB-H or ATSC Mobile DTV, use a technique called time slicing, which allows the receiver to switch off power to the tuner for up to 90% of the time while showing uninterrupted video. The transmissions also incorporate robust forward error correction to overcome the highly unpredictable signal reception in mobile environments.
Mobile environments are also characterized by users traveling at high speeds, for example, in cars or trains. Standard terrestrial transmissions based on ATSC (Advanced Television Systems Committee) or even DVB-T (Digital Video Broadcasting–Terrestrial) are not suited to such environments. In DVB-T, this is due to the use of orthogonal frequency division multiplexing (OFDM), where the roughly 8000 carriers used for the modulation appear, owing to Doppler shift at high speeds, to be at different frequencies than intended. For this purpose, a special modulation mode, COFDM with 4K carriers, is used. ATSC, which uses 8-VSB, is subject to severe multipath fading and uses a distributed transmission system (DTS) to overcome these effects. Mobile TV has spawned its own set of standards for terrestrial, satellite, and 3G cellular network delivery.
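Two of the figures above, the roughly 1/16 pixel reduction and the 90% tuner-off time under time slicing, can be checked with simple arithmetic. The resolutions and burst timings below are illustrative assumptions (PAL SD versus a QCIF mobile frame, and a 0.2 s burst every 2 s), not normative values from any standard:

```python
# Arithmetic behind two mobile TV design choices (assumed example figures).

def pixel_ratio(sd=(720, 576), mobile=(176, 144)) -> float:
    """Pixels per frame of an SD (PAL) picture vs. a QCIF mobile frame."""
    return (sd[0] * sd[1]) / (mobile[0] * mobile[1])

def tuner_on_fraction(burst_s: float = 0.2, cycle_s: float = 2.0) -> float:
    """Under time slicing the tuner wakes only for short bursts; this is
    the fraction of time it actually draws power."""
    return burst_s / cycle_s

print(round(pixel_ratio(), 1))   # 16.4 -> roughly 1/16 of the SD pixel count
print(tuner_on_fraction())       # 0.1  -> tuner powered off 90% of the time
```

The second figure is the direct source of the battery savings: a tuner that is on for only a tenth of the time draws, to first order, a tenth of the reception power.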

1.3 What Else is Different in Mobile TV?

Mobile TV is designed to be received by cellphones, which are basically processors with their own operating systems (e.g., Windows Mobile™) and application software packages (e.g., browsers, mail programs). The handsets support animation and graphics software such as Java or Adobe Flash, players such as RealPlayer or Windows Media, and so on. Content providers have been aware of these capabilities and hence have designed content
that takes advantage of the devices on which it will be played. The new content prepared for mobile TV takes advantage of intermixing rich animations, graphics, and video sequences, which play either natively or through software clients on mobile phones. The bandwidth used to deliver a Flash animation file is a fraction of that used to deliver the same length of video. This means that mobile phones, with all their limitations, can indeed display very appealing content and presentation for simple programs such as weather or news. They can also be used to create entirely new interactive services, such as voting, online shopping, chat, or mail, which are delivered with video, music, and animations. Mobile TV programming is delivered with a new interactive electronic service guide (ESG), which makes access to content and its purchase much easier. It needs to be delivered with a user interface (UI) that makes reading on the tiny screens easy and intuitive through the use of widgets or interactive icons. Animation software such as Java or Flash, basically taken from the PC world, is again not ideally suited for the constrained environment of mobile handsets. This has led to the need to adopt implementation profiles that are suited to mobile devices. Java MIDP, Flash Lite™ profiles, and graphics delivered via scalable vector graphics (SVG Tiny, or SVG-T) are the results of marathon standardization efforts across the industry to create a uniform environment for the creation and delivery of content.

1.4 Standards for Mobile TV

Watching mobile TV appears deceptively simple. After all, it is carrying the same programs that were being broadcast anyway. But this simplicity hides a vast trove of technologies and standards that have been developed over time to make the feat of bringing TV to the small 2-inch screens possible. Audio enthusiasts have long been used to handling over 30 types of audio file formats, ranging from simple .wav files to .mpg, Real, QuickTime, Windows Media, and other formats. Video has been available in no fewer than 25 different formats, from uncompressed video to MPEG-4/AVC. Moreover, video can be shown in a wide range of resolutions, frame sizes, and rates. It has been a massive job for the industry to come together and agree on standards to be used as a common platform for delivering mobile TV services. The standards may differ slightly based on technology, but the extent of harmonization achieved in a time frame as short as a decade reflects a new life cycle of technology and products. The effort required countless groups to work together, with chip designers, handset manufacturers, software developers, TV broadcasters, and mobile operators being among hundreds of stakeholders involved. It also required the content generation industry to design content for mobiles, the broadcasting and cellular mobile industries to prepare the transmission systems, and security specialists to come up with new ways to secure content. The change, which became abundantly clear with the advent of mobile phones, had been in the air for quite some time. Mobile phones are no longer "phones," but are multimedia devices for receiving and creating content, for entertainment, and for professional use. The handsets can be connected to PCs, digital and video cameras, office systems, and a host of other devices to deliver or play multimedia files or presentations.

1.4.1 Resources for Delivering Mobile TV

A mobile phone is a versatile device. It is connected to cellular networks and at the same time receives FM broadcasts through its FM tuner or connects to a wireless LAN using Wi-Fi. The delivery of mobile TV can similarly be multimodal: through 3G networks, Wi-Fi, satellite, or terrestrial broadcast networks. In all these manifestations of delivery, a common necessary resource is the spectrum. The rapid growth of mobile TV, in momentum and scale, was an event not foreseen by the industry, though not all may agree with this statement. The result has been that the mobile TV industry has been left scrambling to find spectrum and ways to deliver mobile TV. In Europe, the traditional TV broadcast spectrum in UHF and VHF stands occupied by the transition to digital TV and the need to simulcast content in both modes. The United States, after completing its digital transition, auctioned the excess spectrum, which has enabled technologies such as MediaFLO (AT&T Mobile TV and Verizon VCAST) to cover all markets in the country. In Korea, the DAB spectrum for audio broadcast services was used to deliver mobile TV services in a format named Digital Multimedia Broadcasting-Satellite, or S-DMB. The government also allowed the use of the VHF spectrum for mobile TV services, and this led to the terrestrial version of the DMB services, called T-DMB, being launched and later used in Europe as well, including in Germany and Italy. DVB-H is a standard largely designed to use the existing DVB-T networks and, ideally, the same spectrum. This is indeed the case in many countries, with the UHF spectrum being earmarked for such services. The United States has now adopted the ATSC Mobile DTV standard for mobile TV, which will enable virtually all digital TV stations to simulcast content for reception on mobile devices.
In Japan, which uses ISDB-T broadcasting, the industry chose to allow the same spectrum to be used for mobile TV with a technology called 1-Seg broadcasting. The scramble to provide mobile TV services using the available networks and resources partly explains the multiple standards that now characterize this industry. Serious efforts are now under way to find spectrum and resources for mobile TV on a regional or global basis, which will in the future lead to convergence of the standards.

1.4.2 The Mobile TV Ecosystem

It is not only the TV viewers or content producers that constitute the mobile TV community. The new multimedia phones that can display mobile TV can also play music, taken directly off the networks rather than downloaded from a PC. A new industry selling content to mobiles was born. The new opportunities unleashed by software for mobile TV, and by content development in Java or Flash, made millions of software developers working in these fields part of this industry in one go, and their products are now available through the application marketplaces. The chipset industry needed to come up with specialized mobile chips for handling multimedia, content security, and connectivity. The family expanded with new content creators, content aggregators, music stores, and e-commerce platform developers. The need to protect content so that the rights holders could receive their dues (unlike the early days of Internet content sharing) led to serious measures for digital rights management, or DRM. The traditional content production community of Hollywood expanded manifold, encompassing all in the industry: cellular operators, broadcasters, content producers, and those in the vast software, hardware, and services industries.

1.5 New Growth Areas with Mobile TV

Although mobile TV may appear to be an end in itself, it is in fact part of the portfolio of multimedia services that can be delivered by the new generation of mobile networks. It is thus in the company of YouTube, Twitter, Facebook, multimedia messaging (MMS), video calling, client-server multimedia and Java applications, location-based services (LBS), instant messaging, and so on. In fact, the increasing use of multimedia was a foregone conclusion after the success of i-mode services in Japan, which demonstrated the power of the data capabilities of wireless networks. The launch of FOMA (Freedom of Mobile Multimedia Access) services, with its 3G network, took interactivity and multimedia applications to a new level. The new generation of networks empowers users to generate their own content, which can be broadcast or shared with others. Rich media services have become a part of all advanced third-generation networks.

1.6 What Type of Opportunity Does Mobile TV Present?

What is available today as mobile TV is only the tip of the iceberg. Although there are over 200 networks operational today for delivering mobile TV, what is happening right now is a major move toward regionally harmonized broadcasting using new technology networks such as ATSC Mobile DTV, FLO, or DVB-H, and open standards for ESG, encryption, and content. This is changing the landscape for content providers and network operators by enabling them to target larger audiences more efficiently, while users benefit from open handsets and lower-priced offerings. Using the mobile networks to address over 4 billion mobile users is an exciting idea that draws content producers to MIPCOM. It is even more exciting to be able to target hundreds of millions of devices that have the capability to process and deliver video in real time. Broadcasters crowd the NAB or IBC mobile TV forums to get the quickest entry into the new world of mobile broadcasting.


However, it is not only the number of subscribers or the revenues that reveal the future potential of mobile TV. The medium is much more personal, direct, and interactive: a significant departure from broadcasting to a faceless set of customers, which is what most broadcast environments provide. Mobile TV provides a new opportunity to a wide range of users. Users gain new power from the multimedia capabilities built into the handsets, which now include video, audio, and multimedia applications properly configured to deliver live TV or video on demand. The nature of content needed for mobile networks is different, so the media industry also gets an opportunity to create new distribution platforms, target advertising, and reuse existing content for the new networks. The broadcast and cellular operators see a new growth market, and there is considerable new opportunity for the manufacturing and software industries.

1.7 What Handset Types Does Mobile TV Work On?

The capability to receive mobile TV is today largely dependent on the delivery network. Terrestrially delivered mobile TV can be received only with handsets that have a tuner built in specifically for the type of broadcast, e.g., DVB-H or MediaFLO. Such handsets may be operator-specific, and the choice may be limited to just a few types. In markets such as Korea or Japan, where free-to-air transmissions exist, more than 80% of handsets have a tuner built in. Most 3G multimedia smartphones, on the other hand, permit reception of streamed mobile TV.

1.8 Is Mobile TV Really Important?

A question asked in millions of mobile TV blogs is whether mobile TV is really that important. Would anyone really watch TV on these sets once the initial craze was over? The answer, it would appear from initial responses, is probably yes. This is so because mobile TV can be made widely available through broadcast networks, and watching it is not necessarily expensive. Users today are on the move, and refreshing new content and updates, fun, and music seem to be always welcome, as do the opportunities to remain connected using the new generation of smartphones. Continuous additions to mobile phone capabilities, beginning with a simple camera, then an MP3 player, FM radio, and now mobile TV, have shifted the handset from a mere calling and answering device to being squarely an advanced entertainment, Internet access, gaming, office application, mobile commerce, and utility device. We are now squarely in this new age.

CHAPTER 2

Introduction to Digital Multimedia

There must be an ideal world, a sort of mathematician's paradise where everything happens as it does in textbooks.
Bertrand Arthur William Russell

2.1 Introduction

When mobile TV was initially implemented, it was seen as a textbook case of transporting large-screen content to mobile devices. However, things did not work out as expected. This was no different from the way mobile websites had gone in the initial days of the Wireless Application Protocol, or WAP. There was a certain uniqueness about the mobile world: the small screens, interactive applications, and creative users made it a different world from the relatively passive large-screen TV. The world of digital video is indeed very challenging. It involves delivering video to a range of devices, from tiny mobile screens to giant digital cinemas. In between lies a wide range of devices such as HDTVs, DTVs, PDAs, and so on, with varying sizes and resolutions. The delivery may be via terrestrial, satellite, or cable systems; DTH platforms; IPTV; 3G networks; or mobile broadcast networks such as ATSC, DMB, or DVB-H. All these are made possible by standards and technologies that define audio and video coding, transmission, broadcast, and reception. The basic elements of the digital transmission system are, however, very simple. These constitute a still or moving picture (video) and audio in one or more tracks. The audio and video are handled using compression and coding standards and transmitted using well-defined network protocols. An understanding of the coding formats and of the protocols and standards for transmission is useful to fully grasp the dimensions of mobile TV and other frontline technologies. We begin our journey into the world of mobile TV by taking a broad overview of the media types used in the digital domain. Audio and video compression is a very common topic, and the only reason we need to discuss it here is that multimedia in mobile networks is handled with specific formats and characteristics. The quality of what you see on mobile screens is determined by some of the characteristics we define in this chapter.
© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00002-3


Figure 2.1: Broadcasting environment today.

Digital video on the web and on broadcast networks has traditionally employed different resolutions and coding techniques. As the broadcast networks begin to target mobile devices, an understanding of the multimedia formats, with their origins in different domains, becomes very important. Mobile networks are characterized by transmissions at speeds much lower than those of standard-definition TV and require audio and video to be compressed by very efficient algorithms such as MPEG-4 with limited profiles. Mobile devices present a very constrained environment for applications, owing to the limitations of power, processor, and memory capacity. This implies that they can handle only simple visual profiles of video, comprising limited objects suited to the tiny screens. We look at pictures, video, and audio and the manner in which they are compressed for mobile networks.

2.2 Picture

The basic element of multimedia is a picture. The picture in its native format is defined by its intensity, color, and size. For example, a picture presented on the full screen of a VGA (Video Graphics Array) monitor would be represented by 640×480 pixels. The size of the picture file depends on the number of bytes used to represent each pixel. For example, if a picture is stored at 3 bytes per pixel, the picture size is 640 × 480 × 3 = 921,600 bytes, or about 922 kilobytes (KB). The image size is thus about 0.92 megabytes (MB), and the picture resolution is about 0.31 megapixels. The same picture on an extended graphics array (XGA) monitor (1024×768) would be displayed at a higher resolution, with a file size of 1024 × 768 × 3 = 2,359,296 bytes, or about 2.36 MB. The picture resolution is 0.78 megapixels.
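The arithmetic above can be captured in a small helper (an illustrative sketch, not code from the book; `image_size_bytes` is a name chosen here):

```python
def image_size_bytes(width, height, bytes_per_pixel=3):
    """Uncompressed image size for a given resolution and color depth."""
    return width * height * bytes_per_pixel

def megapixels(width, height):
    """Picture resolution in megapixels (decimal millions)."""
    return width * height / 1e6

# VGA frame at 3 bytes per pixel
print(image_size_bytes(640, 480))    # 921600 bytes, about 0.92 MB
print(megapixels(640, 480))          # 0.3072 megapixels

# XGA frame at 3 bytes per pixel
print(image_size_bytes(1024, 768))   # 2359296 bytes, about 2.36 MB
```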


Figure 2.2: A picture.

2.2.1 Image File Sizes

A picture is represented by pixels, and the number of pixels has a direct bearing on the image's file size. An image as transmitted for standard-definition TV (CCIR 601, now called ITU BT.601) is represented by 720×576 pixels (or 720×480 for NTSC), i.e., about 300 K

Figure 2.3: Screen size and pixels. (Courtesy of 3G.co.uk)


pixels. The same image represented on a mobile TV screen could be 352×240 and would need only 82 K pixels. An HDTV transmission with 1920×1080 pixels needs 2 M pixels for display on one screen. In general, different screen sizes and resolutions are represented by different pixel counts. The pixel count, and the number of bits used to represent each pixel, directly reflect the quality.

2.2.2 Image Resolution: Communications, Computer, and Broadcast Domains

A number of image formats are used for carrying video at lower bit rates and lower resolutions. One of the early formats was CIF (Common Intermediate Format), which was needed for applications such as video conferencing that connect across national borders. As ISDN lines supported only 64–128 kilobits per second (Kbps), full-screen resolution could not be supported. The CCIR H.261 video conferencing standard, for example, uses the CIF and Quarter CIF (QCIF) resolutions. The CIF format is defined as 352×288, which translates to 288 lines with 352 pixels per line. The QCIF format is also used for low-bandwidth applications or on a web page and has only 176×144 pixels. The use of the CIF and QCIF notations to describe the picture "window size" is quite common in the telecommunications and Internet domains. Analog broadcasting is based on one of three standards: PAL, NTSC, and SECAM. The key transmission parameters are NTSC (525 lines, field frequency 60 fields/sec), PAL (625 lines, field frequency 50 fields/sec), and SECAM (625 lines, field frequency 50 fields/sec). (The number of active lines transmitted is, however, lower: 480 for NTSC and 576 for PAL video.) The digital representation of TV signals was standardized in the MPEG-1 standard, which used the term Source Input Format, or SIF. The SIF for NTSC was defined as 360×240 (active pixels 352×240); for PAL it was defined as 360×288 (active pixels 352×288). This is the resolution

Figure 2.4: Image representations.


used in VCDs. It is thus evident that the SIF for PAL is identical to the CIF format used in the communications domain, except that the aspect ratio of the pixels used is 1.22 in CIF as opposed to 1.33 (4/3) in PAL. The CIF format was chosen so as to be divisible into 8×8 blocks of pixels for compression, as you will see later in this chapter. For the computer industry, using video monitors, stating resolution in the form of VGA is much more common, as this was the resolution of the erstwhile color video monitors (monitors used today have much higher resolutions, such as XGA [1024×768] and SXGA [1280×1024]). The VGA resolution is 640 pixels × 480 lines. A Quarter VGA (QVGA) is then 320×240, or about 0.077 megapixels. QVGA is a commonly used format in mobile TV applications, although VGA and CIF resolutions are also used, depending on whether the service originates on a cellular network or a broadcast network. QVGA is sometimes also called Standard Interchange Format (SIF) as defined in the computer industry and represents the same resolution as the source input format (SIF) in the broadcast domain. There are other sizes that can be used to define an image. These can be one half or 1/16 of a VGA screen (i.e., 160×120, or QSIF). As displays moved to higher resolutions, new formats representing higher pixel densities as well as wider aspect ratios (e.g., 16:9 instead of 4:3) became common. Table 2.1 lists the pixels recommended for various image size applications.

Table 2.1: Common Display Formats.

Broadcast Domain
  Format               Pixels       Aspect Ratio
  SIF (PAL)            352×288      4:3
  SIF (NTSC)           352×240      4:3
  480i (SDTV NTSC)     704×480      4:3 or 16:9
  480p                 704×480      4:3 or 16:9
  720p (HDTV)          1280×720     16:9
  1080i, 1080p HDTV    1920×1080    16:9
  QSIF                 176×144      4:3
  Cinema 2K            1998×1080    1.85:1
  Cinema 4K            3996×2160    1.85:1
  Academy 2K           1828×1332    1.37:1
  Academy 4K           3656×2664    1.37:1

Computer Displays/Mobile
  Format    Pixels       Aspect Ratio
  QVGA      320×240      4:3
  CGA       320×200      4:3
  VGA       640×480      4:3
  WQVGA     400×240      5:3
  WVGA      768×480      8:5
  SVGA      800×600      4:3
  XGA       1024×768     4:3
  WSXGA     1280×720     16:9
  SXGA      1280×1024    5:4
  WXGA      1368×766     16:9
  UXGA      1600×1200    4:3

Communications
  Format       Pixels       Aspect Ratio
  CIF          352×288      1.2:1
  QCIF         176×144      1.2:1
  4CIF         704×576      1.2:1
  16CIF        1408×1152    1.2:1
  SQCIF        128×96       1.33:1
  Web 720      720×540      4:3
  Web 720HD    720×400      16:9
  Web 360      360×270      4:3
  Web 360HD    360×203      16:9
  Web 640      640×480      4:3
  Web 640HD    480×270      16:9


Quick Facts

Image Resolution and Display Resolution
The distinction between image resolution and display quality should be appreciated. The quality of a display depends on the pixels per inch (ppi) of the display device. An HD image with 1920×1080 resolution will display well on a TV screen but will not have the same perceived quality on a cinema screen, as the pixels per inch will be too low and the pixel size too large. Digital cinema has a typical resolution of 4K in full aperture (4096×3112) and has 12 million pixels per frame, as opposed to 2 million for HDTV.

Figure 2.5: Image quality and pixel size (or grain size).

Most mobile phones today have high enough resolution to support either 320×240 or 640×480 image resolution. An iPhone 3G, for example, has a screen resolution of 480×320 pixels, which is half of VGA resolution. However, it has 163 pixels per inch, giving a good perceived image quality.
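Pixel density can be estimated from the pixel dimensions and the screen diagonal (a sketch; the iPhone 3G's 3.5-inch diagonal is used here, and the diagonal approximation lands close to, not exactly on, the quoted 163 ppi):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Approximate display density from pixel dimensions and diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
    return diagonal_px / diagonal_in

# iPhone 3G: 480×320 pixels on a 3.5-inch screen
print(round(pixels_per_inch(480, 320, 3.5)))  # 165, close to the quoted 163 ppi
```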

In general, mobile phone screens can display only low-resolution images. This is the reason why a low-resolution camera (i.e., a VGA or CGA camera) is placed near the screen for video calls. Most phones also have cameras that can go up to 8 megapixels. However, such pictures displayed on the screen of the mobile phone


Table 2.2: Image Resolution.

  Print Size      Megapixels    Image Resolution
  Wallet          0.3           640×480 pixels
  4×5 inches      0.4           768×512 pixels
  5×7 inches      0.8           1152×768 pixels
  8×10 inches     1.6           1536×1024 pixels

Figure 2.6: Picture quality by pixels. (Courtesy of cnet.com)

look quite different, due to the lower resolution of the display! For the same size of picture, quality can vary widely based on the resolution of the camera used. The need for high resolution can imply very large pixel counts for digital images. As an example, the Kodak-recommended image resolutions for different print sizes are listed in Table 2.2.

Quick FAQs

1. What is a high-resolution-screen phone?
One of the phones with a high-resolution screen is the Sharp Aquos Fulltouch 931SH. It has a resolution of 1024×480, which brings it on par with XGA resolution, except for the smaller screen size, represented by 480 lines. The phone, incidentally, also has a 1-Seg TV tuner, which can be used to watch mobile TV via ISDB-T.


Figure 2.7: Sharp Aquos Fulltouch 931SH.

2. What is the screen resolution of PDAs?
The most common PDA resolutions are 640×480 (VGA) and 800×480 (WVGA).

2.3 Image Compression

Transmission of a picture in uncompressed format is not practical, due to its large size and the consequent time needed for its transmission. For use on the Internet and in email, image sizes need to be much smaller than the uncompressed formats. There are many ways to reduce the file size, such as:

● Changing the picture size to suit the receiver
● Reducing the number of bytes used to represent each pixel
● Compression

Obviously, there is a very wide range of image formats, with different compression ratios, and the techniques used have a bearing on the image portability and quality. For local storage and special applications (e.g., publication, large-screen displays), it may still be necessary to handle images in uncompressed format.

2.3.1 JPEG Image Format

The JPEG format is one of the most commonly used image formats. JPEG encoders work by dividing a picture into blocks of 8×8 pixels and applying the DCT (Discrete Cosine Transform). The higher-frequency coefficients are then discarded through "zig-zag" scanning (selecting the lower-frequency components first, followed by the higher frequencies), leading to a reduction in file size. The reduction depends on how many coefficients we are willing to discard and, correspondingly, on the loss acceptable in compression. The quantized values are further compressed using lossless Huffman coding. The entire process of compression using the DCT is based on the fact that the human eye cannot perceive the fine details represented by the higher-frequency coefficients, so these can be discarded without discernible loss of quality.


Figure 2.8: Compression using DCT.

Figure 2.9: DCT quantization using zig-zag scanning.


In most cases, a 20:1 compression can be achieved without discernible loss of quality. The compression so achieved is lossy, as the discarded higher-frequency components cannot be recovered. It is for this reason that images needed for studio work and editing are stored in an uncompressed format. JPEG files are stored with the .jpg extension and are widely supported by browsers as well as by a majority of applications.
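The DCT and zig-zag stages described above can be sketched in a few lines (a toy illustration only: quantization and Huffman coding are omitted, and real encoders use fast DCT algorithms rather than this naive loop):

```python
import math

def dct_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block, as used in JPEG."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(0.5) if u == 0 else 1.0
            cv = math.sqrt(0.5) if v == 0 else 1.0
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = 0.25 * cu * cv * s
    return out

def zigzag(coeffs):
    """Read an 8x8 coefficient matrix in JPEG zig-zag order
    (low frequencies first, so trailing terms can be discarded)."""
    order = sorted(((u, v) for u in range(8) for v in range(8)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [coeffs[u][v] for u, v in order]

# A flat gray block: all the energy lands in the single DC coefficient,
# so every AC coefficient after zig-zag scanning is (near) zero
flat = [[128] * 8 for _ in range(8)]
zz = zigzag(dct_8x8(flat))
print(round(zz[0]))   # 1024 (DC term); the other 63 terms are ~0
```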

2.3.2 The GIF Format

The GIF, or Graphical Interchange Format, was originally developed by CompuServe in 1987 and has been a de facto standard for image storage and transmission since then. It is a virtually lossless compression. The GIF format uses the LZW compression technique and, by limiting the color palette to 16 or 256 colors, is efficient at condensing the color information of pixel rows of identical color. It is particularly effective for drawings and sketches with large areas of the same color. There are two variants: GIF87a, which supports 8-bit color (256 colors) and interlacing, and GIF89a, which in addition supports transparency and animation. GIF files are saved with the .gif extension and have universal browser support. The LZW algorithm used by GIF was patented by Unisys.
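The LZW technique behind GIF can be sketched as follows (compression side only, a simplified illustration; the actual GIF format adds variable-width codes, clear codes, and block framing):

```python
def lzw_compress(data: bytes):
    """Minimal LZW compression: grow a dictionary of byte sequences
    and emit one code per longest-known sequence."""
    table = {bytes([i]): i for i in range(256)}  # seed with single bytes
    next_code = 256
    seq = b""
    out = []
    for byte in data:
        candidate = seq + bytes([byte])
        if candidate in table:
            seq = candidate            # keep extending the known sequence
        else:
            out.append(table[seq])     # emit the longest known sequence
            table[candidate] = next_code
            next_code += 1
            seq = bytes([byte])
    if seq:
        out.append(table[seq])
    return out

# A run of identical bytes (a same-color pixel row) compresses well:
# 100 input bytes collapse to just 14 codes
codes = lzw_compress(b"\x00" * 100)
print(len(codes))   # 14
```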

2.3.3 The BMP Format

BMP is the Bitmapped Graphics Format, defined by Microsoft and commonly used in the Windows environment. BMP can reduce file size by supporting 1-, 4-, 8-, or 16-bit color depths, and the images can be uncompressed or use RLE compression. Because images are usually stored uncompressed, the file size is very large. The files have the .bmp extension. Some of the common picture formats are summarized in Table 2.3.

2.4 Video

When there is motion, there is a need to convey continuous information on the objects as they move, which brings us into the realm of video. The handling of video is based on

Table 2.3: Common Picture Formats.
  bmp              Bitmap
  exif             Exchangeable Image File Format
  gif87a/gif89a    Graphical Interchange Format
  jpeg             Joint Photographic Expert Group
  paint/pict       MacPaint and MacDraw
  pdf              Portable Document Format
  png              Portable Network Graphics
  tiff             Tagged Image File Format
  wmf              Windows Meta File


the principle of persistence of vision of the human eye, which cannot distinguish rapid changes in a scene. Taking advantage of the persistence of vision, it is possible to transmit a predetermined number of pictures (called frames) every second, and the human eye will not see any discontinuity in the motion. This is the principle used in cinema projection, where 24 frames per second are shown, with each frame projected twice to give a refresh rate of 48 images per second and provide a feeling of continuous motion.

2.4.1 Generation of Video Signals: Scanning

The word "frame" originates from cameras, which capture a series of pictures called frames that are then passed on in the form of a video output together with one or two audio channels. Each frame essentially represents a picture, and motion is captured by transmitting either 25 or 30 frames per second (based on the PAL or NTSC standard). The first step in the generation of video from pictures is scanning a picture. A camera typically measures the intensity and color information in a picture by scanning a horizontal line across the picture. A series of horizontal lines is used to complete the full picture.

Figure 2.10: Scanning of images.

In the analog domain, the scanning process generates a series of levels (amplitude vs. time) representing the variation of the picture from white to black. The process generates a waveform for each horizontal line until all the lines are scanned and converted into the analog waveform to complete a frame. Each frame has a predefined number of lines. The lines are separated by horizontal blanking pulses, and successive frames by a vertical blanking interval.


Figure 2.11: Scanning in a television frame.

The scanning must be repeated a number of times each second to capture motion, resulting in the transmission of 25–30 frames per second.

2.4.2 Interlaced and Progressive Scanning

When pictures were scanned at a frame rate of 25 or 30 frames per second, there was a visible flicker owing to the time gap between the frames (i.e., 40 milliseconds (msec) for 25 frames per second (fps), while a refresh every 20 msec is needed for a flicker-free viewing experience). The technique that had been used in the motion picture industry, whereby the projector shows each frame twice to reduce flicker, could not be implemented easily in the days of analog signals, as there was no way to store the frame. Hence a new mechanism called interlaced scanning was used. The frame was divided into two halves, each with about half the lines, called fields. The first field carried the odd-numbered lines, while the other carried the even-numbered lines. Interlaced scanning is still used today, even in digital transmissions. Interlaced scanning does not work well when applied to computer monitors or mobile screens. Computer monitors need to display small character images and produce a visible flicker with interlaced scan. They therefore work on progressive scan, which produces better pictures. Nonlinear video editing also requires signals to be processed as progressive scans.
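The field structure described above can be illustrated by splitting a frame's scan lines into its two fields and weaving them back together (a toy sketch; real de-interlacers must also compensate for the time offset between fields):

```python
def split_fields(frame):
    """Split a progressive frame (a list of scan lines) into the two
    interlaced fields: odd-numbered lines and even-numbered lines."""
    top = frame[0::2]     # lines 1, 3, 5, ... (first field)
    bottom = frame[1::2]  # lines 2, 4, 6, ... (second field)
    return top, bottom

def weave(top, bottom):
    """Re-interleave two fields back into a full frame."""
    frame = []
    for a, b in zip(top, bottom):
        frame.extend([a, b])
    return frame

lines = [f"line{i}" for i in range(1, 7)]
t, b = split_fields(lines)
print(t)                          # every other line forms one field
assert weave(t, b) == lines       # weaving the fields restores the frame
```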


Figure 2.12: Interlaced and progressive scan.

2.4.3 Color

The human eye perceives light in three colors: red, green, and blue (called RGB in the video world). While this is a good way to represent the signals, it is more convenient to carry the luminance component and the color components separately. This allows a black-and-white TV to use only the luminance signal. The mapping is done easily by representing the signals as Y (luminance) and color components called U (representing B-Y) and V (representing R-Y). Historically, when all TV sets were monochrome, only the luminance component was used. For backward compatibility, the technique of transmitting the luminance and color signals separately was adopted. Monochrome monitors could continue to display the Y (luminance) signal, while color TV sets would use both the luminance and color signals. As the human eye perceives color detail at lower acuity than luminance, the bandwidth of the color signals is kept lower than that of luminance. For example, in PAL, the luminance channel is transmitted at a larger bandwidth of 5.5 MHz, while the U and V channels are transmitted at 1.5 MHz. Similarly, in NTSC the color channels, called I and Q, are transmitted at bandwidths of 1.3 MHz and 400 KHz, respectively, against 4.2 MHz for luminance.
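The Y/U/V mapping can be sketched numerically using the BT.601 luma weights and the conventional PAL scale factors for the color-difference signals (0.492 and 0.877); this is an illustrative sketch, not code from the book:

```python
def rgb_to_yuv(r, g, b):
    """BT.601 luma plus scaled color-difference signals (PAL-style YUV).
    Inputs are normalized to the range 0.0-1.0."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # scaled B-Y
    v = 0.877 * (r - y)                     # scaled R-Y
    return y, u, v

# Pure white: full luminance, zero color difference
print(rgb_to_yuv(1.0, 1.0, 1.0))   # approximately (1.0, 0.0, 0.0)
```

A monochrome receiver would simply keep `y` and ignore `u` and `v`, which is exactly the backward compatibility described above.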

2.5 Analog TV Signal Formats

Analog video comprises the color components R, G, and B, which may be carried separately on three different wires or cables for local connectivity. This type of video carriage is known as the component format. Computer monitors usually have connectors for accepting component video in the RGB format. In TVs, the YUV format is used owing to compatibility with monochrome devices.


2.5.1 Composite Video

Where signals are carried over medium distances (e.g., within a facility), the three-cable method proves cumbersome, and the technique of a composite video signal is used instead. A composite video signal comprises the luminance component (Y) combined with the color components modulated onto a color subcarrier.

Figure 2.13: Composite and component video.

The NTSC standard uses QAM modulation of the two color components; the SECAM standard uses frequency modulation.

Figure 2.14: NTSC composite signal.


2.5.2 S-Video

S-video avoids combining the luminance and chroma components by keeping the two separate. This means that the video is carried using two cables, one carrying the luminance (Y) and the other carrying the chrominance (C). S-video connectors were frequently used in higher-grade home video equipment.

2.6 Digital TV Formats
Analog video can be digitized by sampling at a frequency larger than the Nyquist rate, i.e., twice the bandwidth of the signal. Sampling is done on the component video (Y, U, V) to generate the digital streams Y, Cb, and Cr, which are then combined into a digital representation of the signal. The color signals can be sampled at lower rates than the luminance signal without perceptible loss of quality. For this purpose, it is usual to code the U and V components at half the bit rate of the luminance component. This is denoted by the nomenclature 4:2:2; i.e., for every four samples of Y there are two samples of U and two samples of V, achieved by using half the sampling rate for the color signals. It is possible to reduce the bit rates further by sampling the color only on alternate lines, giving rise to the 4:2:0 notation. Table 2.4 shows the ITU BT.601 recommended rates for component digital video. In both PAL (576i) and NTSC (480i), each horizontal scan line is represented by 720 samples for Y and 360 for each of the color components Cb and Cr, which makes processing of multistandard digital signals easier. In professional video equipment the samples are represented digitally using 10 bits each. This generates interlaced digital component video. There is a difference between the total picture area and the active area used for carrying video information (see Table 2.5). In digital video, the capacity available through the inactive lines (horizontal ancillary area) and the vertical blanking (vertical ancillary area) is used to carry pairs of stereo audio channels. In the case of NTSC, this available capacity can carry data rates of up to 5.7 Mbps.

Table 2.4: Sampling for Generation of Component Digital Video (PAL).

Sampling    Rates
4:2:2       Luma at 13.5 MHz; Chroma at 6.75 MHz (2 × 3.375 MHz) each
4:1:1       Luma at 13.5 MHz (4 × 3.375 MHz); Chroma at 3.375 MHz each
4:2:0       Luma at 13.5 MHz; Chroma at 6.75 MHz (interleaved)
4:4:4       Luma and Chroma each sampled at 13.5 MHz
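The familiar 270 Mbps SD-SDI rate follows directly from these sampling figures. A minimal sketch of the arithmetic (the function name is illustrative, not from any standard):

```python
# Sketch: serial data rate of ITU-R BT.601 4:2:2 component digital video.
# Luma is sampled at 13.5 MHz and each chroma component (Cb, Cr) at
# 6.75 MHz; professional equipment uses 10 bits per sample.
def bt601_rate_bps(luma_hz=13.5e6, chroma_hz=6.75e6, bits=10):
    """Total serial bit rate: one luma stream plus two chroma streams."""
    return (luma_hz + 2 * chroma_hz) * bits

# (13.5e6 + 2 * 6.75e6) * 10 = 270 Mbps, the SD-SDI rate of Table 2.6
print(bt601_rate_bps() / 1e6)  # 270.0
```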


Chapter 2

Table 2.5: Active Picture Areas Used in Digital Standards.

             Total Area Including Sync    Active Picture Area
             Width      Height            Width      Height     Frame Rate
NTSC         864        525               720        486        29.97
PAL/SECAM    864        625               720        576        25

Table 2.6: SDI Signal Standards.

Video Format             SMPTE Standard    Bit Rate
480i, 576i (SD-SDI)      SMPTE 259M        270 Mbps
480p, 576p               SMPTE 344M        540 Mbps
1080i, 720p (HD-SDI)     SMPTE 292M        1.485 Gbps
1080p (Dual-Link SDI)    SMPTE 372M        2.970 Gbps

In the AES/EBU format, two audio channels can be carried at a data rate of 3.072 Mbps. Thus two to four channels of audio are carried along with component digital video to generate the SDI signal.

2.6.1 SDI Video
Analog-to-digital conversion of video by sampling generates digital video in uncompressed format. This video, along with audio, is delivered in broadcast stations using a serial digital interface and is commonly called SDI. The Society of Motion Picture and Television Engineers (SMPTE) has standardized SDI and HD-SDI video per the notations in Table 2.6. SDI video is generally the source used for all further processing such as encoding, compression, and storage. SDI signals can also carry embedded closed-caption information.

2.6.2 Digital Video for Small-Screen Devices
For CIF and QCIF signals, the ITU provides for a lower chroma sampling of 4:2:0. This leads to the following representations:
Common Intermediate Format (CIF)—288 lines of luminance information (with 360 pixels per line) and 144 lines of chrominance information (with 180 pixels per line).
Quarter Common Intermediate Format (QCIF)—144 lines of luminance (with 180 pixels per line) and 72 lines of chrominance information (with 90 pixels per line).
Table 2.7 lists the CCIR recommended video standards.

Introduction to Digital Multimedia


Table 2.7: ITU Video Standards.

                          CCIR 601 525/60    CCIR 601 625/50
                          NTSC               PAL/SECAM          CIF        QCIF
Luminance Resolution      720×480            720×576            352×288    176×144
Chrominance Resolution    360×480            360×576            176×144    88×72
Color Subsampling         4:2:2              4:2:2              4:2:0      4:2:0
Fields Per Sec            60                 50                 30         30
Interlacing               Yes                Yes                No         No
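The savings from 4:2:0 subsampling can be made concrete: each of Cb and Cr carries half the luma resolution in both axes, so chroma adds only 50% on top of luma instead of 200%. A small sketch (the function name is illustrative):

```python
# Sketch: samples per frame for 4:2:0 video, where each of the Cb and
# Cr components has half the luma resolution horizontally and vertically.
def samples_per_frame_420(width, height):
    luma = width * height
    chroma = 2 * (width // 2) * (height // 2)  # Cb + Cr together
    return luma + chroma

cif = samples_per_frame_420(352, 288)    # 352*288 + 2*(176*144)
qcif = samples_per_frame_420(176, 144)   # 176*144 + 2*(88*72)
print(cif, qcif)
```

At 8 bits per sample this puts an uncompressed CIF frame at roughly 150 KB, which is why the bit-rate reduction techniques of Section 2.7 are essential.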

2.6.3 Interlaced Scanning vs. Progressive Scan for Small-Screen Devices
Small-screen devices (CIF and below) use progressive scan instead of interlaced scan, as shown in Figure 2.15. (Progressive scan is denoted by "p" and interlaced scan by "i".)

Figure 2.15: Display on small-screen devices.

2.7 Video Bit Rate Reduction
SDI video at 270 Mbps is a commonly used standard for professional use in studios, broadcast systems, and a variety of other video-handling environments for standard-definition video. However, for most transmission and broadcast applications there is a need to reduce the bit rates while maintaining acceptable quality. There are two ways in which the bit rate of video can be reduced: scaling and compression.

2.7.1 Scaling
Spatial scaling
In applications where a smaller window size can be used, the number of pixels, and consequently the bits required to carry them, can be reduced. This type of scaling is called spatial scaling.


Temporal scaling
Bit rates can also be reduced for certain applications by reducing frame rates. This is particularly effective for content with limited motion (such as a news reader on TV). An example is RealVideo© streaming, which can drop the frame rate from 30 (or 25) fps to 15 fps or even lower. In mobile streaming, transmission conditions may force frame rates down to as low as 7–10 frames per second; at such rates even persistence of vision cannot provide a perception of continuous motion, and the video looks "jerky."

2.7.2 Video Compression
Compression of video is a complex process, and a number of techniques are employed to compress video by factors of 100 or more while maintaining quality for designated applications. Video compression builds on the techniques for compression of still pictures, such as JPEG compression using the DCT. As each frame represents largely the same picture, with motion in some areas, the techniques of frame prediction and interpolation are used in addition to compression of the picture carried in the frame itself. All compression techniques take advantage of redundancies present in the video signal to reduce bit rates for use in digital TV, mobile TV, IPTV, and other networks. Compression of video can be lossy or lossless. In the case of lossy compression (such as dropping of bits or coefficients in the compression algorithms), the original picture cannot be restored to full resolution.

Spatial redundancy
In normal pictures, there are areas where the pixels all depict the same object, e.g., sky or clouds. In such cases the variation from one pixel to another is minimal, and instead of describing each pixel with all of its luminance and color bits, the pixels can be coded using this statistical redundancy. A code such as Run-Length Encoding (RLE) carries frequently occurring values using fewer bits.

Temporal redundancy
In the case of motion, each frame has some pixels that have changed with respect to the previous frame. However, this is not the case for all pixels in the frame; many carry the same information, as the frame rate is quite high (e.g., 25–30 frames per second). Hence conveying all the information of a frame, as if it were totally unrelated to the previous frame, is unnecessary. Only the "change" information (denoted as motion vectors) between one frame and the next needs to be conveyed.
It is also possible to predict some frames based on the motion vector information. A frame for which all information is carried is called an I-frame; frames that are predicted using motion vectors from previous frames are called P-frames, per the notation used in MPEG-2


compression. There is another type of predicted frame, the B-frame, which is predicted from the I- and P-frames using both previous and next (forward) frames as references. Temporal or interframe compression is possible owing to the large amount of information common between frames, which is carried using only motion vectors rather than full frame information.

Perceptual redundancy
The human retina and visual cortex distinguish the edges of objects with far superior acuity than they do fine detail or color. This characteristic of human vision is used to advantage in object-based coding in some higher-compression protocols such as MPEG-4, which use contour-based image coding.

Statistical redundancy
In natural images, not all values occur with the same probability. This fact can be used to code frequently occurring values with fewer bits and less frequently occurring values with more bits, thereby carrying a greater number of pixels with fewer bits and reducing the bit rate of the stream. This technique, called Huffman coding, is commonly used in compression algorithms.

Scaling: reducing pixel count
An important parameter for the bit rate of a signal is the number of pixels to be carried, as each pixel may need to be encoded with up to 24 bits. As an example, although standard-definition video is 720×480 (345.6 K pixels), the MPEG-1 format, used to carry "VCD quality" (SIF) video, uses a resolution of only 352×288 (101.3 K pixels), reducing the number of pixels to be carried to less than a third. Video conferencing, used over multiple 128 K ISDN telephone lines (H.261), employs only a quarter of the SIF pixels, using 176×144 as the pixel density (25.3 K pixels per frame). This is lower by a factor of about 13 compared to standard-definition video (Table 2.8).

Table 2.8: Bit Rates for Small-Screen Devices.

S.No    Compression Format    Picture Representation          Application            Bit Rate
1       MPEG-1                352×288 SIF                     Video CD               0–1.5 Mbps
2       MPEG-2                720×480 CCIR                    Broadcast TV, DVD      1.5–15 Mbps
3       MPEG-4                176×144 QCIF, 352×288 QSIF      Internet, Mobile TV    28.8–512 Kbps
4       H.261                 176×144 QCIF, 352×288 QSIF      Video Conferencing     384 K–2 Mbps
5       H.263                 128×96–720×480                  Video Conferencing     28.8 K–768 Kbps
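The run-length encoding idea mentioned under spatial redundancy can be sketched in a few lines: runs of identical values (a flat patch of sky, say) are stored as (value, count) pairs instead of one entry per pixel. This is an illustrative toy, not the bit-level RLE used by any particular codec:

```python
# Minimal run-length encoding sketch: consecutive identical pixel
# values collapse into [value, count] pairs; decoding expands them back.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([p, 1])     # start a new run
    return runs

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [200] * 12 + [35] * 4     # 16 pixel values, but only 2 runs
packed = rle_encode(row)
assert rle_decode(packed) == row
print(packed)  # [[200, 12], [35, 4]]
```

The gain depends entirely on the data: flat regions compress dramatically, while noisy regions can even expand, which is why RLE is applied after the DCT has concentrated the energy into a few coefficients.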


Figure 2.16: Compressing video.

Now it is easy to visualize the processes involved in the two areas—scaling and compression—for reduction of bit rates. In our example, SD video (720×480), with 345.6 K pixels per frame at 25 frames per second, requires the transmission of 8.64 megapixels per second. For mobile TV, with QCIF resolution of 176×144 (25.3 K pixels per frame) at 15 frames per second, the required transmission rate is only 380 K pixels per second. By scaling the picture and the frame rate, the pixel rate has been reduced from 8.64 megapixels to 0.38 megapixels per second, a scaling down of approximately 23 times. The final bit rate is of course determined by how many bits are required to carry each pixel; this is the domain of compression. The pixels are now ready to be subjected to compression, the first stage of which is the formation of 8×8 blocks and application of the DCT process, Huffman coding, RLE, object-based coding, and so on, depending on the compression protocol employed. Once the entire process is completed, a bit rate as low as 64 kbps may suffice to carry the information that would otherwise have needed 9.12 Mbps, even at the scaled-down rate of 0.38 megapixels per second at 24 bits per pixel.
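The scaling arithmetic above can be reproduced directly (the function name is illustrative):

```python
# Pixel rate before compression: width x height x frames per second.
def pixel_rate(width, height, fps):
    return width * height * fps

sd = pixel_rate(720, 480, 25)      # 8,640,000 pixels/s (8.64 Mpixels/s)
mobile = pixel_rate(176, 144, 15)  # 380,160 pixels/s (~0.38 Mpixels/s)
print(sd / mobile)                 # ~22.7, the ~23x scaling factor
print(mobile * 24 / 1e6)           # ~9.12 Mbps uncompressed at 24 bpp
```

Scaling alone thus gets from 8.64 Mpixels/s to 0.38 Mpixels/s; compression then closes the remaining gap from ~9.12 Mbps down toward tens of kilobits per second.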

2.7.3 MPEG Compression
MPEG stands for the Moving Picture Experts Group; compression standards formulated by MPEG have been widely used and adopted as international standards.


Figure 2.17: MPEG compression process.

Compressing within a frame
The DCT quantization process for each frame is the same as that used for still images. Each 8×8 block is transformed into another 8×8 block by the DCT. For color pictures, a macroblock comprises four blocks of luminance and one block each of U and V color. The new 8×8 block contains frequency coefficients: the upper-left corner of the block contains the lower frequencies, and these are picked up for transmission, while the lower-right corner contains higher-frequency content that is less discernible to the human eye. The number of coefficients dropped is one of the factors determining the compression. If no coefficient is dropped, the compression is lossless and can be reversed by the inverse discrete cosine transformation process.

Compressing between frames
Compression between frames is used so that not all 30 frames (NTSC) need be transmitted in full every second. This is called "temporal compression." For this purpose, the video is divided into a series of frames called a "group of pictures," which carries three types of frames.

Intraframe or I-Frame: These frames are coded based on the actual picture content in the frame. Thus, each time an I-frame is transmitted, it contains the full information for the picture in the frame, and the receiving decoder can generate the picture without reference to any previous or next frames.


Predicted Frame or P-Frame: Generated from the previous I- or P-frames by using motion vector information to predict the content.

Bidirectional Frame or B-Frame: B-frames are generated by interpolation of past and future I- and P-frame information using motion vector information. The encoder maintains a frame memory, and the transmission order of frames is rearranged so that the decoder receives the reference frames needed to reconstruct a B-frame before the B-frame itself.

The degree of temporal compression depends on the number of I-frames transmitted as a ratio of the B- and P-frames. This depends on the type of source video content and can be set in the encoders. The lowering of the data rate comes from a B-frame containing only about half the data of an I-frame, and a P-frame only about one-third.
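Using the rule of thumb above (B ≈ half and P ≈ one-third of an I-frame's data, as stated in this chapter), the effect of a GOP structure on the data rate can be estimated. A sketch, with illustrative names and an assumed 12-frame GOP pattern:

```python
# Relative frame sizes per this chapter's rule of thumb:
# I-frame = 1.0, P-frame ~ 1/3 of I, B-frame ~ 1/2 of I.
WEIGHTS = {"I": 1.0, "P": 1.0 / 3.0, "B": 0.5}

def gop_ratio(pattern):
    """Data carried by the GOP as a fraction of all-I-frame transmission."""
    return sum(WEIGHTS[f] for f in pattern) / len(pattern)

# A common 12-frame GOP with two B-frames between anchor frames
print(round(gop_ratio("IBBPBBPBBPBB"), 2))  # 0.5
```

With this pattern, temporal compression alone halves the data relative to sending every frame as an I-frame; increasing the spacing of I-frames lowers the rate further at the cost of error resilience and channel-change time.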

Figure 2.18: Temporal compression in MPEG.

2.7.4 Motion Vectors and Motion Estimation
The techniques of spatial compression using the DCT largely address the compression of still pictures. To effectively compress video with moving images, it is also necessary to employ techniques that directly target moving objects. Motion estimation is one such technique used in MPEG.


Motion estimation is done by comparing the position of picture elements (macroblocks) in one frame with their position in previous frames and estimating the direction and magnitude of motion, which is represented by motion vectors. The process is complex, and encoder quality is determined by how accurately the motion can be estimated.
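The comparison described above can be sketched as exhaustive block matching with the sum of absolute differences (SAD) as the matching cost — one common (if slow) approach; real encoders use many speedups. Function names and the frame representation (lists of luma rows) are illustrative:

```python
# Sum of absolute differences between an n x n reference-frame block
# at (rx, ry) and the current-frame block at (cx, cy).
def sad(ref, cur, rx, ry, cx, cy, n):
    return sum(abs(ref[ry + j][rx + i] - cur[cy + j][cx + i])
               for j in range(n) for i in range(n))

def motion_vector(ref, cur, bx, by, n, search):
    """Best (dx, dy) displacement into ref for the block at (bx, by) in cur."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = bx + dx, by + dy
            if 0 <= rx <= w - n and 0 <= ry <= h - n:
                cost = sad(ref, cur, rx, ry, bx, by, n)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# Toy frames: a bright 4x4 patch moves from (4, 4) to (6, 5)
ref = [[0] * 16 for _ in range(16)]
cur = [[0] * 16 for _ in range(16)]
for j in range(4):
    for i in range(4):
        ref[4 + j][4 + i] = 255
        cur[5 + j][6 + i] = 255
print(motion_vector(ref, cur, 6, 5, 4, 2))  # (-2, -1)
```

Only the vector (−2, −1) and a (here zero) residual need be transmitted for this block, instead of its 16 pixel values.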

2.8 Compression Standards
A number of compression formats exist and are used based on the application. The existence of many formats also reflects the historical evolution of compression technology, which has become more complex as the cost of processing-intensive chips has fallen.

2.8.1 MPEG-1
MPEG-1 (ISO 11172) was the first multimedia audiovisual coding standard. It was designed for CD-quality video and audio coding with a limited resolution of 352×288 for PAL and 352×240 for NTSC. The frame rates are 25 fps for PAL and 30 fps for NTSC, as in the analog systems, but MPEG-1 uses progressive scanning. It generates the compressed stream at rates up to 1.5 Mbps and has largely been used for VCDs. It uses the processes of DCT and RLE as well as motion estimation based on pixel motion. MPEG-1 provides for up to two audio channels and three layers of audio coding complexity (layer 1 to layer 3), of which layer 3 is the most popular and is known as MP3. The MPEG-1 standard does not address streaming formats.

2.8.2 MPEG-2
The MPEG-2 standard (ISO 13818) is a compression standard that was finalized in 1994 and is today the most widely used standard for broadcast TV as well as storage applications such as DVDs. MPEG-2 was designed to handle full-resolution video, including HDTV. It can generate bit rates from 1.5 to 15 Mbps for standard-definition video. The type of compression employed is defined through the use of MPEG-2 profiles. For transmission of broadcast-quality standard-definition video (CCIR 601), the "main profile at main level (MP@ML)" is used, which can generate bit rates up to 15 Mbps, though in practice bit rates of 2.5 Mbps may be adequate. For studio processing, the use of B- and P-frames is dispensed with and only I-frames are used, resulting in a compressed video stream at 50 Mbps. I-frame-only compression makes the compressed video suitable for frame-by-frame editing.

MPEG-2 transport frame
MPEG-2 provides a unique structure for the transport stream whereby the stream can carry any number of video, audio, and data channels, which are identified by their program IDs (PIDs) and can be grouped together in any manner using Program Association Tables (PAT).


Figure 2.19: MPEG-2 transport stream.

MPEG-2 is also backward-compatible with MPEG-1 and has provision for carriage on different transport media, including streaming and the ATM (asynchronous transfer mode) adaptation layer. MPEG-2 is the most widely used system in digital broadcasting today. The digitalization of analog TV transmission networks is based on the use of the MPEG-2 transmission format and frame structure. The MPEG-2 transport frame is also used in mobile TV broadcasting networks such as ATSC Mobile DTV and DVB-H, as you will see in later chapters.

2.8.3 MPEG-4
MPEG-4 follows an entirely different approach to video compression. The video objects and the background are considered distinct, basic constituents of a picture. This is a departure from the approach of MPEG-1 and MPEG-2, which use only pixels and blocks to describe the picture. Under MPEG-4, the picture is analyzed so as to identify a single (generally static) background and a number of objects that are in motion. The objects are identified and compressed separately, and information on their motion is sent as part of the stream. The decoder then reconstructs the picture by combining the background and the individual video objects, including their motion.


The MPEG-4 algorithms, which were primarily oriented toward providing high compression and lower bit rates than MPEG-2, have subsequently found application in streaming video applications. To cater to the wide range of applications that are possible using MPEG-4, a number of profiles and levels are defined. Figure 2.20 shows the bit rates generated by MPEG-4 for various screen resolutions.

Figure 2.20: MPEG-4 profiles for mobile devices.

The MPEG-4 visual simple profile is the prescribed standard for video and audio transmission over mobile networks under 3GPP Release 5, as explained in the next chapter. In addition, the profiles for MPEG-4 have been enhanced to include the Advanced Simple Profile (ASP), which provides for interlaced video to be coded using B-frames and global motion compensation. MPEG-4 standards now also include scalable video coding (SVC) through the concept of enhancement layers. The basic level of encoding is the base layer, with image quality as per MPEG-4 ASP (visual). One level of enhancement is provided by better picture quality per frame, known as fine-grain scalability (FGS), which improves the number of bits used to represent each picture or frame. The second layer of enhancement is provided by improving the frame rate, or temporal enhancement (called the FGS Temporal Scalability layer, or FGST). As the MPEG-4 standards define each video object separately, it is possible to define three-dimensional (3D) objects as well, which makes MPEG-4 ideally suited for video handling in applications such as video games and rich media. Compression under MPEG-4 has a number of steps, some of which are:

Identification of video objects—The picture is broken up into separately identified video objects and the background.


Video object coding—Each video object is then coded. The texture coding within the object is handled using the DCT process.

2.8.4 Multimedia and Interactivity with MPEG-4
The high efficiency of video and audio coding achieved by MPEG-4 is not the only success factor behind its increasing use in applications such as IP streaming and mobile TV. It is also tailor-made for interactive and multimedia applications. Why? First, as it is based on object-based coding, it can deal separately with video, audio, graphics, and text as objects. Second, synthetic (and natural) objects can be created and incorporated in the decoded picture. Third, because it is object-based rather than frame-based, it provides flexibility in adapting to different bit rates: it is not limited by the need to transmit a certain number of frames per second, with repeated coding of the same objects on scene changes. This makes it ideally suited to mobile environments, where a user may travel from near a base station transmitter to the outer fringes and the usable bit rate may change considerably. Finally, it has a provision for scene coding called the Binary Format for Scenes (BIFS), which can be used to recreate a picture based on commands. This implies that objects can be reordered or omitted, virtually recompositing a picture with objects, graphics, and text; a picture can be rendered by adding or deleting streams. When such changes are made based on commands (termed Directed Channel Change, or DCC), they can be used for a host of applications with powerful interactivity, such as targeted advertising. The BIFS information determines the source of the elementary streams in the final picture, and these can be different from those of the originating source.

Figure 2.21: Object-based decoding in MPEG-4.


MPEG-4 has 22 parts, which define various attributes of the standard, such as Delivery Multimedia Integration Framework (MPEG-4, part 6), Carriage Over IP Networks (part 8), and Advanced Video Coding (MPEG-4, part 10, now standardized as H.264/AVC).

2.8.5 MPEG-4 Applications
Use of MPEG-4 now spans all applications, from web-based video to digital television production and transmission. MPEG-4 provides a file structure in which .MP4 files can contain video, audio, presentations, images, or other information. MPEG-4 files may or may not contain audio. Files carrying MPEG-4 audio alone are denoted by .M4A; files carrying AAC audio outside the MP4 container are denoted by .AAC.

2.8.6 H.264/AVC (MPEG-4, Part 10)
The H.264 coding standard was the result of a joint development effort of the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) and was released in 2003. It was adopted by the ITU in May 2003 under the H.264 recommendations and by the ISO/IEC as MPEG-4 part 10 (ISO 14496-10) in July 2003. The H.264 standard was oriented toward the twin objectives of improved video coding efficiency and better network adaptation (i.e., coding that is independent of the transmission network to be used). These are achieved by distinguishing between two conceptual layers: the Video Coding Layer (VCL) and the Network Abstraction Layer (NAL). H.264/AVC represents a significant improvement over the previous MPEG-4 standard in terms of bit rates. The lower bit rates and the Network Abstraction Layer make H.264/AVC ideally suited for use in wireless multimedia networks, CDMA, 3G, and other packet-based transport media. For mobile devices, 3GPP release 6 has adopted H.264 video coding as the standard for wireless and mobile broadcast networks; 3GPP release 5 was limited to the MPEG-4 visual simple profile. H.264 enables the transmission of video at bit rates about half of those generated by MPEG-2. This, together with better network layer flexibility and the use of TCP/IP and UDP protocols, is leading to its increasing use in DSL/ADSL networks for IPTV as well as in conventional broadcast networks, which are today completely dominated by MPEG-2. In the coming years, with reductions in the cost of encoding and decoding equipment, the transition to H.264 is expected to be significant. The comparison in Figure 2.22 reflects the bit rates and storage requirements using the MPEG-2, MPEG-4 (Advanced Simple Profile, or ASP), and H.264 standards. MPEG-4 can deliver HD content at 7–8 Mbps, as opposed to 15–20 Mbps using MPEG-2.
H.264 has been ratified as a standard in both the HD DVD and Blu-ray formats. It has also been built into Apple QuickTime 7 (and later versions) as a video codec.
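The storage comparison behind Figure 2.22 is simple arithmetic: file size is bit rate times duration. A sketch, where the bit rates per codec are illustrative assumptions reflecting the roughly-halving trend (MPEG-2 → MPEG-4 ASP → H.264), not figures from the book:

```python
# File size in megabytes for a movie at a given constant bit rate.
def movie_size_mb(bitrate_bps, minutes):
    return bitrate_bps * minutes * 60 / 8 / 1e6

# Assumed DVD-quality rates for a 120-minute movie (illustrative only)
for name, rate in [("MPEG-2", 3.0e6), ("MPEG-4 ASP", 1.5e6), ("H.264", 0.768e6)]:
    print(name, round(movie_size_mb(rate, 120)), "MB")
```

At 768 Kbps, two hours of video fit in about 691 MB, i.e., on a single CD; at an MPEG-2 rate of 3 Mbps the same movie needs roughly 2.7 GB.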


Figure 2.22: Performance comparison of a 120-minute DVD-quality movie at 768 Kbps.

2.8.7 H.264/AVC Encoding Process
In the H.264 encoding process, a picture is split into blocks. The first picture in an encoding sequence is coded as an I-frame, without the use of any prediction information. The remaining pictures in the sequence are then predicted using motion estimation and motion prediction information. Motion data comprising displacement information of a block from the reference frame (spatial displacement) is transmitted as "side information" and is used by both the encoder and the decoder to arrive at the predicted frame (called an inter frame). The residual information (the difference between the actual and predicted blocks) is then transformed, scaled, and quantized, and the quantized transform coefficients are entropy-coded along with the inter-frame or intra-frame prediction information. In the encoder, the quantized coefficients are also inverse-scaled and inverse-transformed to regenerate the decoded residual; this residual is added to the prediction, and the result is fed to a deblocking filter to generate the decoded video.
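The quantization step described above is where the loss in lossy coding actually occurs: transform coefficients are divided by a step size and rounded, and the decoder can only rescale the rounded levels. A toy illustration (not H.264's actual quantizer, whose step sizes and rounding are specified in the standard):

```python
# Toy scalar quantizer: divide by a step size and round to an integer
# level; dequantization merely rescales, so rounding error is permanent.
def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    return [q * step for q in levels]

residual = [52.0, -7.4, 3.1, 0.6, -0.2]   # example transform coefficients
levels = quantize(residual, 4)            # coarser step -> heavier loss
rec = dequantize(levels, 4)
print(levels)  # [13, -2, 1, 0, 0]
print(rec)     # [52, -8, 4, 0, 0]
```

Note how the small coefficients collapse to zero: this is what makes the subsequent entropy coding so effective, at the cost of irreversible error that the in-loop deblocking filter then smooths.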

2.8.8 TV and Video
At this point, it is also important to understand the distinction between TV and video.


Figure 2.23: H.264 encoding.

Table 2.9: H.264/AVC Profiles.

Level 1      15 Hz QCIF at 64 Kbit/s
Level 1b     15 Hz QCIF at 192 Kbit/s
Level 1.1    30 Hz QCIF at 192 Kbit/s
Level 1.2    15 Hz CIF at 384 Kbit/s
Level 1.3    30 Hz CIF at 768 Kbit/s
Level 2      30 Hz CIF at 2 Mbit/s
Level 2.1    25 Hz 625HHR at 4 Mbit/s
Level 2.2    12.5 Hz 625SD at 4 Mbit/s
Level 3      25 Hz 625SD at 10 Mbit/s
Level 3.1    30 Hz 720p at 14 Mbit/s
Level 3.2    60 Hz 720p at 20 Mbit/s
Level 4      30 Hz 1080 at 20 Mbit/s
Level 4.1    30 Hz 1080 at 50 Mbit/s
Level 4.2    60 Hz 1080 at 50 Mbit/s
Level 5      30 Hz 16VGA at 135 Mbit/s
Level 5.1    30 Hz 4K×2K at 240 Mbit/s


The TV world is characterized by the use of interlaced video (for economy of transmission bandwidth) and specific frame rates, frame resolutions, and color information. In the analog domain, these manifest themselves as the NTSC, PAL, or SECAM standards and their variations. In the digital domain, the terms NTSC and PAL are strictly not applicable, as these terms also describe the manner in which color information is handled using subcarriers. However, digital TV still retains the characteristics of the original analog TV signal, such as frame rate and pixels per frame (e.g., 480i at 30 fps or 576i at 25 fps). NTSC signals are characterized by a resolution of 720×480 and a frame rate of 30 fps; PAL signals have a resolution of 720×576 and a frame rate of 25 fps. TV signals, even after digitization or compression, retain their source-specific characteristics (such as NTSC or PAL) and can be used only in compatible environments unless converted. "TV" thus retains its distinct identity as against "video" as used in the Internet-dominated arena, which is common across the world (Figure 2.24).

Figure 2.24: A TV signal maintains its identity of original format after compression.

The term "video" here denotes the way moving images are handled on the Internet or displayed on common display devices such as CRT or LCD monitors. Computer monitors display video in a progressive format rather than an interlaced one. Similarly, on the Internet, video formats follow Internet Engineering Task Force (IETF) standards, which are common across the globe. This is how streaming video can be


displayed or websites opened worldwide without any additional thought to the standards underlying the video and audio.

2.9 The AVS-M Video Coding Standard (China)
China uses video and audio coding per the specifications defined by the Audio and Video coding Standards (AVS) workgroup established by the Ministry of Information Industry (MII). The AVS standards, as they are known, have 10 parts, of which part 7 pertains to video coding for mobile devices and is popularly known as AVS-M.

2.9.1 AVS Standard Parts
Part 1: The AVS System
Part 2: Video
Part 3: Audio
Part 4: Conformance
Part 5: Reference Software
Part 6: Digital Media Rights Management
Part 7: Mobile Video
Part 8: Transmission of Video via an IP Network
Part 9: AVS File Format
Part 10: Audio and Speech Coding

The architecture of the AVS-M codec is very similar to that of H.264. AVS-M supports only progressive scanning of video and uses the 4:2:0 scheme for color components; hence there is no concept of fields, and one picture is always one frame. Only two types of pictures are specified: I-pictures, which are derived from a full encoding of the frame, and P-pictures, which are predicted from a maximum of two reference frames using forward prediction. For the purpose of encoding, the picture is divided into macroblocks. A slice is a sequence of macroblocks in raster-scan order, and slices are always nonoverlapping. A macroblock is partitioned into six 8×8 blocks (4 luma, 2 chroma); alternatively, it can be partitioned as 24 4×4 blocks (16 luma and 8 chroma). AVS-M uses VLC coding and a deblocking filter on the error image in a manner similar to H.264. Blocks of 4×4 samples are used for the integer cosine transform (ICT). For faster processing in the encoder, AVS-M uses a prescaled integer transform (PIT), in which the scaling results are precalculated and available in the encoder. Predicted (P) frames may be distant from their reference frames for better error resilience. It is also possible to mark frames so that they will not be used as references for prediction; such frames can be dropped, if temporal scaling is required, without affecting the predicted frames, so dropping frames has no cascading effect on video quality.


Table 2.10: AVS-M Profiles and Levels (JiBen Profile).

Level    Screen Size                          Maximum Bit Rate    Maximum Frame Rate
1        SQCIF (128×96) or QCIF (176×144)     64 Kbps             30 fps (SQCIF), 15 fps (QCIF)
1.1      SQCIF (128×96) or QCIF (176×144)     128 Kbps            30 fps (SQCIF), 15 fps (QCIF)
1.2      CIF (352×288)                        384 Kbps            15 fps
1.3      CIF (352×288)                        768 Kbps            30 fps
2        CIF (352×288)                        2 Mbps              30 fps
2.1      352×480 or 352×576                   4 Mbps              30 or 25 fps
2.2      352×480 or 352×576                   4 Mbps              30 or 25 fps
3        VGA (640×480)                        6 Mbps              30 fps
3.1      D1 (720×480 or 720×576)              8 Mbps              30 or 25 fps

As is the case with H.264, specific levels and profiles have been standardized to promote interoperability among AVS-M systems. Under AVS-M, a "JiBen Profile" has been defined, with nine levels; Table 2.10 lists them. Encoders and decoders for AVS-M can be implemented using the reference software provided with the standard. The implementations give performance similar to H.264, but at a relatively lower cost.

2.10 Video Files
Video is not always transmitted immediately after compression; in general, it is necessary to store it. A number of file formats are used in the multimedia industry. Many of these formats have their origin in the operating systems used and the manner in which files were sampled and stored on computers; others are based on the compression standard used. Conversions between file formats are today easily done using a variety of available software.

2.10.1 The Windows AVI Format (.avi)
AVI (Audio Video Interleaved) is the de facto standard for video on Windows-based machines, where codecs for generating AVI video are built in. AVI defines how audio and video are interleaved and carried.

Figure 2.25: AVI format.


AVI is generated by sampling the audio and video input and does not apply any significant compression. For this reason, AVI files are used for storage but not for transmission over networks.

2.10.2 Windows Media Format (.wmv)
The Windows Media Format is a proprietary format of Microsoft used with Windows Media 9 codecs and players. Despite being proprietary, it is used extensively owing to the wide use of Windows machines. The use of .wmv files on other machines, such as Macs, requires Windows Media software.

2.10.3 MPEG (.mpg) Format
As the name suggests, the MPEG format denotes video and audio compressed per MPEG-1 or MPEG-2. Motion JPEG (MJPEG) files are also represented by .mpg files. MPEG being an international standard, both Windows and Mac operating systems provide native support for it.

2.10.4 QuickTime™ (.mov) Format
QuickTime™ is a proprietary format from Apple. It is widely used in the industry for audio and video as well as graphics and presentations. However, it is closely aligned with standards at its core, having MPEG-4 as the base in QuickTime 6 and H.264/AVC in QuickTime 7. Owing to its friendly and advanced features, QuickTime players are available for most operating systems.

2.10.5 RealMedia™ Format (.rm) The RealMedia format has gained popularity through the extensive use of RealMedia players and servers on the Internet. The basic versions of RealMedia Producer, Server, and Player (such as the free RealPlayer® 10) have been available as free downloads, which has also contributed to its widespread use. Full-length movies and music are now available through the Rhapsody® music store. Many websites host content in the RealMedia format, and for this reason support for RealMedia is nearly essential for any device accessing the web.

2.10.6 Flash Video™ (.flv) The Flash Video format has become one of the most extensively used formats for web-based video delivery, both for streaming and download. Flash Video can be played by Adobe Flash players (such as Adobe Flash Player 10) as well as by web browser plug-ins. Flash Video is used by websites such as YouTube®, Google Video®, Yahoo Video®, and many others. Flash Video players can be downloaded and installed free, and their popularity has led to these players being installed on an estimated 80% of computers connected to the Internet. Third-party players that support DirectShow®, such as Windows Media Player, QuickTime (with the Perian® plug-in), and the VLC media player, also support Flash Video. Originally, Flash Video used a proprietary variant of the H.263 codec (called Sorenson Spark). However, version 10 of Flash supports video compression in H.264 and audio in MP3 or AAC, making it a more open format.

2.10.7 The DivX Format DivX is a media format (and a media player) generated using DivX codecs. The name owes its identity to DivX, Inc., the company that originally introduced these codecs, which can compress lengthy videos into files of manageable size. These files can be played by DivX players, and the DivX codec can also be used as a plug-in for a variety of players such as Windows Media Player. DivX files carry the extension .divx and denote a media container that can hold multiple video streams, multiple audio tracks, subtitles (in the DivX-specific XSUB format), interactive video menus, and so on. Although the original DivX codec used proprietary encoding, release 7 of DivX includes MPEG-4 video and AAC audio (amongst other formats) in .divx containers. DivX has multiple profiles based on the screen resolution needed, spanning the range from mobile devices to HD content (Table 2.11).

Table 2.11: DivX Profiles.

DivX Profile     Application                   Resolution            Bit Rate (Average–Peak)
6.5 and above    High Def                      1920×1080 @ 30 fps    4–20 Mbps
4 and above      High Def                      1280×720 @ 30 fps     4–20 Mbps
3.11 and above   Standard Def, Home Theater    480×720 @ 30 fps;     4–8 Mbps
                                               576×720 @ 25 fps
5 and above      Mobile                        320×240 @ 30 fps      0.6 Mbps

An alternative container format used for DivX content is the Matroska Multimedia Container format. Files in this format most often carry the extension .mkv. The Matroska container is characterized by its capability to store a large number and variety of audio, video, subtitle, and metadata tracks, and features an interactive menu for access to multimedia content similar to DVDs. The latest version of DivX (DivX Plus HD) can play all DivX video formats, including content in the HD (.mkv) format. Most well-known players, such as ALLPlayer®, VLC media player, DivX Player, and Media Player Classic, provide native support for content in the Matroska format. DivX Player is available for free download from the divx.com website. A player with more enhanced features (DivX Pro) is available for about $20 and offers conversion to DivX video, advanced encoding features, and a DivX Plus™ HD encoding profile. DivX encoding can compress typical DVD content of 4.7 GB into a file of about 700 MB, which makes it a good option for Internet-based downloads.
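As a quick sanity check on those numbers, the average bit rate implied by a 700 MB file holding a two-hour movie can be computed directly (the helper function below is illustrative, not part of any DivX tool):

```python
def avg_bitrate_mbps(file_size_mb, duration_s):
    """Average bit rate (in Mbps) implied by a file size and running time."""
    return file_size_mb * 8 / duration_s

# A 700 MB DivX rip of a two-hour DVD works out to roughly 0.78 Mbps,
# comfortably within the Standard Def profile's 4-8 Mbps ceiling.
print(round(avg_bitrate_mbps(700, 2 * 3600), 2))   # 0.78
```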

2.10.8 Xvid Format Xvid is a family of video codecs developed by erstwhile DivX staff who embarked on a separate line of development. It is based on MPEG-4 Part 2 (Advanced Simple Profile) encoding. Xvid is free software available for a number of platforms and can be downloaded from www.xvid.org. Like DivX, Xvid codecs provide high compression for video. Xvid is available for mobile devices, and Xvid mobile players can be downloaded free.

2.10.9 MXF File Format The MXF (Material eXchange Format) is an SMPTE standard (SMPTE 377M) that has been in use in professional video equipment such as video servers and nonlinear editors (NLEs). MXF defines a file container that can carry multiple types of video and audio and is thus an open file standard. Having a common file wrapper (defined by MXF) simplifies the transport and storage of video content. MXF files are denoted by the file extension .mxf. It is possible to mix NLEs, servers, cameras, and other devices from multiple vendors by exchanging content in the MXF format. MXF also effectively captures and transfers metadata, which is very important in a multidevice environment. MXF can also store files in a streamable format; this is made possible by index tables that maintain partitioning for streaming or for file transfer.

Figure 2.26: MXF file format.


However, not all MXF files are interchangeable, despite the original intention of having a common format. This is due to variant subformats of MXF created by certain equipment, such as the Panasonic DVCPRO-V2.

Quick FAQs Multimedia Formats
1. Where is the DivX format most commonly used? The most common use of the DivX format is for online transfer of movies and in home theater systems. Its advantage lies in manageable file sizes, even for HD content. It is also used in consumer devices such as Sony's PlayStation Portable (PSP®) and many video games.
2. If MXF is a common media format, does it mean that the source of files such as PAL or NTSC content is no longer relevant? NTSC and PAL specifically denote analog signal formats. MXF provides only a container for common transport; source formats such as 480i/30 fps or 576i/25 fps remain the same as the original content source.
3. Is the AVS-M format of China compatible with MPEG-4? No; even though the two formats are very similar, they are not compatible. It is interesting to note, however, that China has chosen H.264 for video and HE-AAC V2 for audio in its standard for mobile TV (CMMB).
4. What is the official audio coding standard for China? China uses audio coding per the DRA standard, which constitutes the official Chinese standard for audio.
5. Why does MPEG-4 not have its own transport stream like MPEG-2? MPEG-4 is designed for delivery over multiple types of media, including IP-based transport. An abstraction layer is used to logically separate the transport from the compression standard.

2.11 File Containers and Wrappers As discussed earlier in the chapter, there are many standards for the compression of video and its storage in files. Some of these formats are proprietary (such as Windows Media and RealVideo); others are based on different ways of "wrapping" video and audio compressed per one of the open standards (MPEG-2, MPEG-4, AAC, MP3, etc.). The file wrapper specifies the types of video and audio present, along with metadata such as title, author, or other details. Video players frequently work with multiple file formats. Further, digital video can be a transmitted digital stream (e.g., from an ATSC transmitter), streamed over IP, or saved in the form of a file. Even though these different variants may look bewildering at first sight, they are based on a few common standards. Figure 2.27 represents the architecture of digital video in its different manifestations.


Figure 2.27: An overview of digital video and audio.

When video and audio need to be stored in a file format, the common method is to use a container for the audio and video, which is then placed in a file wrapper that describes the types of audio and video content and the metadata. These stored files are then recognized by appropriate wrappers such as QuickTime, Flash Video, MP4, DivX, ASF, and others. Moreover, even these containers can carry video and audio in many alternative formats. Figure 2.28 further elaborates on how the video and audio content is handled after compression. The MPEG-2 transport stream may, for example, be transmitted via a terrestrial transmission system using ATSC (or DVB-T, or ISDB-T), or via a satellite or cable system. The stored files can be played by an appropriate player or streamed by a media streamer.
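Players and converters typically identify a wrapper by its signature ("magic") bytes at the start of the file. A minimal sniffing sketch follows; the function name and the small signature set are illustrative, and real sniffers check many more formats:

```python
def sniff_container(header):
    """Guess a media container from its first bytes using a few
    well-known signatures."""
    if header[:4] == b"RIFF" and header[8:12] == b"AVI ":
        return "avi"                      # AVI: RIFF chunk with 'AVI ' type
    if header[4:8] == b"ftyp":
        return "mp4/mov family"           # ISO base media: 'ftyp' box at offset 4
    if header[:3] == b"FLV":
        return "flv"                      # Flash Video
    if header[:4] == b"\x1a\x45\xdf\xa3":
        return "matroska (mkv)"           # EBML header used by Matroska
    return "unknown"

print(sniff_container(b"RIFF\x00\x00\x00\x00AVI LIST"))   # avi
```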

2.11.1 File Format Converters File format conversion is frequently required in view of multiple standards of video capture, storage, editing, and transmission. In most cases in which the file formats are in an open standard, the file format conversion is a straightforward process converting the “wrapper” to the desired format such as QuickTime, AVI, MXF, and so on. An example of such a format


Figure 2.28: Overview of digital video formats, continued.

converter is the XFconverter™ 1.1 from OpenCube Technologies. It provides compatibility with different container formats such as AVI, MXF, GXF, QuickTime, and Wave, and can handle video and audio essences in all common formats such as MPEG-2, MPEG-4, DV, DVCPro, and DVCProHD. An easy-to-use GUI provides an excellent way to manage file wrapper conversions.

2.12 Audio Coding There are many ways to represent audio, depending on whether the audio is compressed or uncompressed and on the standard used for compression. Many of these formats have a historical origin in their application (e.g., PCM in telecommunications systems) or in the operating systems of the computers used. The audio standard chosen also depends on the application: music systems require high-fidelity audio (20 Hz–20 KHz with two or more channels), while mobile phones use 100 Hz–4 KHz for speech.


2.12.1 Audio Sampling Basics The range of frequencies audible to the human ear is 20 Hz to 20 KHz. To handle this audio range digitally, the audio needs to be sampled at a rate of at least twice the highest frequency. The sampling rates commonly used are as follows:

● Audio CDs: 44.1 KHz at 16 bits per sample per channel (1.411 Mbps for stereo)
● DATs (Digital Audio Tapes): 48 KHz at 16 bits per sample
● DVDs: 48–192 KHz at 16–24 bits per sample

The large number of bits needed to code audio is due to its large dynamic range of over 90 dB. Using fewer bits leads to higher quantization noise and loss of fidelity.
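The CD figure quoted above follows directly from the sampling parameters (rate × bits per sample × channels); a quick sketch, with an illustrative function name:

```python
def pcm_bitrate(sample_rate_hz, bits_per_sample, channels):
    """Uncompressed PCM bit rate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

# Audio CD: 44.1 KHz, 16 bits per sample, stereo
print(pcm_bitrate(44_100, 16, 2))   # 1411200, i.e., the 1.411 Mbps cited above
# DAT: 48 KHz, 16 bits, stereo
print(pcm_bitrate(48_000, 16, 2))   # 1536000
```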

Figure 2.29: Sampling and coding of analog audio.

The process of sampling and coding generates Pulse Code Modulated (PCM) audio. PCM audio is the most commonly used digital audio in studio practice. From the perspective of mobile TV, it is useful to distinguish between music (stereo audio of CD quality) and voice (mono and limited in bandwidth to 4 KHz). Some of the sampling rates commonly used are given in Table 2.12.

Table 2.12: Audio Sampling Rates.

S. No.   Audio Source                         Frequency Band       Sampling Rate
1        Speech Telephony                     200 Hz to 3.4 KHz    8 KHz
2        Wideband Speech                      100 Hz to 7 KHz      16 KHz
3        Music                                50 Hz to 15 KHz      32 KHz
4        Music (CD Quality)                   20 Hz to 20 KHz      44.1 KHz
5        Music (Professional and Broadcast)   20 Hz to 20 KHz      48 KHz


2.12.2 PCM Coding Standards Owing to the logarithmic nature of the human ear's perception of audio levels and the wide dynamic range involved, PCM coding is usually done using logarithmic coding. The A-law and μ-law codecs, standardized by the ITU under Recommendation G.711, form the basis of digital telephony. The A-law codec is used internationally; the μ-law codec is used in the United States, Canada, and Japan, amongst others. Both coding standards are similar and provide different step sizes for the quantization of audio. The small step size near zero (i.e., at low levels) helps code even low-level signals with high fidelity while maintaining the same number of bits for coding. A-law voice at 64 Kbps and μ-law voice at 56 Kbps are most commonly used in digital fixed-line telephony.
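The logarithmic companding idea can be sketched with the continuous μ-law curve; note that G.711 itself uses a segmented 8-bit approximation of this curve, and the helper names here are illustrative:

```python
import math

MU = 255.0   # mu parameter used in North American/Japanese telephony

def mu_law_compress(x):
    """Continuous mu-law companding of a sample x in [-1.0, 1.0]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A small input (1% of full scale) maps to ~23% of the output range,
# so quiet signals get proportionally finer quantization steps.
print(round(mu_law_compress(0.01), 3))
```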

2.12.3 Audio Interfaces Coded audio, whether PCM or the output of a coder, consists of a bitstream. Audio interfaces are therefore defined that prescribe the line codes and formats for carrying this audio information.

AES-3 audio The standards for the physical interface of audio have been set by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU) under AES-3/EBU. This physical interface provides for a balanced shielded-pair cable that can be used up to around 100 meters. Because the signal must be carried on cable over such distances, it is coded with a line code; AES-3 uses a biphase-mark code so that the digital audio (and its clock) can be recovered at the distant end. AES-3 can carry uncompressed or compressed audio and is most commonly used for the carriage of PCM audio. Commonly used AES bit rates (for two audio channels) are as follows:

● 48 KHz sampling rate: 3.072 Mbps
● 44.1 KHz sampling rate: 2.822 Mbps
● 32 KHz sampling rate: 2.048 Mbps
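These AES-3 line rates follow from the frame structure: one 32-bit subframe per channel per sample period. A quick check, with an illustrative helper name:

```python
def aes3_bitrate(sample_rate_hz, subframe_bits=32, channels=2):
    """AES-3 line rate: one 32-bit subframe per channel every sample period."""
    return sample_rate_hz * subframe_bits * channels

for fs in (48_000, 44_100, 32_000):
    print(fs, aes3_bitrate(fs))   # 3.072, 2.8224, and 2.048 Mbps respectively
```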

2.13 Audio Compression In most applications, audio must be transmitted at rates that may range from 8–12 Kbps (cellular mobile speech) to around 144 Kbps (stereo music). There is thus a need to compress audio. Audio compression exploits the characteristics of the human ear, which does not perceive all frequencies equally. The following are the main characteristics of the human ear that can be used to advantage when introducing compression.


Stereophonic perception limit The ear does not recognize sound as stereo below 2 KHz, so sound can be transmitted as mono for frequencies below this threshold. Hearing threshold When the sound level is low, the human ear is most sensitive to the middle band only; it is relatively insensitive to low-frequency and high-frequency sounds. Frequency masking A high-level tone masks lower-level signals at nearby frequencies (i.e., it raises the threshold of hearing around it). Lower-level signals that will not be heard can be discarded in the compression process.
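The masking effect can be sketched numerically as a rule for discarding inaudible sub-bands. The thresholds and the single-neighbour mask below are assumptions for illustration, not the actual MPEG psychoacoustic model:

```python
def apply_masking(subband_levels_db, mask_drop_db=20.0):
    """Toy masking rule: a loud band raises the hearing threshold of its
    immediate neighbours; any band below that raised threshold is marked
    for discarding (i.e., allocated no bits)."""
    keep = []
    for i, level in enumerate(subband_levels_db):
        neighbours = subband_levels_db[max(0, i - 1):i] + subband_levels_db[i + 1:i + 2]
        threshold = max(neighbours, default=-120.0) - mask_drop_db
        keep.append(level >= threshold)
    return keep

# The 30 dB band next to a 60 dB band, and the 10 dB band next to a
# 55 dB band, fall under the raised thresholds and are discarded.
print(apply_masking([60.0, 30.0, 55.0, 10.0]))   # [True, False, True, False]
```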

2.13.1 Audio Compression and Coding Principles Audio coders use perceptual compression, on the twin basis of the human ear's perception and the discarding of irrelevant data. Sub-band coding is the most common technique, whereby the spectrum is split into a number of sub-bands. The bands that would be masked by louder components nearby are then discarded.

MPEG compression MPEG has developed standards for audio coding that are widely used in the industry. MPEG-1 audio coding uses a psychoacoustic model.

MPEG-1 For audio coding, the MPEG-1 compression standard is most popular; MPEG-1 Layer 3 (MP3) has been used widely over the Internet and in media players. MPEG-1 has three layers of increasing complexity in the compression and encoding process. MPEG-1 Layer 1 is used in digital compact cassettes; Layer 2 is based on the MUSICAM compression format, which is also used in Digital Audio Broadcasting systems (replacements for analog FM broadcast systems). Layer 3 (known as MP3) is an Internet standard and is used in many popular MP3 players. The sampling rates provided for in MPEG-1 are 32, 44.1, and 48 KHz. MP3 is also used for the audio associated with digital video in VCDs, which use MPEG-1. The MPEG-1 standard has the following components describing video and audio:

● Part 1: MPEG-1 Program Stream
● Part 2: MPEG-1 Video for CD
● Part 3: MPEG-1 Audio
● Part 4: Conformance
● Part 5: Reference Software

2.13.2 Advanced Audio Coding (AAC) (MPEG-2 Part 7) MPEG-2 has been the standard for digital broadcast TV since its introduction and is one of the most widely used standards in the industry. The Advanced Audio Coding (AAC) standard was developed as an improvement over the MP3 audio standard (MPEG-1 Part 3). Three profiles were defined for AAC: low complexity (AAC-LC), main (AAC-Main), and scalable sampling rate (AAC-SSR).

2.13.3 Audio Codecs in MPEG-4 MPEG-4 audio coding constitutes a family of standards that cater to a wide range of bit rates. It brings in much more complex algorithms with superior compression, and MPEG-4 encoding also generates audio in the AAC format; MPEG-4 AAC is backward-compatible with MPEG-2 AAC. MPEG-4 AAC adds a tool called Perceptual Noise Substitution, which removes the coding of background noise to reduce the data rate. It also uses Joint Stereo Coding, whereby the similarity between the left and right audio channels is used to remove redundancy between the channels. Redundancy between consecutive audio frames is reduced by the Long-Term Predictor (LTP), which removes stationary harmonic signals from the encoding cycle. The AAC standard is very popular because of its use in Apple's iPod™ and the iTunes™ music store. MPEG-4 AAC provides better quality than MP3 at the same bit rates and also supports coding of multichannel audio. AAC codecs have three profiles based on the environment in which they are used: AAC-MP (main), AAC-LC (low complexity), and AAC-SSR (scalable sampling rate). The functionalities introduced in MPEG-4 include multiple-bit-rate coding and scalable coding. Variable-bit-rate coding algorithms are better suited to media where streaming is involved and fixed delivery rates cannot be guaranteed. The new techniques introduced in MPEG-4 AAC include:

● Speech codec HVXC (Harmonic Vector eXcitation Coding): Used to code speech at 2 Kbps and 4 Kbps.
● CELP coder (Code Excited Linear Prediction): Provides encoding from 6 Kbps to 18 Kbps, with options for 8 KHz and 16 KHz sampling rates.

MPEG-4 high-efficiency AAC V2 The HE-AAC V2 codec (popularly known as AAC-Plus) improves on AAC coding in that it lowers the bit rate without degradation of quality.


Figure 2.30: MPEG audio formats.

Figure 2.31: MPEG-4 audio encoder bit rates.

This audio codec is very important, owing to its adoption by DVB as well as by standards bodies such as 3GPP and 3GPP2 for use on mobile and 3G networks. It is also the mandatory audio coding standard for Korea's S-DMB mobile TV system as well as the Japanese ISDB-T mobile TV system. It is used extensively for music downloads over 3G and 2.5G networks.


In addition, it is used in the U.S. satellite radio service XM Satellite Radio and in other radio systems such as Digital Radio Mondiale (DRM), the international system for broadcasting digital radio in the shortwave and medium-wave bands.

Figure 2.32: AAC encoder families.

The AAC encoding is improved in two steps, called v1 and v2. HE-AAC v1 uses a technique called Spectral Band Replication (SBR), whereby the correlation between the high-frequency and low-frequency bands is used to replicate one from the other. Version v2 goes further by adding another tool called Parametric Stereo (PS). In this technique, the stereo image of the two channels (L and R) is parameterized and transmitted as monaural information together with difference signals, which are then used to reconstruct the stereo signal. Structure of MPEG-4 audio files MPEG-4 files have an ISO container structure that holds metadata along with the content:

● MPEG-4 container file
  ● Song title
  ● Album cover
  ● …
  ● Audio

Audio files coded in MPEG-4 are denoted by an .mp4 or .m4a suffix. The MPEG-4 container has multiple parts, including the title and album cover, constituting all the information in the signal being transmitted. It is possible to apply DRM to MPEG-4 audio.
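As a sketch of this container structure: ISO-family MPEG-4 files are built from "boxes," each prefixed with a 4-byte big-endian size and a 4-byte type. The parser below walks a synthetic in-memory example; the box layout shown is illustrative, and real files nest many boxes inside 'moov':

```python
import struct

def parse_boxes(data):
    """List the (type, size) of top-level boxes in an ISO base-media stream."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        btype = data[offset + 4:offset + 8].decode("ascii")
        boxes.append((btype, size))
        if size < 8:        # size values 0/1 (to-end / 64-bit) not handled here
            break
        offset += size
    return boxes

# A minimal synthetic stream: a 16-byte 'ftyp' box followed by an empty 'moov'
sample = (struct.pack(">I4s4s", 16, b"ftyp", b"mp41") + b"\x00" * 4
          + struct.pack(">I4s", 8, b"moov"))
print(parse_boxes(sample))   # [('ftyp', 16), ('moov', 8)]
```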


2.13.4 The AMR-WB+ Codec The extended adaptive multirate wideband codec (denoted AMR-WB+) is derived from the AMR series of codecs, as opposed to the MPEG lineage of AAC. Mobile multimedia applications require an audio codec that can handle a wide range of content (speech, sports, news, and music) and provide extremely low bit rates at consistent quality. To maintain consistent quality over widely fluctuating received-channel conditions, the audio codec must also be able to adapt its bit rate rapidly. AMR-WB+ meets these requirements and provides bit rates in the range of 6 Kbps to 48 Kbps for stereo audio (up to 48 KHz sampling) by using the dual technologies of ACELP for speech and transform coded excitation (TCX) for music. Quick FAQs

AMR-WB+ Codec
1. Where is the AMR-WB+ codec used? AMR-WB+ has been adopted for use in mobile broadcasting by both 3GPP and DVB. It is used in the Packet Switched Streaming Service (PSS), Multimedia Broadcasting and Multicasting (MBMS), DVB-H (IP Datacasting [IPDC] over DVB-H), and MMS in mobile messaging. It is also used for podcasting, audio books, and commercial video clips.
2. Why is MP3, which is very popular for music, not preferred for mobile streaming? At the bit rates available in a mobile broadcasting channel (i.e., 8 to 48 Kbps), AMR-WB+ performs much better than MP3. Its adaptable bit rates make it more efficient for speech, music, and mixed content, and it copes efficiently with variable-quality mobile broadcast channels.
3. How is AMR-WB+ encoding done in practice? 3GPP Release 5 (and above) encoders support selection of AMR-WB+ as one of the methods of converting audio.
4. What is the Sony® ATRAC audio coding? The ATRAC codecs (Adaptive TRansform Acoustic Coding) are a proprietary coding scheme developed by Sony for portable audio players, e.g., those using MiniDiscs (ATRAC CDs). In order to maintain high fidelity, frequencies up to 22.5 KHz are covered, giving a stereo encoding rate of 292 Kbps. It is used in the Sony Walkman and in Walkman phones in some markets.

2.13.5 Proprietary Audio Codecs Some of the codecs used in the industry do not fall under the MPEG umbrella; the prominent ones include Windows Media, Apple QuickTime, and RealAudio. Windows Media 9 players are available by default on Windows-based machines and use Windows Media Codec version 9. A wide range of sampling and encoding rates can be selected depending on the application.


Apple QuickTime™ 9 supports a wide range of codecs, including MPEG-4. Some of the proprietary options include the Qualcomm PureVoice™ codec for speech encoding, the Fraunhofer™ IIS MP3 audio codec, and the QDesign Music codec for music. RealAudio from RealNetworks provides proprietary audio codecs, including the ATRAC3 codec jointly developed with Sony. The ATRAC3 codec provides high-quality music encoding from 105 Kbps for stereo music.

2.14 Streaming Streaming of content such as video became popular alongside the growth of the Internet in the 1990s. The alternative was to download a file (which, even with MPEG-4 compression, can be 20 MB for three minutes of play), but the wait for the download was generally unacceptable. In streaming mode, video and audio are delivered to users of mobiles or other devices at (on average) the same rate at which they are played out. For example, on a 128 Kbps connection, video at 64–100 Kbps can be streamed continuously, giving the user effectively live access to multimedia content. Streaming is made possible by high-compression codecs together with the technology to "stream" content by converting a storage format to a packetized format (i.e., UDP packets) suitable for delivery over the Internet or IP networks. In principle, there are two approaches to streaming. It is possible to receive video, audio, and web pages using HTTP itself (i.e., without any special protocol). This is referred to as HTTP streaming and is possible if the delivery channel can sustain HTTP data delivery at the required bit rates. A more efficient approach is real-time streaming, which uses the standard IETF protocols RTP and RTSP. In addition, there are proprietary streaming systems, e.g., the Apple QuickTime server, RealNetworks servers, Windows Media, and Flash Video streaming servers.

2.14.1 Streaming Network Architecture Streaming involves the following steps:

● Capture and encoding of content
● Conversion to streaming format
● Stream serving
● Stream transport over IP networks
● Playback on a media player

Complete streaming and delivery solutions have been developed by RealNetworks, Microsoft Windows Media Platforms, and Apple QuickTime multimedia. All of these are widely used. Formats such as QuickTime have support for MPEG-4 coding.


2.14.2 The Capture and Encoding Processes The capture of video involves the acceptance of a video and audio stream in a format that is compatible with the video capture card of the PC or server. The input streams can be in uncompressed or compressed format. After compression, the files are stored in the appropriate compressed format, such as .mpg or .mp4, depending on the encoder used.

2.14.3 File Conversion to Streaming Format In order for the files to be delivered via real-time streaming, they need to have timing control information, which can be used by the server to manage the delivery rate. For this purpose, the files are converted to the streaming format, which adds the timing control information as well as metadata to help the orderly delivery of streaming data for a variety of applications. QuickTime uses a feature called Hint Tracks to provide control information that points to the streamed video and audio information.

2.14.4 Stream Serving Stream serving is a specialized application used in a client–server mode to deliver a continuous series of packets over the IP network to the client. The streaming application uses multimedia real-time protocols developed by the IETF: the Real-time Transport Protocol (RTP), the Real-Time Control Protocol (RTCP), and the Real Time Streaming Protocol (RTSP). The streaming process involves two separate channels that are set up for the streaming session. The data channel provides for the transfer of the video and audio data, whereas the control channel provides feedback from the streaming client (i.e., the media player) to the server.

Figure 2.33: Streaming protocol stack.

The video and audio data that forms the bulk of the transfer in the streaming process is handled by RTP, using UDP and IP as the underlying layers. The data is hence delivered as a series of datagrams without the need for acknowledgments, making the transfer very efficient. The client provides information such as the number of received packets and the quality of the incoming channel to the server via the RTCP channel. From this feedback, the server knows the network congestion and error conditions and the rate at which the client is actually receiving packets, and can act to deliver the packets at the correct rate. For example, based on the feedback from the client, the server can select one of the available streaming bit rates (64 Kbps, 128 Kbps, 256 Kbps, etc.) or lower the frame rate to ensure that the sustained data rate of the transfer does not exceed the capability of the IP channel. RTSP is thus the overall framework under which the streaming content is delivered to a client over the IP network. It supports VCR-like control of playback (play, forward, reverse, and pause), which in association with the client media player gives the user full control over the playback process.
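At the wire level, each media datagram on the data channel begins with the fixed 12-byte RTP header defined in RFC 3550. A packing sketch follows; the helper name and the dynamic payload-type value 96 are illustrative choices:

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    """Pack the fixed 12-byte RTP header (RFC 3550): V=2, P=0, X=0, CC=0,
    followed by sequence number, timestamp, and synchronization source ID."""
    byte0 = 2 << 6                         # version 2, no padding/extension/CSRC
    byte1 = (marker << 7) | payload_type   # marker bit + 7-bit payload type
    return struct.pack(">BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(seq=1, timestamp=90_000, ssrc=0x1234)
print(len(hdr), hex(hdr[0]))   # 12 0x80
```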

2.14.5 Stream Serving and Bandwidth Management The streaming server and the media client that connects to it for streaming operate in a handshake environment.

Figure 2.34: Stream serving.


Quick Facts Transmission Rates Needed for QuickTime Streaming

● 1 megabit per second: 640×480
● 1 megabit per second: 480×360
● 768 kilobits per second: 320×240
● 512 kilobits per second: 320×240
● 384 kilobits per second: 320×240
● 256 kilobits per second: 240×180
● 112 kilobits per second: 240×180
● 56 kilobits per second: 192×144

In a streaming session, if the data rate drops due to link conditions, the client needs to signal the server to carry out intelligent stream switching or other measures such as dropping the frame rate. The process described previously constitutes a one-to-one connection and handshake and is termed a "unicast" connection. For each client (e.g., a mobile or a media player), a separate stream (i.e., a separate data channel and a separate control channel) is set up to run the streaming process. This type of connection may not be ideal when a large number of users access the same content, as the number of streams and the data to be supplied multiply rapidly. The other option is multicast transmission. In a multicast connection, where all users receive the same content, the data is multicast; the routers in the network that receive the multicast stream repeat the data onto the other links in the network. Instead of hundreds or thousands of unicast sessions, each link carries only one stream of multicast content. The approach has many advantages, but the individual clients have no mechanism to request that the server change the bit rate (or take similar action) in the event of transmission disturbances. In MPEG-4, there is a further mechanism to provide higher bit rates to clients on a higher-bandwidth network. The MPEG-4 streaming server transmits a basic low-resolution stream as well as a number of additional streams (helper streams). A client can then receive additional helper streams and assemble a higher-quality picture if bandwidth is available.
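Server-side stream switching driven by client feedback can be sketched as a simple rate-selection rule; the rate ladder and loss thresholds below are assumptions for illustration, not values from any particular server:

```python
RATES = [64_000, 128_000, 256_000]   # assumed available encodings, in bps

def pick_rate(current, loss_fraction):
    """Toy switching rule driven by RTCP-style receiver reports:
    step down one rate on heavy loss, step up when the channel is clean."""
    i = RATES.index(current)
    if loss_fraction > 0.05 and i > 0:              # >5% loss: back off
        return RATES[i - 1]
    if loss_fraction < 0.01 and i < len(RATES) - 1: # clean channel: try higher
        return RATES[i + 1]
    return current

print(pick_rate(128_000, 0.10))    # 64000  (congested: switch down)
print(pick_rate(128_000, 0.001))   # 256000 (clean: switch up)
```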

2.15 Streaming Players and Servers There are a number of encoders and streaming servers, some of them based on proprietary technologies.

2.15.1 RealNetworks The RealNetworks streaming setup consists of the RealVideo™ codec and the SureStream™ streaming server. RealVideo is based on the principles of MPEG-4 coding. It uses frame-rate


Figure 2.35: MPEG-4 layered video coding and streaming.

upsampling, which allows the frame rates required for the selected delivery rate to be generated via motion vectors and frame interpolation. This means that a single media file can be created for different encoding rates. While serving streams, the RealNetworks SureStream server sets up the connection after negotiation with the player. The lowest rate (the "duress" stream) is streamed under the most congested conditions. SureStream uses dynamic stream switching to switch to a lower

Figure 2.36: Stream switching.


(or higher) bit rate depending on the transmission conditions and feedback from the client. For example, it can switch from a 64 Kbps stream to a 128 Kbps stream or vice versa. The RealMedia format uses both the RTP protocol and its proprietary RDT protocol for data transfer. The RealMedia family of protocols is oriented toward unicast streaming.

2.15.2 Microsoft Windows Media Format Windows Media is the Microsoft family of coders and decoders, as well as of streaming servers and players. The encoders can take video files stored in various formats, such as .avi, and generate files in the .wmv (Windows Media Video) or .asf (Advanced Streaming Format) formats. The codecs used are of two types: Windows Media-based and MPEG-4-based. Windows Media Player is available as part of the Windows operating system, and Windows Media Servers (WMS) stream files in the .wmv or .asf formats.

Figure 2.37: Buffered playout in streaming.

Release 9 of Windows Media provides advanced features such as Fast Streaming and Dynamic Content Programming. Fast Streaming provides instant-on streaming (no buffering delay before playback) and “always-on” features suited to broadband connections, ensuring that there are no interruptions during playback. Windows Media streaming is not based on the RTP, RTSP, and SDP protocols but is proprietary. Multicasting is supported via IGMPv3, and Windows Media also supports IPv6.


Figure 2.38: Media players.

2.15.3 Apple QuickTime

Apple's QuickTime is a complete set of tools and players for handling multimedia and streaming. QuickTime components include a browser plug-in, the QuickTime multimedia player, and the QuickTime Streaming Server. In addition to video, audio, graphics, and music (MP3), QuickTime can handle virtual reality scenes. The latest releases of QuickTime use the RTP and RTSP protocols as the underlying stack and MPEG-4 as the core compression standard.

2.16 Summary and File Formats

In this chapter, you have seen that the basic element of multimedia is a picture. The size of the picture in pixels determines the size of the file through which the picture can be represented. Mobile phones have screens that range from a quarter of a VGA screen (QVGA) to WVGA or higher pixel counts. The file size can be further reduced by compression schemes such as JPEG. Moving images are carried as a series of pictures called frames; commercial television systems carry 25 or 30 frames per second. It is common to reduce the bit rates for the carriage of video by compression or by reduction of frame rates. There are many schemes for compression, beginning with MPEG-1 and increasing in complexity. MPEG-2 is today widely used for the carriage of digital television. MPEG-4 and H.264 are further developments that provide lower bit rates. With mobile phones having a small screen size such as QVGA and high compression such as MPEG-4, it is possible to carry video at bit rates as low as 64-384 Kbps. Audio needs to be similarly coded for carriage on mobile networks, and a number of protocols have developed


Table 2.13: Summary of File Formats.

Picture File Formats
BMP (*.bmp): Microsoft Windows Bitmap
GIF (*.gif): Graphics Interchange Format
PNG (*.png): Portable Network Graphics
JPEG (*.jpeg, *.jpg): Joint Photographic Experts Group
WBMP (*.wbmp): Wireless Bit Map

Video File Formats
AVI files (*.avi): Audio Video Interleaved
DV video files (*.dv, *.dif): Digital Video
MJPEG video files (*.mjpg, *.mjpeg): Motion JPEG
MPEG-2 files (*.mp2): MPEG-2
MPEG-4 files (*.mp4): MPEG-4
QuickTime files (*.mov, *.qt): Apple QuickTime
Raw MPEG-4 video files (*.m4v): Source MPEG-4 files
Flash Video (*.flv): Adobe Flash Video format
Raw video files (*.yuv): YUV video files
Real Media files (*.rm): Real Media Video
MPEG-2 program stream files (*.mpg): MPEG-2 Program Stream
MPEG-2 video elementary files (*.m2v): MPEG-2 video elementary stream
WAV/WMV files (*.wav, *.wmv): Windows Audio and Video

Audio File Formats
MP3 files (*.mp3): MPEG-1 Layer 3
Windows Media Audio (*.wma): Windows Media Audio
MPEG-4 audio files (*.m4a, *.mp4): MPEG-4 audio
AAC files (*.aac): Advanced Audio Coding, MPEG-4
Real Media Audio (*.rma, *.ra): Real Media Audio
WAV/WMV files (*.wav, *.wmv): Windows Audio and Video
MIDI: Musical Instrument Digital Interface

for this purpose. These range from MPEG-1 Layer 3 (MP3) to AAC (MPEG-2 Part 7) and MPEG-4 AAC for music, and AMR for speech. The use of advanced compression techniques makes it possible to deliver multimedia to the world of mobile phones. Some of the commonly used file formats found in various applications are given in Table 2.13.

Before We Close: Some FAQs

1. In which formats do popular online sites offer movies, and what are the file sizes for such movies?
Hulu® is a popular online cinema website that provides videos in the Flash Video format. VuzeHD® is a network from which HD videos can be downloaded; this site uses the DivX format. CinemaNow® has over 7500 full-length feature films available for download with support for


multiple formats (DivX, Flash Video, Real video/audio, and Windows Media). A 2.5-hour movie has a download file size of about 1.5 GB and typically requires a T1 line (1.5 Mbps) for download.

2. Is it possible to capture streaming video in a home theater for later viewing?
Many players offer the facility to capture streaming video; e.g., Replay Media Catcher 3 can capture streaming video in QuickTime, Real video and audio, Windows Media, and Flash formats.

3. I have a MiniDV camera bought in the United States that has a FireWire output cable. I am able to process my content using common editing software such as Windows Movie Maker and play it using Windows Media Player. Is this content still NTSC?
Unless the content is converted to another standard such as PAL, it retains its original frame rate and frame resolution. However, that does not prevent it from being played by any of the common media players. Strictly speaking, unless the signal is in analog format, the term NTSC is not applicable; the content should be termed 720×480 30 fps video.

4. What is the .ogg file format?
The .ogg file format is an open-standard container format for audio files. It is promoted by the Xiph.org Foundation as a patent-free technology for encoding, transporting, and downloading audio. Xiph.org's audio codec is Vorbis. Files in the .ogg format can be played by popular players such as iTunes, iMovie, QuickTime, and Windows Media Player.

5. Can Final Cut Pro™ (FCP) 5 be used to create 3GPP content?
Yes. FCP is based on QuickTime and can be used to save content in the 3GPP or 3GPP2 formats.

6. How can Xvid content be displayed on mobile devices?
The Xvid mobile player can be downloaded for a range of devices running Windows Mobile or Symbian. The Xvid mobile profile is designed for mobile devices with limited resources, and its certification ensures compatibility across many devices.
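As a closing sketch, the extension-to-format mapping of Table 2.13 lends itself to a simple lookup table. The helper below covers a few representative entries; the function name and the default value are illustrative, not from the text.

```python
# Representative entries drawn from Table 2.13.
CONTAINER_BY_EXT = {
    ".avi": "Audio Video Interleaved",
    ".mp4": "MPEG-4",
    ".mov": "Apple QuickTime",
    ".flv": "Adobe Flash Video",
    ".mp3": "MPEG-1 Layer 3 audio",
    ".aac": "Advanced Audio Coding",
    ".3gp": "3GPP",
}

def identify(filename: str) -> str:
    """Return the format family for a filename, or 'unknown'."""
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return CONTAINER_BY_EXT.get(ext, "unknown")
```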

Chapter 3

Introduction to Streaming and Mobile Multimedia

In a way staring into a computer screen is like staring into an eclipse. It is brilliant and you don't realize the damage until it is too late.
Bruce Sterling (http://thinkexist.com/quotation/in_a_waystarting_into_a_computer_screen_is_like/262688.html)

3.1 What is Mobile Multimedia?

Imagine how it would be if all the multimedia that you have in your home (the music and the DVDs) could be transported into the mobile domain. Well, we are almost there. But it has involved “fitting the multimedia to the pipe size available.” When mobile devices began to be targeted for the delivery of multimedia (video, music, pictures, animations, Flash movies, and voice), a number of issues needed to be settled. These include:

● Technologies used for multimedia in the mobile domain
● File formats used for multimedia
● Transport protocols to be used on mobile networks
● Procedures for call setup, release, and transfer of multimedia content

Many of these characteristics depend on the mobile devices themselves. How large are their screens? What capabilities do they possess to handle multimedia files, e.g., those in MPEG-2, MPEG-4, or Windows Media? Can they handle multiple services at one time? In this chapter, we will discuss some of the characteristics that set mobile devices apart from fixed desktop devices or home TVs. It turns out that this has not been a single step but a continuous journey through a series of formats to find the right fit, such as Apple coming out with HTTP Live Streaming (adaptive streaming) for 3G iPhones in late 2009, or Microsoft with Silverlight™.

3.1.1 The Mobile World Legacy for Multimedia: The 3GP

The mobile world is today dominated by networks using the GSM, CDMA, or 3G technologies. Devices that can handle multimedia on these networks number in the hundreds of millions. Hence many of the media formats that exist in the industry today are those defined by the 3GPP or 3GPP2 standards.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00003-5

3.1.2 Encoding for Audio and Video

Mobile devices are characterized by small screens, limited processing power, and limited memory.1 This implies that complex encoding and decoding tasks for video, pictures, graphics, animations, and so on need to be defined as subsets of the full-resolution, full-powered desktop applications. The limited capabilities of mobile devices have meant defining encoding standards and encoder profiles (such as MPEG-4 Simple Profile Level 1) to ensure that these can be used safely in a wide range of devices. Specific formats for video, voice, and audio encoding have been prescribed for use in GSM and 3G (UMTS or CDMA) networks.

3.1.3 Screen Sizes, Frame Rates, and Resolutions

Mobile devices are characterized by the use of small screen sizes, e.g., CIF (352×288), QVGA (320×240), or QCIF (176×144). These have a correspondingly lower resolution than SDTV or a desktop (SVGA or XGA). The frame rates of video offered on mobile devices may be at the prescribed NTSC or PAL rate (30 or 25 frames per second, respectively) or may be at lower rates, e.g., 15 frames per second. Service providers select the screen sizes and frame rates over which certain services (such as mobile TV) are offered.
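To see why these reduced sizes and frame rates matter, consider the raw bit rate behind them. The sketch below assumes 4:2:0 chroma sampling (12 bits per pixel), a common choice for mobile codecs but an assumption here, not a figure from the text.

```python
def raw_video_kbps(width: int, height: int, fps: float, bits_per_pixel: int = 12) -> float:
    """Uncompressed video bit rate in kbps.

    12 bits per pixel corresponds to 4:2:0 chroma subsampling (YUV420);
    that figure is an illustrative assumption.
    """
    return width * height * fps * bits_per_pixel / 1000

# QCIF at 15 fps, the 3G-324M baseline discussed later in this chapter:
qcif_raw = raw_video_kbps(176, 144, 15)   # about 4,562 kbps uncompressed
# Delivering this at 64 kbps therefore implies roughly 70:1 compression.
compression_ratio = qcif_raw / 64
```

Even at the smallest common mobile resolution, the codec is doing most of the work; the reduced screen size and frame rate simply bring the compression ratio into a practical range.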

3.1.4 File Formats

A number of file formats for video, music, and voice are prevalent in the industry. These range from completely uncompressed video and audio to compressed and commonly used formats such as MPEG-4 video and MP3 or AAC audio. The file formats are also standardized by bodies such as 3GPP and 3GPP2 so that the files can be delivered and played by universally used players. When video is used in a broadcast environment, it is not sufficient to merely have raw or encoded data. It needs to be associated with “metadata,” which describes the file and media properties. This requires “containers” in which everything can be put together. It is common to use ISO-based file-format containers for storage and transmission of multimedia information. MPEG-4 and 3GPP files, for example, are based on an ISO file container format.

1. It should be recognized that the capabilities of mobile devices have been growing exponentially. With advanced graphics and multimedia processors, 8 GB microSD cards, 8-megapixel cameras, and WVGA-resolution screens already available, it is expected that there will be essentially no major limitations on these devices, except power consumption, in the next two years.


3.1.5 Transmission Media

The transmission of content to mobile devices implies the use of wireless media. The medium can be a cellular network such as GSM, GPRS, CDMA-1X, or CDMA2000, or 3G evolutions such as EV-DO and HSPA. Wireless is a very challenging environment for the delivery of multimedia, because signal strengths, and consequently error rates, can vary sharply as the user moves in the coverage area. The protocols used are expected to recover the data to deliver error-free files. However, for real-time services such as video or music streaming, maintaining a basic transmission rate is critical. There are various mechanisms to sustain rates of data transfer, such as buffering, service flows in WiMAX, and automatic reassignment of resources. Operators need to select these parameters during service planning.

Figure 3.1: Elements of mobile multimedia in the cellular mobile world.
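The role of buffering mentioned above can be illustrated with a small simulation. The playout rate and prebuffer size below are illustrative assumptions: a streaming client fills a buffer before starting playback so that short dips in the wireless link rate do not interrupt the video.

```python
def simulate_playout(arrival_kbps, play_kbps=128.0, prebuffer_kb=256.0):
    """Simulate a client playout buffer in one-second steps.

    Returns the per-second buffer occupancy (in kilobits) and whether
    playback ever stalled. The 128 kbps playout rate and 256 kb
    prebuffer are assumptions for illustration.
    """
    level = prebuffer_kb
    history, stalled = [], False
    for rate in arrival_kbps:
        level += rate - play_kbps      # net fill (or drain) this second
        if level < 0:                  # nothing left to play: a stall
            stalled, level = True, 0.0
        history.append(level)
    return history, stalled

# A three-second dip to 64 kbps drains the buffer, but the prebuffer absorbs it:
levels, stalled = simulate_playout([128, 128, 64, 64, 64, 128, 128])
```

With the dip above, occupancy falls from 256 kb to 64 kb but never empties, so playback continues; a sustained shortfall would eventually stall and force a rebuffer or a stream switch.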

3.1.6 Service Definitions and Transfer Protocols

The transfer of multimedia information can involve a number of steps. These can include setting up a connection, selection of services (voice call, video call, browsing, and so on), negotiation of parameters (e.g., data rates for streaming), and the tearing down of the connection after successful data transfer. Some services may involve point-to-multipoint data transfer, such as a video conferencing service. The 3GPP, for example, defines certain predefined services such as MMS messaging, 64 Kbps circuit-switched data calls (3G-324M), packet-switched streaming (3GPP-PSS), multimedia broadcast and multicast service (MBMS), etc. The services may be associated with specific transfer protocols, such as FLUTE for file transfers.

3.1.7 Animation and Graphics

Another aspect of multimedia in the mobile domain relates to the transfer of applications from the desktop to the mobile space. These applications include streaming servers and players, video players on mobiles, and mobile browsers. Gaming and animation applications also need to be “squeezed in” to fit the capabilities of mobile devices. Hence it is common to use profiles that are more suitable for mobile phones than for desktops, both for the development of such applications and for their execution on the mobile platforms.

Figure 3.2: Software environments for mobile phones.

In general, the mobile multimedia standards prescribe the use of limited types of encoders and encoding formats, subsets of graphics applications (such as Scalable Vector Graphics Tiny), and scaled-down animation software (such as Adobe Flash Lite or Java MIDP®, the Mobile Information Device Profile). To summarize, mobile multimedia has a number of elements:

● Multimedia files
● Handling graphics and animation
● Call setup and release procedures to deliver multimedia
● Multimedia transfer protocols
● Multimedia players or receive-end clients

This is not to suggest that all mobile devices need to be confined to the same limitations on encoder profiles, file formats, and applications that can be supported. A range of mobile devices (including mobile phones) is constantly emerging that can support much higher capabilities than the baseline profiles recommended for backward compatibility. Although the standardized protocols and formats mentioned in this chapter restrict the recommended use to a few specific file types and protocols, networks in practice may use other file formats, encoders, and players, including DivX, QuickTime, Windows Media, Flash Video, and Real, which are proprietary but so widely used as to be considered de facto standards in their own right.

3.2 How Do Mobile Devices Access Multimedia?

There are primarily four ways in which mobile devices gain access to multimedia information.

The first is via a connection made through a telecom network to a “service” providing multimedia content. These types of connections, made via 3G or GPRS/EDGE networks, are often used for video calls or circuit-switched video conferencing. They are set up using the protocols and procedures specifically prescribed for such calls by 3GPP.

The second method of access is by making an Internet connection. This connection can be made via any means:

● Using the 3G/GPRS network on which the device operates.
● As an “external connection” using networks such as Wi-Fi, WiMAX, Bluetooth, or others. Most 3G networks now support seamless access to such unlicensed mobile access (UMA) networks while roaming. The iPhone 3GS can, for example, switch between Wi-Fi and 3G networks where available. More details on the characteristics of third-generation networks are given in Chapter 4.

Third, the mobile device may be in the receiving area of a terrestrial transmission and receive mobile TV or multimedia files.

Lastly, WiMAX and 4G networks such as LTE (Long Term Evolution) deliver multimedia content using unicasting or multicasting.

3.2.1 The Mobile Internet and the .mobi Domain

Mobile devices are being used increasingly to access the mobile Internet. So is there a separate “Internet” for these devices, where all content is designed to suit the capabilities of mobile phones?


Figure 3.3: Mobile devices can access multimedia in diverse ways.

It turns out that this is not the case, even though initial moves were toward designing WAP-enabled sites. It is not a must to design sites specifically for mobile devices; browsers such as Opera Mini can convert content designed for desktops for mobiles. However, it was natural that an Internet top-level domain be made available and customized for mobile devices. That domain is .mobi, registrations for which commenced in 2006. Having a .mobi website helps provide a better web experience to users than a “normal” website. Such sites may be built with special .mobi screen makers and Flash-based animation content, in addition to a better organization of information and media.

3.3 File Formats for Mobile Multimedia

It was evident to all that without harmonized standardization efforts, the deployment of mobile multimedia would be a difficult proposition. Operators, equipment manufacturers, and handset vendors, as well as the standards bodies, became seriously involved in efforts to standardize the file formats, protocols, call setup procedures, and applications for mobile networks. The standardization was done under 3GPP.

3.3.1 3GPP Standardization Areas

3GPP
The 3GPP is a partnership project of a number of standards bodies that set standards for third-generation cellular mobile services and LTE. 3GPP specifically refers to the Third Generation Partnership Project of GSM-evolved 3G networks, i.e., 3G-UMTS. Evolution technologies for higher-speed data transfer, such as HSDPA, HSUPA, and HSPA, as well as LTE, are a result of the coordinated efforts of the 3GPP. The 3GPP releases include standards for the encoding and decoding of audio, video, graphics, and data, as well as call control procedures for cellphones and user devices.

3GPP2
The 3GPP2 partnership was created to provide specifications and an evolution path for cellular networks based on the ANSI-41 (CDMA-1x RTT) standard and its successors, such as CDMA2000 and EV-DO. Its members include organizational partners such as the Telecommunications Industry Association (TIA) of the United States, ARIB and TTC of Japan, TTA of Korea, and CCSA of China, among others.

3.3.2 3GPP Mobile Networks

The services that can be provided using 3G networks have been defined progressively in different releases of 3GPP, which mirror the capabilities of new networks and, consequently, the types of connections that can be established and the multimedia services that can be provided. 3GPP recommendations provide end-to-end protocols for call establishment, media transfer, and release. The first release of the industry-coordinated specifications for mobile networks was in 1999. Since then, progressive developments have been reflected in further releases and upgrades.

3GPP release 1999
The 3GPP release of 1999 resulted in the adoption of Universal Terrestrial Radio Access (UTRA). UTRA is the radio standard for WCDMA; release 99 had provisions for both the FDD and the TDD (3.84 Mcps) modes. Release 99 also standardized a new codec: narrow-band AMR.


Figure 3.4: 3GPP-supported services.

3GPP release 4, March 2001
3GPP release 4 took the first steps toward an IP-based infrastructure, embracing an all-IP core network based on IPv6. It provided for the following main features:

● New messaging systems: Enhanced messaging, including rich text formatting (RTF), still image messaging, and multimedia messaging (MMS).
● Circuit-switched network architecture: Release 4 provided for a bearer-independent network architecture.
● IP streaming: A protocol stack providing for the streaming of real-time video over the network.
● GERAN (GPRS/EDGE radio access network): Release 4 provided for the EDGE/GPRS interface.

3GPP release 5, March 2002
Reflecting the rapid pace of standardization in 3G systems, the 3GPP release of 2002 unveiled the IMS (IP Multimedia System) as the packet core of 3G mobile networks. Voice over Internet Protocol (VoIP) calls became possible using the Session Initiation Protocol (SIP), while legacy switched voice calls could still be made using the circuit-switched core. The concept of HSDPA (High-Speed Downlink Packet Access), based on higher-order modulation (16-QAM, quadrature amplitude modulation), was also unveiled. The release also provided for a wide-band AMR (AMR-WB) codec and end-to-end quality of service (QoS). HSDPA is a major step toward services such as unicast mobile TV on 3G networks.

The framework provided by the IP Multimedia System of release 5 sets the stage for end-to-end IP-based multimedia services, breaking away from the circuit-switched architecture of the previous generations. It also provides for easier integration of instant messaging and real-time conversational services. The messaging enhancements include enhanced messaging and multimedia messaging. It is important to note that the IMS is access-independent. Hence it can support IP-to-IP sessions over packet-data GPRS/EDGE or 3G, packet-data CDMA, IP wireless LANs (802.11 and 802.15), as well as wire-line IP networks. The IMS consists of a session control, connection control, and application services framework. Security interfaces were also introduced in release 5, including access security, access domain security, and a lawful interception interface.

Figure 3.5: 3GPP releases for mobile multimedia.


3GPP release 6, March 2005
A major feature of release 6 was the introduction of MBMS services. The major new features of release 6 of 3GPP were:

● Wide-band codec: Release 6 introduced an enhancement of the AMR wide-band codec (AMR-WB) for better sound quality and coding.
● Packet-switched streaming services (the 3GPP-PSS protocols).
● Wireless LAN to UMTS interworking: A mobile subscriber can connect to a wireless LAN and use IP services via the W-LAN.
● Digital rights management.
● Push services: Pushing of content to mobile devices.
● Multimedia broadcast and multicast services (MBMS).

The 3GPP packet streaming services introduced in release 6 also brought in new media types for streaming. New “brands” were introduced for the description of these services in the ISO basic media file formats so that the content could be identified and directed to the appropriate players in receiving devices. Examples of the new brands are:

● Streaming servers: 3gs6
● Progressive download: 3gr6
● MBMS: 3ge6

3.3.3 Evolution to Packet-Switched Architecture

One of the major areas of standardization in both partnership projects has been the migration from the circuit-switched domains used in the GSM and ANSI-41 CDMA networks to packet-switched domains. In the case of 3GPP, the new architecture for migration to packet-switched domains is the IP Multimedia System (IMS); in 3GPP2 it falls under the multimedia domain (MMD). The initial implementations of the new 3G networks involved the packet networks as an overlay. However, later implementations have been merged into a single IP core.

3.4 3GPP Mobile Media Formats

In the following sections, we will review the progressive evolution of data transmission speeds and delivery mechanisms through the releases of 3GPP. Instead of considering these releases in the abstract, it is also important to understand the forces that have been driving the changes. The simplest illustration is the use of megapixel cameras. A 2 MP camera produces a picture file of approximately 6 MB at 24 bits per pixel. Transmission of such pictures via MMS became practical when HSDPA handsets that could provide a data rate of 1.8 Mbps were introduced in 2006 (such as by Cingular, now part of AT&T, in the United States). This made it possible to download a picture in less than 30 seconds using such a service. HSDPA was unveiled in 3GPP release 5 and was a result of forecasting the impending higher connectivity requirements.
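The figures above are easy to verify with back-of-envelope arithmetic, using only the numbers quoted in the text:

```python
# A 2-megapixel picture at 24 bits per pixel over a 1.8 Mbps HSDPA link.
pixels = 2_000_000
file_bits = pixels * 24              # 48 Mbit of raw image data
file_mb = file_bits / 8 / 1_000_000  # = 6.0 MB, matching the text

download_s = file_bits / 1_800_000   # seconds at 1.8 Mbps
# About 26.7 s, consistent with "less than 30 seconds" in the text.
```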

3.4.1 Mobile Streaming

Streaming video (MP4) and MP3 music are some of the most commonly used services on desktops, so it was natural that they would be extended to the world of mobile devices. This meant that the devices had to support RTSP over mobile networks. However, it was soon evident that in many cases the connections were not good enough to sustain continuous streaming. This led mobile streaming technology to also include a progressive download format, as well as delivery of content as a file before viewing. This became the basis of RSS feeds and podcasting. Using an RSS feed, content creators can publish their broadcasts as progressive download files rather than streaming them in real time, making viewing more pleasant.

The Packet-Switched Streaming Services (PSS) were defined in release 6 of 3GPP. These have become the basis of packet IP connectivity of mobile devices and of video on 3G and HSDPA networks. The new enhancements in 3GPP permit mo-blogging, video sharing (YouTube), and picture sharing (Flickr™) types of services, with portals, IP-based connectivity, and large-file transfer capability.

3.4.2 The IP Multimedia System

IMS, unveiled in release 5 of the 3GPP, was destined to become one of the most important developments in the migration to IP-based core networks. Today it provides the only widely implemented mechanism for fixed-mobile convergence, by virtue of its IP core network, SIP-based call initiation, and media gateways to all types of networks.

The mobile environment today requires users to be continuously connected to networks even when there is little activity, because establishing and releasing connections is resource-intensive in mobile networks. At the same time, upon becoming active, users expect minimal latency in restarting their applications. With an increasing base of UMTS and HSDPA users, this has meant that mechanisms were needed to keep thousands of users continuously connected in every cellular base station area. This very feature, a result of lifestyle evolution, is being introduced in release 7 of the 3GPP as continuous packet connectivity (CPC).

Large storage is now a common feature of smartphones and mobile devices, demonstrated nowhere better than in the iPhone or the iPod with 80 GB of storage. Release 7 of the 3GPP provides a new approach to dealing with multimedia and large files by allowing a high-speed protocol based on USB technology. The new enhancements permit the UICC to be treated as large, secure storage, including the use of flash memory technology, an OMA smartcard web server, and remote file management technologies.


3.4.3 File Formats for Mobile Multimedia in 3GPP

What are the file formats for 3GPP? The 3GPP, for third-generation UMTS (WCDMA) networks, has defined a standard file format to contain audio/visual sequences that may be downloaded to cellular phones so that they play uniformly, regardless of the country the user is in, the handset type, or the operator network. The 3GPP2 (which, as mentioned earlier, is the body for CDMA-evolved networks) has also adopted the use of similar file formats.

The files are based on the ISO file format. Within the file (as with all files in the ISO family), there is an intrinsic file-type box that identifies the specifications with which the file complies and the players that are permitted by the content author to play the file. This identification is through four-letter “brands.” The media files generated by the encoders are based on the MPEG-4 and H.263 coding standards for the initial releases of 3GPP. The files used in GSM, 2.5G, and 3G WCDMA networks are denoted by .3gp and are based on MPEG-4 video and AAC or AMR audio. The files used in CDMA and evolved networks (CDMA2000, CDMA 1x, and 3x) are denoted by .3g2 and are based on the same codecs, i.e., MPEG-4 for video and AAC or AMR for audio, with additional support for QCELP.

MPEG-4 is an object-based encoding standard with a layered structure that separates the coding layers from the network abstraction layers, making it ideal for delivery over a variety of media. MPEG-4 also has a large number of profiles, enabling its application to very low bit rate uses while maintaining the flexibility to go up to higher bit rates through enhancement layers, or to broadcast-quality transmissions right up to high definition (HD). It is also ideally suited to handling computer-generated and animation objects such as synchronized graphics and text, face and body animation, and many other applications.
MPEG-4 Part 10 is also standardized as the ITU-T standard H.264 and fits into the MPEG-4 framework.

3.4.4 3GPP File Formats for Circuit-Switched 3G-324M Services

As MPEG-4 has many profiles and levels, 3GPP has standardized the information in Table 3.1 as the baseline media specifications for use over 3G networks with 3G-324M encoders/decoders. The standardization was considered necessary to limit the complexity of the encoders and decoders used in mobile devices over circuit-switched 3G-324M services. The simple profile permits the use of three compression levels, with bit rates from 64 Kbps in Level 1 to 384 Kbps in Level 3. The MPEG-4 Simple Visual Profile Level 1 has adequate


Table 3.1: 3GPP File Formats for 3G-324M Networks.

Codec Feature: Specification
Video codec: MPEG-4 Simple Profile Level 1; support of MPEG-4 Simple Visual Profile Level 1 (ISO/IEC 14496-2) recommended
Frame rate: Up to 15 fps
Resolution: 176×144 (QCIF)
Audio coding: AMR coding and decoding mandatory; G.723.1 recommended

error resilience for use on wireless networks while having low complexity. It also meets the need for low delay in multimedia communications. The MPEG-4 Simple Visual Profile Level 1 includes support for the H.263 baseline profile codec. The encoding mechanism recommends enabling all the error resilience tools in the simple visual profile. Conversational calls using 3G-324M essentially use the H.263 protocols. The 3GPP recommends the features and parameters that such codecs should support; these extensions are covered in the Mobile Extension Annex of H.263. Support for the MPEG-4 AVC (H.264) codec with the full baseline profile was recommended as optional in release 6 of the 3GPP. Today, support for H.264, or H.264 with modifications for mobile networks, is quite common.
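A transmitting application could check a clip against these baseline limits before attempting a 3G-324M call. In the sketch below, only the numeric limits and codec choices come from Table 3.1; the function name and the codec label strings are illustrative assumptions.

```python
# Limits per Table 3.1; labels and structure are illustrative.
BASELINE_3G324M = {
    "video_codecs": {"mpeg4-sp-l1", "h263-baseline"},
    "audio_codecs": {"amr", "g723.1"},
    "max_width": 176, "max_height": 144, "max_fps": 15,
}

def fits_baseline(video_codec: str, audio_codec: str,
                  width: int, height: int, fps: float) -> bool:
    """Check clip parameters against the 3G-324M baseline."""
    b = BASELINE_3G324M
    return (video_codec in b["video_codecs"]
            and audio_codec in b["audio_codecs"]
            and width <= b["max_width"]
            and height <= b["max_height"]
            and fps <= b["max_fps"])
```

For example, a QCIF clip at 15 fps with MPEG-4 Simple Profile video and AMR audio passes, while a QVGA (320×240) clip fails on resolution alone.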

3.4.5 ISO File Formats

3GPP files (.3gp) are structurally based on the ISO file format, which is the primary standard for MPEG-4-based files. The ISO/IEC formats were standardized by the ISO Moving Picture Experts Group and were originally derived from the QuickTime format. The 3GPP file format described in Table 3.1 is a simpler version of the ISO file format (ISO 14496-1 media format), supporting only video in H.263 or MPEG-4 (Simple Visual Profile) and audio in AMR or AAC-LC formats. The .3gp and .3g2 formats, both of which are based on the ISO file format, have structures to incorporate non-ISO codecs such as H.263, AMR, and AMR-WB (for 3GPP), and EVRC and QCELP (for 3GPP2).

3.4.6 The ISO File Container
The ISO file format essentially provides a container in which the media metadata and information are carried in a universally accepted format (Figure 3.6). For this purpose, the ISO-based media file format defines a "file type" box called "ftyp." This field precedes any variable-length fields such as media data. The file type box also contains fields called "brand" and "compatible brands." The field "brand" describes the best use of the file; "compatible brands" gives a list of compatible formats, e.g., players on which playback may be possible. The values that can be used are defined by 3GPP and 3GPP2.

Figure 3.6: MPEG file format definitions in ISO.

Brand
The field "brand" can be used to indicate the 3GPP release version, thus indicating the file capabilities to the receiver. In general, higher releases such as release 7 are compatible with the lower releases, and these fall in the "compatible brands" field. An ISO-compatible file in 3GPP (release 5 and beyond) also carries the value "isom" as a "compatible brand" to indicate compatibility of the file with the ISO baseline format. 3GPP2 has its own specific values for these fields.
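Because "ftyp" is a fixed-layout box (a 32-bit size, the four characters "ftyp", the major brand, a minor version, then the list of compatible brands), the brands can be read with a few lines of code. The sketch below is illustrative only; parse_ftyp and the synthetic header are not from any particular toolkit, and in practice the bytes would be read from the start of a real .3gp file.

```python
import struct

def parse_ftyp(data):
    """Parse the leading 'ftyp' box of an ISO-format media file (.mp4/.3gp)."""
    size, box_type = struct.unpack(">I4s", data[:8])
    if box_type != b"ftyp":
        raise ValueError("file does not start with an ftyp box")
    major_brand = data[8:12].decode("ascii")
    minor_version = struct.unpack(">I", data[12:16])[0]
    # Everything after byte 16, up to the box size, is compatible brands.
    compatible = [data[i:i + 4].decode("ascii") for i in range(16, size, 4)]
    return major_brand, minor_version, compatible

# A synthetic header such as a release-6 3GPP encoder might write:
hdr = struct.pack(">I4s", 24, b"ftyp") + b"3gp6" + struct.pack(">I", 0) + b"3gp4isom"
print(parse_ftyp(hdr))  # ('3gp6', 0, ['3gp4', 'isom'])
```

Note that "isom" appears among the compatible brands, as described above for ISO-compatible 3GPP files.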

3.4.7 3GPP Files
3GPP files generated by the encoders are based on MPEG-4 and H.263 coding and packaged in accordance with 3GPP standards. In this book, we will refer to .3gpp and .3gp2 files by the generic name .3gp.

Introduction to Streaming and Mobile Multimedia

Figure 3.7: Examples of brand usage in 3GPP files for MMS and download.

Quick Facts
.3GP File Format

Figure 3.8: File conversion to 3GPP.


1. How are .3GP Files Generated?
Mobile phones record video in the .3gp format (some advanced phones also record video in DV or HDV formats); an example is the Sony P1i phone. .3GP files can also be generated from any other content format by all major video processing packages, such as QuickTime Pro.
2. How are .3GP Files Played?
.3GP files can be played by software available on virtually all mobile phones, in addition to desktop players such as QuickTime, iTunes (movies), the VLC media player, RealPlayer, and so on.
3. How can Video Content be Converted to .3GP?
Video content can easily be converted to .3GP by a large number of software packages, some of which are free. One such package is the ImTOO™ 3GP Video Converter, which can convert video files or DVDs to the common cellphone formats and screen resolutions (the software provides a save option for each). Another example of a media converter is the Nokia Multimedia Converter 2.0, which converts AVI, WAV, MPEG, and MP3 into standard 3GPP formats with H.263 video and wide-band or narrow-band AMR audio.

The MPEG-4 format (.mp4) (ISO/IEC 14496-14), with its wrapper and container attributes, allows multiplexing of multiple audio and video streams in one file (which can be delivered over any type of network using the network abstraction layer). It also permits variable frame rates, subtitles, and still images. 3GPP files may conform to one of the following profiles:

● 3GPP Streaming Server Profile: Ensures interoperability when selecting among the alternative encoding options available between the streaming server and other devices.
● 3GPP Basic Profile: Used for PSS and messaging applications (MMS). Use of the basic profile guarantees that the server will work with other networks and mobile devices.

Quick Facts
3GPP-PSS Streaming Profile
Latest Release in Common Use: Release 6
Video Encoding: Mandatory, H.263 Profile 0, Level 10 (QCIF, 64 Kbps); Optional, MPEG-4 Visual Simple Profile and H.264 Baseline Profile
Audio Encoding: AAC-LC (stereo, 48 Kbps) mandatory; AMR-WB mandatory; AAC optional
Vector Graphics: SVG-T mandatory, SVG-Basic optional
Image Coding: JPEG mandatory; GIF 87a, 89a, PNG optional
Session Setup and Control: SDP, RTSP
Presentation Format: Synchronized Multimedia Integration Language (SMIL) 2.0 Basic Language Profile, Meta Information, Media Description, Media Clipping, Event Timing, Basic Transitions
Transport: RTP over UDP for media; RTCP for control, QoS signaling, progressive download
Encryption: Mandatory per 3GPP for storage and transmission; DRM based on the OMA DRM 2.0 standard
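For illustration, the SDP that a PSS server might return for a session using the mandatory codecs above could look like the following (the origin line, addresses, payload types, and track identifiers are placeholders, not taken from any real deployment):

```
v=0
o=- 3344 3344 IN IP4 192.0.2.10
s=Example PSS session
c=IN IP4 0.0.0.0
t=0 0
m=video 0 RTP/AVP 96
a=rtpmap:96 H263-2000/90000
a=fmtp:96 profile=0;level=10
a=control:trackID=1
m=audio 0 RTP/AVP 97
a=rtpmap:97 AMR/8000
a=fmtp:97 octet-align=1
a=control:trackID=2
```

The client fetches this description over RTSP (DESCRIBE) and then issues SETUP and PLAY requests for the listed tracks.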

3.4.8 Creating and Delivering 3GPP and 3GPP2 Content
Content in 3GPP and 3GPP2 formats can be prepared and delivered using a number of available industry products. As an example, Apple QuickTime provides a platform for the creation, delivery, and playback of 3GPP and 3GPP2 multimedia content. It offers native support of mobile standards as well as a full suite of tools for ingesting, editing, encoding, and stream serving. Apple's QuickTime Pro, which can be installed on Windows or Macintosh computers, allows the user to ingest video and audio files, to compress them using H.264 or 3GPP(2) codecs, and to prepare multimedia files using Dolby 5.1 or AAC audio. The output files can be saved as 3GPP for delivery over mobile networks as well.

Figure 3.9: Screenshot of QuickTime video file creation. (Courtesy of Apple Computers)

Apple's QuickTime Streaming Server (QTSS) provides the capability to stream MPEG-4, H.264, or 3GPP files over IP networks using the open standard RTP/RTSP protocols. The QuickTime family also includes other tools, such as Xserve, on which playlists can be loaded with 3GPP, MPEG-4, or MP3 files so that the server can be used as an Internet or mobile network TV station.

Limitations of 3GPP
.3GP files have many limitations, many of which were intentional, to make encoding, decoding, and transmission in the mobile environment practical. The picture size of 176×144 and encoding at 128 Kbps are fine for video calls but are proving inadequate as networks advance beyond 3G (HSDPA or EVDO) toward 4G and applications include video clips and movies. Many handsets now support the WVGA (800×480) format or higher and can display much higher-resolution video than is the norm in 3GPP encoding. Figure 3.10 shows how 3GPP content looks at QCIF (176×144), WVGA, and a larger picture size as it is scaled up. Notice the compression artifacts and loss of resolution, which are unacceptable at larger screen sizes.
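The inadequacy is easy to quantify: at a fixed bit rate, the bits available to encode each pixel fall as the resolution grows. A rough comparison, assuming WVGA at 800×480 and an illustrative frame rate of 15 frames/s (both figures are assumptions for this sketch, not 3GPP requirements):

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average bits available to encode each pixel of each frame."""
    return bitrate_bps / (width * height * fps)

qcif = bits_per_pixel(128_000, 176, 144, 15)  # 3GPP video-call rate at QCIF
wvga = bits_per_pixel(128_000, 800, 480, 15)  # same bit rate spread over WVGA

print(round(qcif, 3), round(wvga, 3))  # 0.337 0.022
```

Roughly fifteen times fewer bits per pixel are available at WVGA, which is why QCIF-rate encoding scaled to a large display shows the artifacts seen in Figure 3.10.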

Figure 3.10: 3GPP video on different display sizes.


Quick Facts
Other Common Container Formats

Container Format             Video/Audio Codecs
3GPP, 3GPP2                  MPEG-4 SP, H.264, and HE-AAC, AMR WB/NB
Apple QuickTime (.mov)       MPEG-4 SP, H.264, and HE-AAC, AMR WB/NB
Microsoft ASF/ASX            VC-1 and WMA
Flash Video (.fli/.flv)      FLV (Sorenson Spark or H.264) and MP3
RMVB/RA/RM (China)           RV and RA
OGG/OGM                      Vorbis, FLAC
DVD (VOB)                    MPEG-2, AAC

3.5 Internet Video
"Internet TV" is commonly used to describe video (and associated audio) that can be viewed on the Internet. Viewing may be as simple as opening a website by entering its URL in a browser; this type of viewing falls under the category of HTTP access to web content (HTTP streaming). Video can also be streamed from a streaming server using the RTSP protocol, or played by launching a media player that uses RTSP streaming or proprietary formats (Windows Media Player, Adobe Flash Player, or RealVideo, for example). Any video files that are embedded in the HTML content of a page and natively supported by the browser (e.g., Internet Explorer, Firefox, Opera) will "play" on the web page. Apart from live streaming, video files can also be downloaded and played by launching the appropriate player (Windows Media Player, Adobe Flash Player, RealPlayer, Apple QuickTime, Google Video Player, etc.). Downloads are practical only for short videos, owing to the file size and the time involved in downloading.

3.5.1 YouTube
The file formats and resolutions of videos available on the Internet are, in most cases, site-dependent. For example, YouTube, which is designed to accept video uploads from camcorders and cellphones, accepts video in .wmv (Windows Media), .avi (Audio Video Interleave, used by Windows for video), .mov (QuickTime movie), .3gp, and .mpg (compressed MPEG-2) formats. When users download video from YouTube, the format used is Flash Video (.flv).

3.5.2 Google Video
Google Video can be downloaded in three formats: .avi, .mp4, and .gvi. Video from the website can also be streamed by using a shortcut that points to a .gvp file. The downloaded video can be played using the Google Video Player (a free download) if it is in the .avi or .gvi format, and using a DivX player if it is in the .mp4 format. If the browser used is Flash-enabled, Flash Video (.flv) can also be played. Videos uploaded to such websites can vary in resolution depending on the source, which can range from a cellphone with a VGA camera to a digital camera with 5 MP resolution or even HD. These are usually converted to a common format and scaled down in resolution so that viewers can retrieve them universally. The common screen resolutions supported on mobile devices are 320×240 (QVGA) and 176×144 (QCIF); YouTube automatically converts videos to the 320×240 format. The videos can also be downloaded to other devices such as iPods or the Sony PSP. Internet video is in effect TV (or video) on a PC with "best-effort delivery." There is no end-to-end quality-of-service control to ensure that video can be viewed without serious degradation or interruption, no encryption of services to generate pay-TV revenues, and generally no TV business model.

3.5.3 Apple HTTP Live Streaming
The mainstay of Apple's streaming applications has been its QuickTime Streaming Server (QTSS), which streams video using the RTSP protocol. However, RTSP streaming in mobile environments has been difficult, owing to highly variable bit rates; in fact, the iPhone 3GS does not even support RTSP-based streaming. RTSP video can also be affected by firewalls that restrict this type of traffic. Apple has now introduced HTTP Live Streaming, in which the media file is broken down into segments of about 10 seconds each and packaged in an MPEG transport stream. These segments are picked up by the HTTP player in a receiver device such as the iPhone 3G, which begins playout after three or four segments are received and then keeps requesting further segments over HTTP. There is no restriction on content type, and the media player can negotiate the type of stream that best fits the bandwidth available on the access network, such as Wi-Fi or 3G. With improvements in core and access networks, the Internet can be used for good-quality video streaming, particularly in areas where the access networks provide speeds of 4 Mbps or higher. Content delivery networks (CDNs) such as iBEAM™ and Limelight Networks® provide content caching at the edge so that streaming video can be delivered at the highest data rates. Some operators now offer services over the Internet using CDNs to deliver content on a unicast basis. The use of Flash Video for providing TV services over the Internet (replicated by the CDNs) has recently become a very popular method of video delivery. However, such content is viewable only where the access networks support the speeds needed for live streaming video.
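The player drives the segment-by-segment retrieval described above from a plain-text index ("playlist") file. A minimal sketch for roughly 10-second segments, with placeholder segment names, is:

```
#EXTM3U
#EXT-X-VERSION:1
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
segment0.ts
#EXTINF:10,
segment1.ts
#EXTINF:10,
segment2.ts
#EXT-X-ENDLIST
```

For live streams the #EXT-X-ENDLIST tag is omitted and the playlist is refreshed as new segments are produced; for adaptive delivery, a master playlist points to several such playlists at different bit rates.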


3.6 Flash Lite™
Nothing signifies the importance of Flash Video more than its use on YouTube as the exclusive format for delivery of video. You can upload content in a variety of formats (such as .mpg, .3gp, .avi, or QuickTime), but YouTube has standardized on Flash Video for streaming (or download) from the site. It also supports Flash Lite for video delivery to mobile devices. The number of handsets shipped with the Flash Lite runtime and players preinstalled is in the hundreds of millions and is likely to exceed 2.5 billion by 2010. This makes it an important format for anyone targeting the delivery of video content to mobile devices. The fact that YouTube now provides access to full-length movies and other video content makes Flash Video a key format for delivery of mobile multimedia. In addition, many content delivery networks now use Flash Video. Flash Video files are denoted by the .flv extension. Flash Video is generated by high-compression codecs and is ideally suited for downloading or streaming on the web. Flash version 6 used a proprietary codec, the Sorenson codec; Flash 8 and higher versions use the On2® VP6 codec, and more recent Flash players also support H.264. Adobe Flash Lite is a runtime version of Flash that has been optimized for mobile devices.

Quick Facts
Adobe Flash Lite
Encoding Specifications for Flash Lite Content
Encoder Support: Sorenson Spark and On2 VP6
Screen Resolution: 480×320 or 320×240, 176×144; other resolutions possible.
Flash Lite Player on Mobile Phones
The Flash Lite 3.1 Distributable Player is a runtime version of Adobe Flash Player for mobile devices. With a size of about 400 KB (for Symbian S60), it does not require the Flash Lite software to be preinstalled on phones.
Generation of Flash Lite Content
Flash content targeted for mobile devices can be generated by the Adobe Flash Media Encoding Server, which can generate Flash content for both mobile devices (Flash Lite) and regular Flash players. Most other video editing packages support saving video in the Flash Lite format; these include Adobe Flash CS3 Professional (with the Flash Lite upgrade) and higher versions. Adobe After Effects® can be used to export Flash content (.swf) as Flash Video (.flv).
Adobe Mobile Client
Adobe Mobile Client is the new generation of the mobile software platform from Adobe that includes a Flash-based rendering engine and device APIs. An example of a mobile phone based on the Adobe Mobile Client is the LG KP500 Cookie phone.


Delivery of Flash Lite Content
Flash Lite content can be delivered using:
● Streaming video using RTSP
● The Flash Media Server (with Flash Lite 3.0)
● Delivery of video through an HTTP connection
● Video download and play from local storage

YouTube Access for Mobile Devices http://m.youtube.com

3.6.1 Mobile TV Using Flash Video
Delivery of video to mobile devices using Flash Video is now very common. Most user-generated content (UGC) sites now use Flash Video for delivery of video content to desktops as well as mobiles. One of the early mobile TV service providers using Flash Lite is SingTel, with its mio TV service, which operates on SingTel's 3G network. Users need a supported handset with Flash Lite 2.0 or above and must download the mio TV application before they can use the service; other users can download the player from the http://www.adobe.com/products/flashlite/ website. mio TV channels were offered at Singapore $6.00 per month in bundled offerings at the time of the service's launch in May 2008, with no separate data charge. The Flash content is streamed from an access point to which the application connects. Using Flash, subscribers have access to higher-quality content than would have been possible with .3GP codecs, owing to superior compression at the same bit rates. Many other operators are moving from proprietary players on mobiles to Flash players. An example is the web TV service provided by Babelgum™ in the United Kingdom, Italy, and the United States; its freely downloadable application can be used on a range of mobile devices, including the iPhone and the PSP.

3.7 DivX Mobile
Movies downloadable in the DivX format have been very popular in home theaters and Internet-based TV. DivX Mobile is now available on many DivX-certified phones, and many Qualcomm chipset–based mobile phones have DivX incorporated. It can also be downloaded free from the DivX website for a wide range of mobile phones. The DivX Mobile player is available for download for Windows Mobile 5 and 6, Symbian S60 (third edition and higher), UIQ 3.0, and higher versions. DivX-enabled phones can play DivX video directly from any website hosting content in the DivX format. Alternatively, videos can be downloaded from the website http://mm.divx.com, which also has Google Video as a partner.


Figure 3.11: DivX mobile. (Courtesy of DivX)

FAQs on DivX Mobile
1. How is DivX Mobile content created?
DivX Mobile content can be created with the DivX converter that is downloadable from http://www.divx.com. This software can convert DVDs or other video formats to DivX Mobile. While converting, various parameters such as picture resolution, audio bit rate, file size, and mobile profile can be selected; the default mobile profile is 320×240 (QVGA). There are also third-party converters available, such as the SUPER™ converter.
2. Which phones can play DivX Mobile content?
All phones that are DivX certified can play DivX content. Examples are the Samsung Omnia, the SGH-F500 (Ultra Video), the LG Viewty, and the Cookie. The DivX Mobile player can be downloaded on phones that are not DivX certified, including Windows Mobile, UIQ, and Symbian phones.
3. Can DivX content be streamed?
DivX content is meant for viewing on a video-on-demand (VOD) basis. The DivX encoder generates file sizes that are small enough for Internet download. However, the file can begin playing after the download has started; it is not necessary to download the entire file before viewing. Typical bit rates needed are 400 Kbps for video and 128 Kbps for stereo audio.
4. From which sites can DivX Mobile content be downloaded?
DivX Mobile content is available for download from http://mm.divx.com as well as a number of other mobile content websites, such as the Nokia Ovi store (store.ovi.com), Cinemanow.com, and others.


5. How can content producers sell content in DivX format online? Content producers can use an open hosted service such as an “Open Video System.” The platform provides encoding, encryption, hosting, and online delivery of content. Content producers can thus open their own video stores.

3.7.1 Microsoft Silverlight®
Just as Flash Player has been available for free download or original equipment manufacturer (OEM) installation for many years and is now present on a majority of devices, Microsoft has been quietly but firmly increasing the availability of its web application framework, Silverlight. Many events, such as the Indian Premier League 2009, could be viewed only if the Silverlight plug-in was downloaded. Version 3.0 of Silverlight was released in July 2009 and is compatible with Windows and Mac OS X; mobile versions compatible with Symbian S60 and Windows Mobile 6 are being released. Silverlight integrates multimedia, animations, graphics, and interactivity in a single runtime environment, providing a very efficient way to present rich media applications. It supports the WMV, WMA, AAC, H.264, and VC-1 multimedia formats. Applications for Silverlight are written using a declarative XML-based language (XAML) and can be developed using the .NET framework. Visual elements are very easy to manage using XAML and are represented as "visual trees" that can be rendered along with other vector or bitmap graphics.
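As a sketch of the XAML style, a minimal page that plays a video inside a vector-drawn frame might look like the following (a hypothetical fragment: the element and property names are from the Silverlight presentation framework, while the source file name is a placeholder):

```
<Canvas xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <!-- Vector graphic and media element rendered in the same visual tree -->
  <Rectangle Width="340" Height="260" Stroke="Black" StrokeThickness="2" />
  <MediaElement x:Name="player" Source="clip.wmv" AutoPlay="True"
                Width="320" Height="240" Canvas.Left="10" Canvas.Top="10" />
</Canvas>
```

The MediaElement here is an ordinary node of the visual tree, which is what lets video mix freely with animations and vector graphics.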

Figure 3.12: Microsoft Silverlight. (Courtesy of Microsoft®)


3.7.2 Microsoft Smooth Streaming
Much like Apple, Microsoft has adopted an HTTP-based streaming format called Smooth Streaming (something sorely needed by users annoyed by frequent buffering). Smooth Streaming dynamically switches the quality of the video being delivered based on the available bandwidth and CPU load. Microsoft Smooth Streaming, based on adaptive HTTP streaming, is an extension to Microsoft's Internet Information Services (IIS 7.0 Media Services) and signals a departure from its ASF format.

3.8 Rich Media: Synchronized Multimedia Integration Language (SMIL)
Many applications require not just the display of a few images, graphics, or audio files but also that these files be synchronized and presented as integrated media. An example is a voice-over associated with an image, or speech associated with a presentation. This type of synchronization enables the delivery of "rich media" and can effectively reproduce a playlist running at the originating end, much like a TV station. Synchronized Multimedia Integration Language (SMIL), pronounced "smile," is one technique that can accomplish this objective. SMIL is supported by Real as well as Apple's QuickTime architectures. It is also possible to add media synchronization to HTML by using XML to describe parameters for synchronization of streaming video, images, and text.

Figure 3.13: Rich media presentation using SMIL.


In the absence of a synchronization language, the images and clips are delivered as separate units that, when opened by users in differing sequence, do not present the integrated picture the sender may have desired. SMIL is a World Wide Web Consortium (W3C) standard that allows the writing of interactive multimedia applications involving multimedia objects and hyperlinks, with full control of the screen display. SMIL can be played out by SMIL-compatible players, and the transmission can either be in streaming mode (PSS) or be downloaded, stored, and played. SMIL is similar to HTML and can be created using a text-based editor (SMIL files have the .smil extension). The language has parameters that define the location and sequence of displays and prescribe the content layout, i.e., windows for text, video, and graphics. As examples, SMIL has elements for sequencing of clips (seq), parallel playing of clips (par), switching between alternate choices such as languages or bandwidths (switch), placement of media clips on the screen (region), and others. Detailed SMIL language authoring guidelines and tools are widely available.

Figure 3.14: SMIL-based content streaming.

A typical case of SMIL may be the streaming of two video clips one after another followed by a weather bulletin containing video, a text window, and a text ticker. The following code is an example of such an SMIL file using Real media files.
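A minimal sketch along those lines follows (the clip names, durations, and region geometry are placeholders; the elements and layout model are standard SMIL 2.0):

```
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <root-layout width="320" height="300"/>
      <region id="video"   left="0" top="0"   width="320" height="240"/>
      <region id="textwin" left="0" top="240" width="320" height="40"/>
      <region id="ticker"  left="0" top="280" width="320" height="20"/>
    </layout>
  </head>
  <body>
    <seq>
      <!-- Two clips play one after another -->
      <video src="clip1.rm" region="video"/>
      <video src="clip2.rm" region="video"/>
      <!-- Weather bulletin: video, text window, and ticker in parallel -->
      <par>
        <video src="weather.rm"  region="video"/>
        <text  src="forecast.rt" region="textwin"/>
        <text  src="ticker.rt"   region="ticker"/>
      </par>
    </seq>
  </body>
</smil>
```

The head defines the layout regions, while the body's seq and par containers express the temporal behavior, exactly the head/body split described later in Section 3.9.4.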








In Japan, NTT DoCoMo was one of the earliest implementers of 3G technology, and its i-mode and FOMA services present interesting implementations of practical technologies to deliver rich calls and messaging using video, audio, text, and graphics. In Korea, the T-DMB services are based on the use of SMIL for presentation of information and multimedia content. At the basic level, the i-mode service provides simultaneous voice communications and packet data transmissions or Internet access. The i-motion service provides for transmission (or streaming) of a page composed for a mobile screen that can contain HTML, graphics files (.gif or .jpg), an i-motion file (streaming video in the ASF format), and a sound file (including MIDI-synthesized audio). An example of services provided on the FOMA mobile network is given in Figure 3.15. For mobile phones to be able to play back content in the 3GPP (or 3GPP2) SMIL format, the phones need to support this functionality. An example of a mobile phone with SMIL support on the FOMA network in Japan is the Fujitsu F900i.

Figure 3.15: NTT DoCoMo i-motion rich media delivery. (Courtesy of NTT DoCoMo)

The phones for FOMA in Japan were supported by NetFront©, a mobile browser and content viewer module from ACCESS Japan (http://www.access.co.jp). The NetFront application supported, amongst other features, SMIL and SVG.

3.9 Delivering Multimedia Content
In the previous sections we looked at some of the file formats used in mobile media. The formats presented, although the most common, are by no means the only ones; many other media players, format converters, and encoders prevail in this space. We have, however, covered by far the majority of the common formats. We will next look at the modes of delivery of multimedia content. This includes call setup, media transfer, and call release in a typical case. A number of options may be available for transferring and playing media, such as download (or file transfer), delivery of streaming content, or HTTP access (web-based access). During playback a number of controls may be available to the viewer, such as play, pause, rewind, or skipping parts of the video to start play at the next selected point. Based on the type of connection, i.e., a 3G connection established per 3GPP or an Internet connection via Wi-Fi, the modalities of delivery of multimedia content differ considerably. A potential service provider needs to understand these diverse modes of delivery in order to maximize customer reach via various networks.

3.9.1 Importance of Target Devices
When considering delivery of multimedia content, the importance of target devices cannot be overemphasized. In the case of networks using 3GPP standards, virtually all devices on the network that are 3G-enabled can be targeted, as these follow the industry-wide standards and protocols recommended by 3GPP. The same is not the case, however, when targeting devices using alternative technologies, such as Adobe Flash Lite, DivX, Windows Media, RealVideo, or SMIL. For historical reasons, support in mobiles for Flash, DivX, Java, Windows Media, Real, or other software and players may be specific to certain devices only; it may also be operator-specific. For example, the use of Flash Lite for animation has been highest in Japan (NTT DoCoMo, KDDI, and Vodafone [Japan]). Some content types are network-specific (such as Flash Lite for i-mode) or device-specific (such as for the PSP, personal media players, and the PlayStation 3). The manner in which video is handled in the player may also differ considerably, such as in different versions of Flash Lite or in operating systems such as BREW™. Supported device type information is best obtained from the websites of Adobe (Device Central: http://www.adobe.com/go/dconline) and DivX (http://www.divx.com/en/mobile/products). Additional information on the classification of mobile handsets and supported capabilities is given in Chapter 15.

3.9.2 Streaming in 3G Networks
Streaming, an important application, has been standardized for 3G networks under the 3GPP-PSS. The 3GPP-PSS defines the complete protocol stack for call establishment and the transfer of data using the IP layer. The audio and video file formats, as well as formats for graphics, scene description, and presentation of information, are also described. Complete protocol stacks such as 3GPP-PSS lend uniformity to call setup and multimedia data transfers across various networks, even though the networks may be based on different air interfaces.


Figure 3.16: 3GPP-PSS protocol stack.
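The call setup that this stack prescribes can be sketched as an RTSP exchange. The URL, ports, and session identifier below are illustrative only:

```
C->S: DESCRIBE rtsp://example.com/clip.3gp RTSP/1.0
      CSeq: 1
      Accept: application/sdp

S->C: RTSP/1.0 200 OK
      CSeq: 1
      Content-Type: application/sdp
      (SDP body describing the audio and video tracks)

C->S: SETUP rtsp://example.com/clip.3gp/trackID=1 RTSP/1.0
      CSeq: 2
      Transport: RTP/AVP;unicast;client_port=4588-4589

S->C: RTSP/1.0 200 OK
      CSeq: 2
      Session: 12345678

C->S: PLAY rtsp://example.com/clip.3gp RTSP/1.0
      CSeq: 3
      Session: 12345678
      Range: npt=0-

S->C: RTSP/1.0 200 OK
```

After the PLAY response, media flows as RTP over UDP on the negotiated ports, with RTCP carrying control and quality feedback, matching the transport layers of the stack in Figure 3.16.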

3.9.3 Rich Media and 3GPP 3GPP recommendations support rich media applications. SMIL can be used in end-to-end packet-streaming networks with 3G technology (3GPP-PSS), as has been defined in 3GPP TS 26.234. These specifications are based on SMIL 2.0.

3.9.4 Messaging Applications in 3GPP
In the 3G domain, with a multiplicity of networks and devices, it has also been necessary to precisely define conformance for the Multimedia Messaging Service (MMS). This has been done in the MMS conformance document, which specifically addresses the coding and presentation of multimedia messages. For interoperability, the set of requirements has been defined at the following levels:

● Message content
● Attributes of presentation language and presentation elements
● Media content format
● Lower-level capabilities

MMS SMIL has been defined in the conformance document. A SMIL document consists of two parts: the head and the body. The head contains the author-defined content control, metadata, and layout information; the information contained in the head does not convey the sequences or parameters in which the various elements will be presented. The body contains the information related to the temporal or linking behavior of the document.

Figure 3.17: SMIL document structure.

3.9.5 Examples of Mobile Networks Using 3GPP Content
In Japan, NTT DoCoMo launched its 3G service, based on the use of 3GPP content, in 2001. The service was called FOMA (Freedom of Mobile Multimedia Access). FOMA permitted circuit-switched data transmission at speeds up to 64 Kbps and packet data transmission up to 384 Kbps, and provided multitasking for up to three activities simultaneously (a circuit-switched voice call, i-mode, and use of a terminal function such as the scheduler, calculator, or address book). A strong feature of the service was the use of single-key functions. Video and audio files could also be downloaded and played. One of the services offered was "Visual Net," which enabled up to eight people to connect to a call simultaneously and permitted the mobile window to show a single user or up to four users at a time. M-Stage V-Live©, as the name suggests, was launched as a streaming service featuring one-to-many video delivery. i-motion mail is a service for sending multimedia content, including video clips, to other FOMA-compatible mobile phones. Mobile handsets for 3G services can play 3GPP content without any special players; however, most handsets provide for additional players to handle other downloaded content. As an example, the Fujitsu F902iS phone can play Windows Media 9, 3GPP, and i-motion content using technology supplied by PacketVideo. Most handsets are also equipped with Bluetooth, infrared, and contactless IC card technology, which permits them to be used as mobile wallets for a wide range of applications.

3.9.6 Multimedia Formats for “Broadcast Mode” Mobile TV Networks The 3G file formats followed an evolutionary path from 2.5G networks such as GPRS and EDGE to 3G and were based on the need to support low-bit-rate connectivity at the lower end. Broadcast-based multimedia networks such as ISDB-T and DVB-H (discussed later in the book) were not constrained by the limitation to use codecs that needed to give very low bit rates, e.g., for conversational speech. Hence the use of H.264 for video and AAC for audio is quite common in these networks. Nevertheless, the phones that are used to receive such broadcasts are also used for conversational calls and the use of 3GPP formats is universal.

3.9.7 Multimedia Content Delivery Services Using Flash Lite
Multimedia content using Flash can be delivered by any of the usual technologies: HTTP (web access), progressive download, or file download. Adobe has also developed a protocol for delivery of streaming media between Flash Media Servers and Flash media players, called the Real Time Messaging Protocol (RTMP). RTMP is a proprietary protocol that uses TCP-based delivery with interaction between the server and the client (the Adobe Flash Player). The most important technology for delivery of Flash Lite video today, however, remains progressive download.

Flash progressive download
Progressive download of Flash Video (.flv), as the name implies, progressively downloads content from a media server, and playback can begin almost immediately. The content is cached locally at the receiving client; however, users have limited capability to skip ahead or rewind in the video. There is no specific limitation on the length of video clips, and the implementation is inexpensive, as any web server can be used to host the files. DRM issues in this case need to be handled outside of Flash.

Flash Video streaming (RTMP)
In Flash Video streaming using RTMP, a TCP connection is maintained between the Flash Media Server (FMS), which implements the RTMP delivery protocol, and the media player (which functions as a client to the media server). Unlike in progressive download, the video is not stored on the client's hard disk, which makes RTMP more secure: no files remain on the receiving client after the streaming is completed. Owing to the proprietary nature of RTMP, it is common to use hosting services providing Flash Video Streaming Services (FVSS). These servers are usually hosted by Adobe-certified content delivery networks (CDNs), such as Akamai, Limelight, and others.
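The "play while downloading" behavior of progressive download comes down to a simple buffering rule: playback can start once enough of the file has arrived that, at the current transfer rate, the download stays ahead of the playhead. A toy constant-rate model (all rates and sizes below are invented for illustration):

```python
def startup_delay(file_size_bits, duration_s, download_bps):
    """Seconds of buffering needed before playback so the download
    never falls behind the playhead (constant-rate model)."""
    if download_bps * duration_s >= file_size_bits:
        return 0.0  # link is faster than the media bit rate: play immediately
    # Need: download_bps * (delay + duration_s) >= file_size_bits
    return file_size_bits / download_bps - duration_s

# A 60 s clip encoded at 400 Kbps, fetched over a 300 Kbps link:
delay = startup_delay(400_000 * 60, 60, 300_000)
print(delay)  # 20.0 seconds of buffering before smooth playback
```

Real players refine this with variable link rates and seek behavior, but the same relationship between media bit rate and link rate governs when playback can begin.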

3.9.8 3GPP Higher-Order Releases

3GPP release 7

HSDPA, a high-speed data service unveiled in 3GPP release 5, has been very successfully deployed, with more than 150 commercial networks operational by 2007. 3GPP release 7 (the work on which was completed in 2007) provides further enhancements to HSPA services (a combination of HSDPA and HSUPA). Many of the enhancements in release 7 are designed to provide better support for interactive services such as picture and video sharing. It also aims to improve real-time conversational services by providing advanced features for voice and video over IP. The enhancements include, amongst others:

● Higher-order modulations (HOM), i.e., 64 QAM vs. 16 QAM in release 6
● Continuous packet connectivity (CPC)
● Multiple Input Multiple Output (MIMO)
● Evolved EDGE interfaces
● Enhanced receivers

The enhancements provided by release 7 are backward-compatible with previous releases (Rel-99/Rel-5/Rel-6). The continuous packet connectivity feature is designed to increase the number of users that can be accommodated in a cell in active and inactive states. This is done by reducing the overhead for HSPA users and by introducing discontinuous uplink transmissions and downlink receptions (termed DTX/DRX).

3GPP release 8

The specifications for 3GPP release 8 were frozen in December 2008. This is essentially a release that goes beyond the current network architectures to an all-IP network and introduces LTE technology.

3GPP2 networks

At the same time, there has been a continued harmonization effort with the networks that have evolved to 3G from the CDMA-based networks, a subject that has been addressed by


the 3GPP2. Both groups have been attempting to harmonize the radio access and terminal design as well as the core network design. In 1999, in joint meetings known as the “Hooks and Extensions,” interoperability between the radio access technologies and the dual-mode terminals was finalized. After release 5 of 3GPP in 2001, there was a joint focus on common definitions for channels and traffic models, as well as common physical requirements for terminals. There was also harmonization in the use of services across the two architectures—HSDPA of 3GPP and 1xEV-DV of 3GPP2. The all-IP network and radio interface harmonization work progressed throughout the following years. The use of IPv6 in 3GPP and IPv4 in 3GPP2 is also being harmonized for interworking.

3.10 Graphics and Animations in the Mobile Environment

Graphics and animations form a very important part of any mobile application. The information can be of any type—weather, news, games, cartoons, music with animated graphics, animated movies, online shopping options, and much more. The quality of presentation improves manifold when information is presented as graphics or animated video. The technologies for graphics and animations have developed along with the Internet and are well established, with hundreds of millions of users. Websites have been using Flash graphics and animations or Java® applets for animated images and device-independent applications.

3.10.1 Graphics

There are two methods for depicting graphics. The first is called “raster graphics,” where images are represented as bitmaps. Bitmap images are stored with the details of all pixels that describe the image. For example, a 640 × 480-pixel image is fully defined by 640 × 480 = 307,200 pixels. If each of these is represented by 3 bytes, the image needs about 921 KB. Although it may be possible to use compression (such as JPEG) to reduce the file size, the graphic still requires considerable memory for storage and bandwidth for its transportation. Moreover, it cannot be scaled up easily, as the same 307,200 pixels that might be adequate for a VGA screen would be totally inadequate for a screen scaled 50 times.

The alternative method of representation is vector graphics. In vector graphics, all shapes are represented by mathematical equations. The information for generating the graphics is conveyed as equations that are computed (executed) prior to display. For example, a circle may be represented by its center point, radius, and fill color. In this case, the circle can be scaled to any level by varying the radius without loss of quality. The use of vector graphics also requires much smaller files, as only the executable instructions need


to be conveyed, instead of thousands (or millions) of pixels. Vector graphics used on websites are therefore fast to load, as a full picture need not be downloaded. The representation also produces sharp and crisp images, as no resolution is lost to image compression. There are many programs that can be used for the preparation of vector graphics. These include Adobe Flash, Adobe Photoshop CS3, CorelDRAW, and others. When graphics or animations (movies) are produced by vector graphics–based programs, they can be played back using a corresponding player. For example, Flash source files have the extension .fla, and the graphics or animations produced have the extension .swf (for Shockwave Flash). The receiving device (e.g., a mobile phone) needs a corresponding player that can play .swf files.
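The storage arithmetic behind the raster example above, and the contrast with a vector description, can be sketched in a few lines of Python. This is purely illustrative arithmetic following the figures in the text (640 × 480 pixels, 3 bytes per pixel, a 50× scale-up); it is not part of any mobile toolchain.

```python
# Storage needed for an uncompressed bitmap vs. a vector description.

def bitmap_bytes(width: int, height: int, bytes_per_pixel: int = 3) -> int:
    """Every pixel must be stored explicitly in a raster image."""
    return width * height * bytes_per_pixel

pixels = 640 * 480                      # 307,200 pixels
raster = bitmap_bytes(640, 480)         # 921,600 bytes (~921 KB)

# A vector circle, by contrast, is fully described by a few numbers:
vector_circle = {"cx": 320, "cy": 240, "r": 100, "fill": "#ff0000"}

# Scaling the vector shape only changes the radius; scaling the bitmap
# 50x in each dimension multiplies its pixel count by 50^2 = 2500.
scaled_raster = bitmap_bytes(640 * 50, 480 * 50)

print(pixels, raster, scaled_raster)
```

The same handful of parameters describes the circle at any scale, which is exactly why vector content suits screens of widely varying sizes.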

3.10.2 Scalable Vector Graphics for Mobiles (SVG-T)

The leading mobile network operators and handset manufacturers were keen to provide standards-based SVG on mobile phones so that applications could work across various networks and handsets. At the same time, any standard to be adopted needed to heed the limited resources on a handset. The mobile SVG profile, also called SVG-T (SVG Tiny), was adopted by the 3GPP and is now recommended for use in mobile phones conforming to the standards (http://www.w3c.org/TR/SVGMobile). Major network operators and mobile phone manufacturers have adopted SVG for the depiction of graphics and animation in mobile multimedia applications. The first formal adoption of the SVG 1.1 profile for mobiles (SVG-T) was in 2003 by the W3C; following that, it was adopted by the 3GPP as the 2D graphics standard for applications such as MMS. In 2005, work moved to the SVG 1.2 version, which the W3C issued as a recommendation in December 2008. SVG 1.2 remains the standard adopted for use even in release 7 of the 3GPP.

SVG-T is a rich, XML-based language and, by the very nature of scalable vector graphics, can automatically resize to fit any size of mobile display. It can provide time-based (rather than frame-based) animation for accurate presentation and supports various commonly used video and audio formats and graphics files (JPEG, PNG, etc.). Among the powerful features supported are the “mouse-style pointer” and “pointer click,” giving the user the control to steer the application through rich graphics.
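To illustrate how compact an SVG Tiny scene is, the following Python sketch assembles a minimal SVG-T document containing a single scalable circle. The viewBox of 176 × 208 is simply a typical handset screen size taken from this chapter; the markup is a hand-written illustration, not the output of any authoring tool.

```python
# Build a minimal SVG Tiny document: one circle defined entirely by its
# center, radius, and fill color (cf. the vector-graphics discussion above).
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # serialize without a namespace prefix

svg = ET.Element(
    f"{{{SVG_NS}}}svg",
    {"version": "1.2", "baseProfile": "tiny",      # SVG Tiny 1.2 profile
     "viewBox": "0 0 176 208"})                    # logical canvas; scales freely
ET.SubElement(svg, f"{{{SVG_NS}}}circle",
              {"cx": "88", "cy": "104", "r": "40", "fill": "red"})

doc = ET.tostring(svg, encoding="unicode")
print(doc)
```

The whole scene fits in a couple of hundred bytes, and a conforming SVG-T player renders it crisply at any display size simply by mapping the viewBox to the physical screen.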

3.10.3 Animation and Application Software

A considerable amount of software development is done using either Java or Adobe Flash. These tools help generate applications that run uniformly in different environments and are appealing, owing to their animation support, which helps in developing “lively” applications. It is therefore no surprise that these software tools find extensive use in developing applications for mobile phones.


3.10.4 Adobe Flash Lite

The Flash Lite version of Adobe Flash takes into account the mobile environment of small screen sizes (e.g., 176 × 208 or 320 × 240), lower color depth, and lower transmission bandwidth. The applications are correspondingly made lighter in terms of memory usage and processor requirements. Adobe’s Flash Lite software has been available on mobile devices since 2003. It made its debut as Pocket PC Flash, and Flash Lite 1.1 was released in the same year. Flash Lite applications help develop animation, games, mobile wallpaper, ringtones, and other attractive content. Japan has been a major user of Adobe Flash Lite, and nearly 50% of Japanese phones (on NTT DoCoMo, Vodafone KK, and KDDI) have Flash Lite players. Flash Lite 2.0 is based on Flash 7/8 and Flash Lite ActionScript 2.0. It features text enhancements, XML, device video, and mobile shared objects, amongst other capabilities. The latest release is Adobe Flash Lite 3.1, released in 2009.

3.10.5 Java Mobile Edition (J2ME)

Java is a competing set of software tools providing a rich software development environment. The Java 2 Mobile Edition (J2ME) Mobile Information Device Profile (MIDP) was conceived as a basic platform or toolkit for mobile devices that operate in an environment where a reliable connection cannot be guaranteed at all times. The Connected Limited Device Configuration (CLDC), which is the basis of MIDP, is designed to provide Java support for mobile devices. The latest release of MIDP is MIDP 2.0, which provides advanced support for customer applications, such as an enhanced security model and HTTPS support. It also provides for OTA (over-the-air) deployment of applications and enhanced graphics support.

3.11 Mobile Multimedia Applications

A number of applications have been developed over time for use on mobile networks. Many of these have been around for years, while others are relatively new. Some common applications, such as SMS, MMS, ringtones, and MIDI, form an inseparable part of mobile networks. Other applications, such as Flash Video streaming, mobile video downloads, RSS feeds, interaction with Web 2.0 sites such as Facebook, and advanced media players, are examples of recently developed applications in the domain of media and content. The applications may be preinstalled or delivered to the mobile phone using the Wireless Application Protocol (WAP) or Wireless Markup Language (WML) with an MMS. The application is then responsible for setting up and controlling the connection and enabling


video transfer. In most cases, such applications would reside as an icon on the mobile phone screen. Mobile TV is another application that uses the 3GPP-PSS protocol with high-compression file formats defined by 3GPP to enable a continuous stream to be delivered, decoded, and displayed as TV on a mobile device. Examples of some of the applications are given in Figure 3.18.

Figure 3.18: Mobile multimedia applications.

MMS (Multimedia Messaging Service) is an extension of the SMS service and is defined as a new standard by the 3GPP. MMS messages can carry multiple content types, such as formatted text, pictures (.jpg or .gif formats), audio, and video. The information can be formatted for presentation using SMIL (Synchronized Multimedia Integration Language). There is no limit on file size in the standards; however, network operators may limit the message size (e.g., to 128 kB) and the video duration (e.g., to 15–30 seconds).

Video clip download is a commonly used service on mobile networks. The user sends a request for a clip, either by SMS or over a WAP connection, and the content is delivered by MMS or downloaded using WAP. The phone is expected to have an appropriate player, such as Real® or Windows Media, to play back the content downloaded as a video clip.


Figure 3.19: Common Mobile Multimedia Services.

Video streaming can be used to receive live content (such as a TV program, or video generated by a traffic or security camera) as well as stored content; it is essentially an on-demand service. Video streaming services in 3GPP have been standardized through the Packet-Switched Streaming (PSS) protocols and are delivered on a unicast basis from the server to the user.

Video calling, instead of a plain voice call, can be used if both parties have camera phones. Video calling standards have been formalized as the 3G-324M standards, which essentially use the network for a circuit-switched connection, ensuring a constant guaranteed bit rate. The video calling service can be extended to include videoconferencing.

3.11.1 Access to the Web from Mobile Devices

More and more mobile phones are now used to access the web using browsers installed on the phones. Internet connectivity is usually via GPRS, EDGE, 3G, or WiFi, although newer access technologies such as mobile WiMAX are becoming available.


In the initial years of mobile Internet access, a protocol suite with a markup language simpler than HTML was developed and christened the Wireless Application Protocol (WAP). WAP was designed to be used with websites specially designed for mobiles. WAP-capable microbrowsers were preinstalled on mobile phones and displayed a website as a series of “cards,” each able to fit the screen of the phone. WAP was associated with the Wireless Markup Language (WML), on the lines of HTML. However, very few WAP-enabled websites became popular, due to a number of factors, including the fact that carriers initially permitted access only to selected websites (a “walled garden” approach).

In the meantime, the capabilities of mobile handsets grew significantly, to the extent that they could support browsers capable of displaying HTML (as accessed from desktops), CSS, and ECMAScript or Java in addition to WAP. Today, most mobile browsers provide full HTML support in addition to the XHTML Mobile Profile (WAP 2.0) and compact HTML (cHTML, used in i-mode). Mobile users today access websites without any distinction as to whether such sites have been designed for desktops (HTML) or for WAP access. However, the importance of separate websites designed for mobiles has not diminished, as mobile users need uncluttered screens without heavy images or slow-loading content.

3.11.2 Browsers for Mobile Phones

You are quite familiar with browsers that run on desktops and show a complete website with embedded video and audio, graphics, animation, or Flash content without the user having to invoke any programs. On a mobile phone with limited resources, however, a browser needs to be specially designed to tailor content to the phone’s resources and screen size. Moreover, to enable access to regular websites, rather than only websites designed for mobiles, these browsers need to support small-screen rendering (SSR) or medium-screen rendering (MSR). Such browsers also need plug-ins for Flash Lite, for media players, and for display of animated content. The availability of J2ME or Adobe Flash Lite 2.0 in mobile phones means that applications such as websites can use Java or Flash, and the phones should be able to receive richer content. Browsers for mobile phones have been developed with J2ME and Flash Lite support. An example is the Opera Mini browser, which can run on Java-enabled phones. The Nokia Mobile Browser for Symbian OS (such as on the Nokia N95) has support for Flash Lite 2.0. Netfront is another browser commonly embedded on smartphones.


Quick Facts: Mobile Browsers

1. Preinstalled and user-installable browsers. Most smartphones today come with preinstalled browsers. Common preinstalled browsers include the following:

Device Family | Common Preinstalled Browsers | Features
iPhone, iPod Touch | Safari (based on WebKit) | JavaScript, RSS, and Atom support
Palm and Treo | Blazer | Supports streaming video; no support for Adobe Flash
Palm Pre | WebOS browser (based on WebKit) | Supports JavaScript; Adobe Flash support will be available
Sony PSP | Netfront | SMIL 2.0, SVG, RSS, Ajax
UIQ 3 (Sony Ericsson, HTC) | Opera 8.0 | JavaScript, RSS support
Windows Mobile | Opera Mobile 9.5 | JavaScript, RSS and Atom support
Motorola smartphones | Symphony (based on Netfront) | SMIL, Ajax
Blackberry | Blackberry browser with Mango rendering engine | CSS, Ajax, small-screen rendering, SMIL, Flash
Android phones | Android Browser (based on WebKit) | CSS, Ajax, small-screen rendering, SMIL

In addition, many phone operating systems allow the installation of third-party browsers. Some of the common ones are Opera Mini, Netfront, MiniMo (Mozilla), SkyFire, and BOLT. Mobile browsers with Flash Video support include Opera, Skyfire, and BOLT, amongst others.

3.11.3 The Mobile Application Framework and the Open Mobile Alliance

The area of application standardization on mobile devices has been addressed by the Open Mobile Alliance (OMA). The OMA, as an open standards body, determines how applications on mobiles should be configured so that they can work interchangeably. The OMA is thus concerned with the application stacks on mobile devices, where the underlying layers are provided by the 3G networks. The OMA was created in 2002 with the participation of over 200 companies from broadly four important groups: mobile operators, application and content providers, manufacturers, and IT companies. Some of the previous groups working on interoperability also merged into the OMA, including the Mobile Gaming Interoperability Forum (MGIF), the Location Interoperability Forum (LIF), the Mobile Wireless Internet Forum (MWIF), the MMS Interoperability Group, and Wireless Village.


The mission of the OMA is to standardize applications that are based on open standards, protocols, and interfaces and are independent of the underlying networks (GSM, UMTS, CDMA, EDGE, etc.).

3.12 Summary of File Formats Used in Mobile Multimedia

Digital multimedia involves the use of a number of file formats for representing audio, video, and pictures. Mobile multimedia is a special environment using file formats standardized by the 3GPP forums as well as industry alliances for use on mobile devices. Table 3.2 lists a summary of file formats and profiles that are commonly used for mobile multimedia.

Table 3.2: Multimedia File Formats.

File Format | File Extension | Audio Codec | Video Codec
3GPP (3G Partnership Project) | .3gp | AMR, AAC | H.263, MPEG-4 Simple Visual Profile
Windows Media | .wmv, .wma | Windows Media Audio | Windows Media Video
MPEG-4 | .mp4 | AAC | MPEG-4 Visual
MPEG | .mpg | MP3, AAC | MPEG-2
RealMedia | .rma, .rmv | RealAudio | RealVideo 9
QuickTime | .qt, .mov | AAC, AMR | MPEG-4 Visual
SMIL (Advanced Streaming Format) | .asf | AAC, AMR | MPEG-4 Visual
Scalable Vector Graphics (SVG) | .svg | - | -
Flash Lite | .swf, .flv | - | Sorenson or On2 VP6
Java2ME Archive | .jar | - | -
DivX | .divx or .mkv | - | DivX codec

Before We Close: Some FAQs

1. Does the iPhone support a Flash player? If not, how is Flash content (e.g., from YouTube) played on the iPhone?
The iPhone does not provide native support for Flash Video. However, some plug-ins are available for playing Flash Lite content. These do not work on locked phones at present.

2. Are the following phones compatible with DivX: Blackberry, Nokia N95, Android, iPhone?
The N95 supports DivX. As of mid-2009, all other phones in the list were incompatible with DivX.


3. If I have a high-resolution camera (such as 5 MP) on my mobile phone, can I get better-quality 3GP videos than with, say, a 1.3 MP camera?
No; most cellphones take videos in VGA resolution at 30 fps.

4. I have a Linux GNU server that can stream videos using RTSP. Can I use it to stream Flash content?
No; streaming Flash Video requires a Flash Media Server, a proprietary product.

5. What is MPEG surround sound? Where is it used?
MPEG surround sound (known as MPEG-D) uses a side stream to transmit spatial data independent of the stereo audio streams. On devices that are designed to be compatible with MPEG surround, the audio is reproduced with surround effects. Standard devices are not affected by this transmission. It is being used by some HD radio stations.

6. Can a handset provide a TV-out function? In what format is it available?
A TV-out function is essentially a VGA-to-TV converter, so that the display on the phone screen can be shown on a TV. This is similar to PCs that have an S-video connector, except that mobiles may support a lower resolution (e.g., QVGA). A TV-out connector properly frames video (or pictures) so that they can be displayed on a TV. For example, the FLY LX600 and Nokia N97 phones have TV-out connectors (CA-75U).

7. What is the advantage of Silverlight for mobiles?
Silverlight for mobiles (a runtime) extends the multimedia web experience to mobile devices. The development environment provides mobile-optimized media and a vector-graphics UI. Silverlight for mobiles can be used to target a large number of devices in a homogeneous manner, reusing code developed for desktop devices in the .NET Framework.

CHAPTER 4

Overview of Cellular Mobile Networks

You think you understand the situation, but what you don’t understand is that the situation just changed.
(Putnam Investments advertisement)

4.1 Introduction

It is no secret that cellular mobile networks have been the fastest-growing segment in the field of telecommunications over the past decade. Over 4.5 billion customers were estimated to use cellular mobile services such as GSM, 3G-GSM, and HSPA technologies in 3Q 2009; those using CDMA2000 and 1xEV-DO technologies exceeded 600 million, bringing the number of world subscribers to over 5 billion. However, what content providers and users have been awaiting for many years has now happened: the rapid migration of the networks to 3G or 3.5G technologies and a surprising growth in multimedia handsets and smartphones, creating a new ecosystem of users, operators, and content providers. Over 250 million 3G UMTS handsets were shipped worldwide in 2008, and the number is expected to treble to over 750 million in 2009 (ABI Research, Jan 2009). This means that there is an ecosystem of more than a billion 3G users that can be targeted for advanced services. It provides uncharted territory for users and content providers as they move toward 4G. This chapter is about this new ecosystem.

These numbers alone do not tell the full story, however; what remains hidden in them is the unprecedented growth of 3G networks and the increasing use of nonvoice services. In the United States, the number of 3G-connected devices jumped by 80% in 2008 to reach 70 million, a penetration of over 35%. In Japan, NTT DoCoMo now has 90% of its subscribers using 3G handsets and may discontinue 2G services in the next two years. China and India—accounting for over 1.1 billion users—will move to 3G services in 2010, creating one of the most surprising network transformations in recent history. The usage of these networks is also finally improving, with most operators offering flat-rate plans that allow very large downloads or uploads, ranging up to 5 gigabytes (GB) a month. This is now bringing many online multimedia services and mobile TV into the realm of practicality. Content providers now have as their target a much larger customer base, which makes their offerings that much more viable.


Figure 4.1: World cellular subscribers.

4.2 Cellular Mobile Services: A Brief History

The first-generation cellular mobile systems based on analog technology have been largely phased out; mobile networks today are based on second-generation (2G) and third-generation (3G) technologies. In the United States, the advanced mobile phone system (AMPS) networks were over time partly replaced by digital AMPS (D-AMPS), which has also reached the end of its life; the only two major technologies of importance going forward are GSM and CDMA.

4.2.1 Second-Generation Cellular Mobile Systems: GSM

The growth of 2G mobile networks has been based largely around two global standards (with many variants): GSM and CDMA. These technologies are so important that their operational capabilities and growth paths merit attention.

The GSM standard was created in 1987 and was an improvement over the earlier TDMA technologies. It uses 200 kHz carriers, each of which carries a time division multiplexed stream


Figure 4.2: The percentage of 3G users is rising steadily to create a critical mass.

with eight slots, for a gross data rate of 270 kbps. GSM is thus a combination of FDMA and TDMA technologies. Each GSM channel slot has a capacity of 22.8 kbps (within the 270 kbps gross for eight slots) and the capability to carry circuit-switched voice, data, and fax. The spectrum for GSM has been defined in three major frequency bands, i.e., 900, 1800, and 1900 MHz (the last in the United States), and GSM is the most widely deployed technology today. GSM networks are primarily circuit-switched, although packet-switched data capabilities have been added to a majority of networks through enhancements to the core network (such as the GPRS overlay and EDGE). GSM uses speech encoding based on the regular pulse excitation linear predictive coder, which gives encoded speech bit rates of 13, 12.2, and 6.5 kbps. The modulation used is GMSK (Gaussian minimum shift keying). GSM transmit and receive carriers operate in paired frequency bands. Following is an example of GSM parameters for a UK operator:

● Frequency band: 1710–1785 MHz mobile Tx, 1805–1880 MHz base Tx
● Channel spacing: 200 kHz, 374 carriers, 8/16 users per carrier
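A back-of-envelope check of these band-plan figures can be written in a few lines. Note that the carrier count in the text (374) is one less than the naive division of the 75 MHz band by the 200 kHz spacing (375); the difference is typically accounted for by guard channels at the band edges, so treat this as illustrative arithmetic rather than an exact channel plan.

```python
# Back-of-envelope GSM air-interface arithmetic for the 1800 MHz band plan.

BAND_MHZ = 1785 - 1710          # 75 MHz of paired uplink spectrum
CHANNEL_KHZ = 200               # GSM carrier spacing
SLOTS_PER_CARRIER = 8           # TDMA slots per 200 kHz carrier
GROSS_KBPS = 270                # approximate gross rate per carrier

carriers = BAND_MHZ * 1000 // CHANNEL_KHZ        # 375 (text: 374, see note)
gross_per_slot = GROSS_KBPS / SLOTS_PER_CARRIER  # ~33.75 kbps raw per slot
full_rate_users = carriers * SLOTS_PER_CARRIER   # 3000 full-rate channels

print(carriers, gross_per_slot, full_rate_users)
```

The per-slot figure is the raw share of the carrier; after channel coding, the usable full-rate traffic channel comes to 22.8 kbps, consistent with the speech rates quoted above.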


Figure 4.3: Cellular systems evolution.

4.2.2 2.5G Technologies: GPRS

GPRS is an overlay packet-switched network on the circuit-switched GSM network. It uses the existing carrier frequencies and does not require new spectrum. One of the TDM carriers, with 8 slots (270 kbps gross rate), is dedicated to packet-switched data. This makes a 115 kbps packet data carrier available, which can be used on a shared basis by GPRS-enabled devices. This provides an “always-on” data connection to the devices and eliminates the unacceptable delays of circuit-switched connections. The packet-switched data under lightly loaded conditions can allow users to get 50 kbps downlink/20 kbps uplink data speeds (depending on the number of simultaneous users), making applications such as e-mail, MMS, and the exchange of video and audio clips feasible.

GPRS technology had, for the first time, a provision for classes of traffic, with four quality of service (QoS) classes defined based on sensitivity to delays. Streaming live audio or video can be given a higher priority than packets carrying data for other applications, such as background applications or file transfer. Packets with a higher QoS can be transmitted first, with priority over the packets of other services.
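The shared-pool nature of GPRS capacity can be illustrated with a simple equal-share model. Real GPRS schedulers weight users by QoS class and coding scheme, so the numbers below are only indicative of how quickly contention erodes the per-user share.

```python
# GPRS capacity is a shared pool: one 8-slot carrier (~115 kbps of packet
# data) divided among the active data users in a cell.

def per_user_kbps(pool_kbps: float, active_users: int) -> float:
    """Equal-share throughput; real schedulers also weight by QoS class."""
    return pool_kbps / max(active_users, 1)

POOL = 115.0  # packet-data pool per the text

# A single user under light load sees ~50 kbps downlink in practice
# (protocol and coding overhead keep throughput below the raw pool rate);
# contention shrinks the share quickly:
shares = {n: round(per_user_kbps(POOL, n), 1) for n in (1, 2, 4, 8)}
print(shares)  # {1: 115.0, 2: 57.5, 4: 28.8, 8: 14.4}
```

This is why GPRS comfortably supports bursty traffic such as e-mail and MMS but struggles with sustained streaming once several users share the cell.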


4.2.3 EDGE Networks (2.5G)

The next evolution toward higher data rates is Enhanced Data Rates for GSM Evolution (EDGE). EDGE networks introduce new technologies at the physical layer, such as 8-PSK modulation, and better protocols for data compression and error recovery. The higher-layer protocols remain largely the same. With 200 kHz radio channels (the same as in GPRS or GSM), EDGE networks can deliver gross data rates of up to 500 kbps per carrier. This data rate is a shared pool of bandwidth for all users of the carrier and is available under ideal transmission conditions with a good signal. Owing to the pooled nature of the capacity, one user can on average expect to get no more than about 200 kbps.
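The throughput gain of EDGE follows directly from the modulation change, as a quick sketch shows. The 270.833 ksymbol/s figure is the standard GSM symbol rate; channel coding overhead (not modeled here) is what reduces the tripled raw rate to the roughly 500 kbps usable pool quoted above.

```python
# Why EDGE roughly triples GPRS throughput on the same 200 kHz carrier:
# GMSK carries 1 bit per symbol, while 8-PSK carries 3. The symbol rate
# is unchanged, so the raw bit rate triples before channel coding.

SYMBOL_RATE_KSPS = 270.833   # GSM/EDGE symbol rate per carrier

gmsk_raw = SYMBOL_RATE_KSPS * 1      # ~271 kbps raw (GSM/GPRS)
psk8_raw = SYMBOL_RATE_KSPS * 3      # ~812 kbps raw (EDGE, before coding)

print(round(gmsk_raw), round(psk8_raw))
```

Since the radio channel itself is unchanged, operators could introduce EDGE largely through transceiver and software upgrades rather than new spectrum.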

Figure 4.4: Data rates in cellular mobile systems.

4.3 CDMA Technologies

4.3.1 2G Technologies: cdmaOne

The cdmaOne cellular services, as the name suggests, are based on code division multiple access technology. CDMA-based mobile services were first standardized as IS-95


(1993) and subsequently revised in 1995 as IS-95A, with added features; this is the basis of most cdmaOne networks worldwide. cdmaOne uses a carrier bandwidth of 1.25 MHz to provide services in the 800 and 1900 MHz frequency bands. The use of code division multiplexing has many advantages, such as the use of common frequencies in adjacent cells, soft handoffs, and higher tolerance to multipath fading as well as interference. Being based on spread-spectrum techniques, cdmaOne systems have many unique features that distinguish them from the GSM or TDMA technologies.

Figure 4.5: GSM vs. CDMA.

First, in FDMA systems (including GSM), neighboring cells cannot use the same frequencies. This imposes tough design requirements: in the field, transmission conditions can result in interference or the nonusability of certain frequencies. In CDMA, all cells can use the same frequencies without any specific allocation of per-cell resources. This makes the radio system easy to design and robust. Further, in spread-spectrum systems such as CDMA, multipath propagation is not a problem, as the reflected waves received from all sources are combined to recover the final signal. Frequency-specific fading has very little effect on overall system performance. Moreover,

in CDMA the handoff between cells is a soft handoff, because switching of frequencies from one cell to another is not required. cdmaOne can support data transmission on a circuit-switched basis at speeds up to 14.4 kbps.

4.3.2 2.5G and 3G CDMA Services: CDMA2000

The limitations of cdmaOne led to the evolution toward 2.5G systems with better data-handling capabilities and higher capacities for voice. The first stage in the evolution was the 1xRTT system, which specifies operation using 1.25 MHz carriers. The 1xRTT air interface comprises two channels. The fundamental channel is a 9.6 kbps raw-data-rate channel that carries signaling, voice, and low-data-rate services; this is the basic channel assigned for every session. The second is the supplemental channel (SCH), with a data rate of up to 144 kbps (153.2 kbps gross), which carries traffic in burst modes. The channel capacity is dynamically assigned to different users, who can get data rates of 19.2, 38.4 kbps, and so on, up to 144 kbps. The SCH is IP-based and thus provides efficient support for multiple devices and applications. The CDMA technologies provide higher system capacity, owing to better data compression and modulation techniques as well as the structure of the channels employed. Privacy and security are also enhanced in CDMA2000 technologies. The CDMA2000 networks can operate on the same carriers as cdmaOne as an overlay network. The next stage of the evolution of CDMA services is 3xRTT, which essentially indicates the use of 3 × 1.25 MHz carriers. The 3xRTT technology can support data rates of up to 384 kbps.
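The SCH rate ladder quoted above is simply successive doublings of the 9.6 kbps base rate, which a one-liner makes explicit. Note that the nominal top step computes to 153.6 kbps, close to (but not identical with) the 153.2 kbps gross figure cited in the text, which already accounts for framing details.

```python
# 1xRTT supplemental channel: burst rates assigned in powers of two
# above the 9.6 kbps fundamental rate.

BASE_KBPS = 9.6

ladder = [BASE_KBPS * 2 ** n for n in range(1, 5)]
print(ladder)   # [19.2, 38.4, 76.8, 153.6]
```

Because the rate is assigned per burst, the scheduler can hand a briefly idle channel's capacity to another user, which is what makes the SCH efficient for bursty IP traffic.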

4.4 3G Networks

3G networks were designed specifically to address the needs of multimedia, i.e., streaming of video and audio clips and access to rich interactive content. 3G networks resolve the data limitations of earlier generations by assigning a larger bandwidth to the carrier, typically 5 MHz. The 3G architecture, with an IP-based core, also represents a major evolution over the 2G architecture of GSM and the 2.5G overlays of GPRS and EDGE. The concept of 3G originated under the International Mobile Telecommunications 2000 (IMT2000) initiative of the International Telecommunication Union (ITU). IMT2000 envisages the use of macrocells, microcells, and picocells, with predicted data rates of 144 kbps at driving speeds, 384 kbps for outside stationary use or walking speeds, and 2 Mbps indoors using picocells.


Figure 4.6: IMT2000 vision. (Courtesy of ITU)

The 3G Partnership Projects (3GPP and 3GPP2) coordinate all aspects of 3G networks internationally, including multimedia file formats, resolutions, coding standards, and transmission air interfaces, so as to achieve global compatibility of standards for the third generation of mobile services. The 3G core network protocols support video and VoIP with quality-of-service guarantees. Support of the IP Multimedia Subsystem (IMS) is part of the 3GPP initiatives.

4.4.1 Classification of 3G Networks The network evolution from 2G (GSM or cdmaOne) to 3G has taken place in two separate streams: one involving the CDMA networks and one involving the GSM networks. The GSM networks have evolved to 3GSM under the UMTS framework; 3GSM services use the WCDMA air interface. The CDMA networks, on the other hand, evolved from cdmaOne to CDMA2000, which is a 3G standard. Further evolutions have followed a path of multiple 1.25 MHz carriers, i.e., CDMA2000 1x, CDMA2000 3x, and CDMA2000 6x, in order to meet the demands of real-time mobile TV as well as other applications. Both technology lines fall within the IMT2000 framework. However, whereas the WCDMA (UMTS) standard is a direct-spread technology, the CDMA2000 standards have grown using a multicarrier approach, and IMT2000 also includes time division duplex variants (UTRA TDD and TD-SCDMA).

Overview of Cellular Mobile Networks 113

Figure 4.7: Wireless and mobile technology overview.

CDMA (CDMA2000) technologies
● 2G: CDMA (IS-95A, IS-95B)
● 2.5G: 1xRTT
● 3G: 3xRTT
● 3G: EV-DO
● Enhanced 3G: EV-DO revisions A and B

GSM-based technologies
● 2G: GSM
● 2.5G: GSM/GPRS/EDGE
● 3G: WCDMA (UMTS)
● 3G: MBMS
● Enhanced 3G (3.5G): HSDPA (High-Speed Downlink Packet Access)

An example of how the two separate planes of GSM-evolved networks (UMTS and WCDMA) and CDMA-evolved networks exist in the United States is shown in Figure 4.8.


Figure 4.8: 3G services in the United States: CDMA and UMTS.

4.5 3G Technologies: CDMA and GSM 4.5.1 3G Technology Evolved from GSM Networks 3GSM uses an air interface called the UMTS terrestrial radio access network (UTRAN). In the initial implementations (that is, those based on 3GPP release ’99), the core network of the GSM/GPRS remained largely unchanged for backward compatibility. The GSM base station system and the UTRAN share the same GPRS core network comprising the SGSN and GGSN. The radio network controllers (RNCs) are connected to the mobile switching centers, and, in the case of UTRAN, to the SGSN for UMTS (3G) (Figure 4.9).

Figure 4.9: 3GPP release ’99 UMTS/GSM Mobile Network Architecture.

The release ’99 core architecture has a provision for two domains: the circuit-switched (CS) domain and the packet-switched (PS) domain. The CS domain supports the PSTN/ISDN architectures and interfaces; the PS domain connects to the IP networks. The air interface, Wideband Direct Sequence Code Division Multiple Access (WCDMA), uses a bandwidth of approximately 5 MHz. There are two basic modes of operation. In the FDD mode, separate 5 MHz frequencies are used for the uplink and downlink; this mode thus uses the paired spectrum for UMTS. In the TDD mode, the same 5 MHz bandwidth is shared between uplink and downlink and is primarily intended for the unpaired spectrum. The UMTS frequency bands assigned for WCDMA are 1920–1980 and 2110–2170 MHz (frequency division duplex), used in paired 5 MHz channels (i.e., 2 × 5 MHz). This can accommodate approximately 196 voice channels with AMR coding at 7.95 kbps. Alternatively, it can support a total physical-level data rate of 5.76 Mbps.


4.5.2 CDMA2000 and EV-DO Standards for 3G networks that have evolved from CDMA networks are developed by the 3GPP2 forum. A multicarrier approach was adopted in CDMA2000 for compatibility with the older IS-95 networks, which use 1.25 MHz carrier spacing. This was particularly the case for the United States, where no separate spectrum for 3G services was available. In multicarrier mode, up to 12 carriers can be transmitted in the downlink from the same base station, each carrier being 1.25 MHz wide with a chip rate of 1.2288 Mcps. The 3x version with three carriers can therefore provide a chip rate of 3.6864 Mcps, which compares well with the UMTS (WCDMA) chip rate of 3.84 Mcps. The ITU's CDMA2000 standards define operation for up to three carriers (CDMA2000 3x). The multicarrier approach provides compatibility with the IS-95 networks, which have the same carrier bandwidth and chip rate. Another branch of development has been the 1x mode, with the evolution of 1xEV-DO (the data-only option); this system uses separate carriers for voice and data services.
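The chip-rate arithmetic above is easy to verify (the constant names here are illustrative):

```python
IS95_CHIP_RATE_MCPS = 1.2288   # one 1.25 MHz CDMA carrier
WCDMA_CHIP_RATE_MCPS = 3.84    # single 5 MHz WCDMA carrier

def cdma2000_chip_rate(n_carriers):
    """Aggregate chip rate (Mcps) of a CDMA2000 Nx multicarrier downlink."""
    return n_carriers * IS95_CHIP_RATE_MCPS

# CDMA2000 3x: 3 x 1.2288 = 3.6864 Mcps, close to WCDMA's 3.84 Mcps
print(round(cdma2000_chip_rate(3), 4))
```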

Figure 4.10: Evolution of IS-95 services to 3G.

4.5.3 Enabling 3G Packet Cores for Voice Traffic With the two types of platforms and evolution paths for 3G (GSM and CDMA), the standardization efforts have also progressed under two streams in the 3G partnership projects, i.e., 3GPP for GSM-evolved networks and 3GPP2 for CDMA networks. The efforts have resulted in the following IP-based multimedia platform architectures for the networks:

● IP Multimedia Subsystem (IMS) in 3GPP
● Multi-Media Domain (MMD) in 3GPP2

Both these architectures provide for the transmission of voice and video over IP with QoS guarantees.

Figure 4.11: Standardization of 3G services.

There has been an active effort toward the convergence of the two standards of 3GPP and 3GPP2, particularly with a view toward interworking and roaming. However, the convergence is far from complete.


4.5.4 The IP Multimedia Subsystem Although a GPRS-based network became a part of the 3G architecture, there was no way to use it to provide voice-based services, in the absence of a signaling mechanism in the earlier releases of 3GPP. The IP Multimedia Subsystem (IMS), introduced in 3GPP release 5, is designed as a service network for handling this signaling, based on the use of SIP and a proxy server. The sessions set up are independent of the content type to be carried. The IMS provides support for voice over IP (VoIP) calls, which are already offered commercially at a significant cost advantage.

4.6 4G Technologies 4.6.1 Why 4G? 3G services as they exist today are based on the ITU IMT2000 initiative of the early 1990s, with relatively low data rate capabilities, e.g., a total of 5.76 Mbps raw data rate in a 5 MHz FDD carrier. The 3.5G evolution technologies, such as HSDPA (3GPP release 5), provide a separate downlink data carrier with 16 QAM modulation, giving a high shared downlink data rate averaging 7.5–12 Mbps, which is still too low for mass video usage. Also, the 2G to 3G migration path (3GPP release ’99) left the core switching and transport architecture of the networks unchanged, providing circuit-switched and packet-switched cores as separate entities. IP-based services such as VoIP became possible with the IP Multimedia Subsystem (IMS) in release 5 of 3GPP. The 3G architecture thus remains embedded with gateways and signaling converters for multiple types of services (voice, data, video streaming, and so on), and a vision of a common network with a common core, universal signaling, and new radio interfaces with high data rates became the objective of the 3G long-term evolution technologies (3G-LTE).

4.6.2 3G-LTE The objectives of the radio access network for long-term evolution were laid down in 3GPP TR 25.913 and are briefly as follows:

● Higher peak data rates of 100 Mbps downlink and 50 Mbps uplink with new air interfaces
● Improved cell-edge rates and spectral efficiency; higher in-cell rates with MIMO
● A core network based on a packet-only core
● Scalable bandwidths: 1.25, 1.6, 2.5, 5, 10, 15, and 20 MHz

Figure 4.12 shows the essential elements of 3G-LTE.

Figure 4.12: Basic elements of 3G-LTE.

The 3G-LTE network architecture is a fully meshed architecture with two node types: the access gateway (AGW) and the enhanced Node B (eNB). Each eNB and each user equipment (UE) has a minimum of two antennas, creating a 2 × 2 MIMO network. The downlink is based on OFDM/OFDMA with FFT sizes of 128–2048, depending on bandwidth. The uplink is based on single-carrier FDMA, dividing the frequency space into a number of blocks (e.g., 5 MHz has 15 blocks); each UE is allocated the best part of the frequency spectrum based on transmission conditions. On the downlink side, each UE can be allotted one or more resource blocks. A resource block is 12 OFDM subcarriers (i.e., a 20 MHz bandwidth has 1201 occupied subcarriers, or 100 resource blocks). The biggest change in 3G-LTE is the use of a simplified network architecture, doing away with the separate circuit-switched and packet-switched domains and the multiple types of nodes. The new architecture also supports mobility between different systems such as 3G, WLAN, and mobile WiMAX.
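The resource-block arithmetic can be checked in a few lines. Two assumptions not stated in the text: LTE's standard 15 kHz subcarrier spacing, and that a 20 MHz channel keeps roughly 18 MHz occupied by subcarriers:

```python
SUBCARRIER_SPACING_KHZ = 15   # standard LTE spacing (assumed; not in the text)
SUBCARRIERS_PER_RB = 12

def resource_blocks(occupied_mhz):
    """Resource blocks fitting in the occupied transmission bandwidth."""
    rb_width_khz = SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_KHZ  # 180 kHz per RB
    return int(occupied_mhz * 1000) // rb_width_khz

# A 20 MHz channel with ~18 MHz occupied gives the 100 RBs cited above
# (1200 data subcarriers plus one DC subcarrier = 1201 occupied subcarriers).
print(resource_blocks(18))                       # -> 100
print(resource_blocks(18) * SUBCARRIERS_PER_RB)  # -> 1200 data subcarriers
```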

4.7 Data and Multimedia Over Mobile Networks The handling of data and mobile applications has always been of interest to users. In fact, mobile networks remain one of the primary means of accessing the Internet, even for many PCs and laptops. In countries where 3G services have been introduced, video clips and music downloads now generate a significant portion of an operator's revenues.

Figure 4.13: Handling data over mobile networks.

The mobile scene has changed dramatically with the introduction of user-friendly devices such as the iPhone 3G, which have brought mainstream users to mobile multimedia in a big way. Together with social networking sites such as Facebook, Twitter, YouTube, and Google Video, this has created a rapidly growing universe of devices capable of handling multimedia.

4.7.1 Data Capabilities of GSM Networks The GSM networks were designed essentially for voice, with a focus on highly efficient bit-rate encoding. Compression technology had not yet advanced sufficiently to make H.264 video a reality on these networks. GSM could handle only circuit-switched data; IP data capabilities were added in the 2.5G networks, such as GPRS under the GSM umbrella and CDMA2000 1xRTT in the CDMA domain.

Figure 4.14: Data handling in 2G and 2.5G networks.

GPRS, for example, provides a shared bearer of 115 kbps, which is shared (on average) among eight users. Even CDMA2000 1x, which provides a shared data rate of 144 kbps, can be severely limited in actual per-user speeds under simultaneous usage, with users getting only 70–80 kbps on average. The 2.5G networks, despite enhancements such as IP-based connectivity, QoS, and class-based traffic handling, continued to suffer serious capacity shortfalls. The number of simultaneous users, particularly at peak times, has the potential to degrade the service; the actual usable data rate thus depends on the number of simultaneous users and their locations within the cell. Users at the edge of the cell get a lower data rate due to transmission impairments of the higher-order modulation signals. To enable rich media applications involving video clips, audio downloads, or browsing websites with multimedia content, 2G and even 2.5G technologies needed to be improved, and migration toward 3G was seen as the only way forward.
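As a rough illustration of the sharing problem, assuming a naive even split of the bearer (real schedulers are considerably more complex):

```python
def per_user_kbps(shared_bearer_kbps, active_users):
    """Naive even-split estimate of per-user throughput on a shared bearer."""
    return shared_bearer_kbps / active_users

print(per_user_kbps(115, 8))  # GPRS: ~14.4 kbps each with 8 active users
print(per_user_kbps(144, 2))  # CDMA2000 1x: 72 kbps each with 2 active users
```

Even two simultaneous users on a 144 kbps bearer land in the 70–80 kbps range quoted above.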


Figure 4.15: Development of applications on mobile networks.

4.8 Multimedia and Data Over 3G Networks 4.8.1 3G-UMTS Networks The 2 × 5 MHz paired band for WCDMA can support a raw channel data rate of 5.76 Mbps, which translates into data rates of up to 2 Mbps per user (depending on network design). 3G systems evolved from GSM technologies are now referred to as 3GSM. In various networks, users can be offered data rates from 384 kbps to 2.4 Mbps (spreading factor 4, parallel codes [3 DL/6 UL], 1/2-rate coding), depending on usage patterns, the location of the user, and other factors. There are higher-rate implementations, i.e., HSDPA, with data rates of 7.5–12 Mbps or higher, as described later, and up to 20 Mbps for MIMO systems.
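The parenthetical parameters can be checked directly: with spreading factor 4, three parallel downlink codes, and QPSK (2 bits per symbol, an assumption consistent with the chip-rate figures in this chapter), the 3.84 Mcps chip rate yields the 5.76 Mbps raw figure, and 1/2-rate coding then roughly halves it:

```python
CHIP_RATE_MCPS = 3.84
QPSK_BITS_PER_SYMBOL = 2

def wcdma_raw_rate_mbps(spreading_factor, parallel_codes):
    """Raw physical-layer rate for a WCDMA downlink configuration."""
    symbol_rate = CHIP_RATE_MCPS / spreading_factor  # Msps per code
    return symbol_rate * QPSK_BITS_PER_SYMBOL * parallel_codes

raw = wcdma_raw_rate_mbps(spreading_factor=4, parallel_codes=3)
print(round(raw, 2))        # -> 5.76, the raw channel rate cited above
print(round(raw * 0.5, 2))  # 1/2-rate coding leaves ~2.88 before overheads
```

Protocol and signaling overheads then bring the per-user figure down toward the 2–2.4 Mbps range quoted in the text.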

Table 4.1: WCDMA Frame Characteristics.

Channel bandwidth: 5 MHz
Chip rate: 3.84 Mcps
Frame length: 10 ms
Number of slots per frame: 15
Number of chips/slot: 2560
RF structure (forward ch): Direct spread

The data transmitted on WCDMA systems is organized in frames, each 10 ms long. With the QPSK modulation used, the 5.76 Mbps channel data rate corresponds to a chip rate of 3.84 Mcps, giving a frame capacity of 38,400 chips. Each frame has 15 slots, each carrying 2560 chips (Table 4.1).

4.8.2 Data Channels in 3G-UMTS WCDMA In the uplink direction, the physical-layer control information is carried by a dedicated physical control channel, which has a spreading factor of 256 (i.e., a bit rate of 15 kbps) in 5 MHz WCDMA systems. The user data and higher-layer control information are carried by one or more dedicated physical data channels. These physical data channels can have different spreading factors, ranging from 4 to 256, depending on the data rate requirements. The transport channels, which are mapped onto the physical channels, can be divided into three categories:

● Common channels
● Dedicated transport channels
● Shared transport channels
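The spreading-factor arithmetic above (SF 256 giving a 15 kbps control channel) can be sketched as follows, assuming as a simplification one bit per despread symbol on the uplink:

```python
CHIP_RATE_CPS = 3_840_000  # WCDMA chip rate

def uplink_channel_kbps(spreading_factor):
    """Channel bit rate implied by an uplink spreading factor (1 bit/symbol)."""
    return CHIP_RATE_CPS / spreading_factor / 1000

print(uplink_channel_kbps(256))  # -> 15.0, the control-channel rate above
print(uplink_channel_kbps(4))    # -> 960.0, the highest-rate data channel
```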

The connections are set up using a common channel (the forward access channel [FACH] in the downlink and the random access channel [RACH] in the uplink). Data is then transferred over a dedicated channel (DCH). The DCHs are not well suited to very bursty IP data, as the setup time for any reconfiguration of the channel can be as much as 500 ms. An alternative method of data transmission for very bursty data is the downlink shared channel (DSCH), whose data rate can vary on a frame-by-frame basis for each user in the shared pool.


Figure 4.16: WCDMA transport channels.

4.8.3 Classes of Service in WCDMA 3GPP Table 4.2 lists the classes of service that are supported in the 3GPP release ‘99 architecture of UMTS under 3GPP standardization.

Table 4.2: Classes of Service in WCDMA.

1. 32 kbps class: AMR speech and data up to 32 kbps
2. 64 kbps class: speech (AMR) and data
3. 128 kbps class: video telephony or other data
4. 384 kbps class: enhanced-rate, packet-mode data
5. 784 kbps class: intermediate between 384 kbps and 2 Mbps
6. 2 Mbps class: downlink data only

4.8.4 HSDPA HSDPA is a feature added in release 5 of the 3GPP specifications. HSDPA extends the DSCH, allowing packets destined for many users to be shared on one higher-bandwidth channel called the high-speed DSCH (HS-DSCH). To achieve higher raw data rates, HSDPA uses higher-order modulation schemes at the physical layer, such as 16-point quadrature amplitude modulation (16 QAM), with an adaptive coding scheme. HSDPA also moves control of the medium access control (MAC) function from the radio network controller to the base station, allowing the use of fast adaptation algorithms that improve channel quality and throughput under poor reception conditions. On average, download speeds on the shared DSCH can be 10 Mbps (total, shared among the users); lab tests and theoretical predictions suggest rates as high as 14.4 Mbps. Of course, the maximum data rate falls as users move outward in the cell and can drop to 1–1.5 Mbps at the cell edge. HSDPA also uses IPv6 in the core network, together with improved protocol support for bursty traffic.
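The 14.4 Mbps theoretical peak can be reproduced from the standard HS-DSCH configuration of 15 parallel codes at spreading factor 16 (figures not stated in the text), with 16 QAM carrying 4 bits per symbol:

```python
CHIP_RATE_KCPS = 3840  # WCDMA chip rate in kchips/s

def hsdpa_peak_kbps(codes=15, spreading_factor=16, bits_per_symbol=4):
    """Peak HS-DSCH raw rate: parallel codes x symbol rate x bits/symbol."""
    symbol_rate_ksps = CHIP_RATE_KCPS // spreading_factor  # 240 ksps per code
    return symbol_rate_ksps * bits_per_symbol * codes

print(hsdpa_peak_kbps())  # -> 14400 kbps, i.e. the 14.4 Mbps peak cited above
```

Devices supporting fewer codes or only QPSK see proportionally lower peaks, which is why the 3.6 and 7.2 Mbps device classes mentioned below exist.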

Quick FAQs HSDPA 1. Which Operators Provide HSDPA Services? All major 3G operators now provide HSDPA as well. In the United States, AT&T and T-Mobile provide HSDPA services. In Great Britain, 3UK, O2, Orange UK, T-Mobile UK, and Vodafone provide HSDPA services. 2. What Bit Rates are Offered on HSDPA Services? Although HSDPA services can offer peak data rates of 7.2 Mbps, these depend on the receiving devices as well, many of which today support 3.6 Mbps. Users can expect the following typical speeds:

● Downlink: 400 kbps–1.7 Mbps
● Uplink: 500 kbps–1.5 Mbps (using HSUPA)
● Uplink: 220–320 kbps (using 3G)

3. On Which Devices can HSDPA Services be Used? HSDPA can be used on mobile phones that have 3G and HSDPA capability. A large number of phones now support HSDPA (examples: Samsung Propel Pro, Nokia N95, iPhone 3GS). These services can also be received on laptops using an HSDPA PCMCIA card. 4. How are HSDPA Services Charged? A typical $20–$30 monthly plan enables users to download up to 5 GB a month; carrier plans vary considerably. 5. Is the Coverage for 3G the Same as for HSDPA? HSDPA may not be offered in all the markets where 3G services are available.


4.8.5 3G Networks Based on CDMA Technologies: EV-DO EV-DO networks achieve high throughputs by using advanced modulation and RF technology. This includes, first, adaptive modulation (QPSK, 8 PSK, and 16 QAM), which allows the radio node to increase its transmission rate based on receive-quality feedback from the mobile. EV-DO also uses advanced turbo coding and multilevel modulation, which increase the data rates at the physical layer. Macro-diversity is provided via a sector selection process, and a feature called multiuser diversity permits more efficient sharing of resources among the active users. 1xEV-DO uses a carrier with a 1.25 MHz bandwidth (or multiples thereof); the peak data rate on an EV-DO carrier is 3 Mbps in 1.25 MHz of bandwidth. EV-DO is a data-only carrier, independent of the voice network. The EV-DO networks are described by the term 1xEV-DO, which reflects their origin in the CDMA 1x standards and the use of Qualcomm's high-data-rate technology (IS-856). There are currently three main versions of 1xEV-DO: Rev 0, Rev A, and Rev B.
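The adaptive-modulation idea can be sketched as a simple rate-selection rule. The thresholds below are invented for illustration; a real EV-DO terminal requests rates through its data-rate-control (DRC) feedback channel rather than reporting a raw link-quality figure:

```python
def pick_modulation(link_quality_db):
    """Pick the densest modulation the reported link quality supports.

    Thresholds are hypothetical, for illustration only.
    """
    if link_quality_db >= 12:
        return "16QAM"  # 4 bits/symbol, near the radio node
    if link_quality_db >= 7:
        return "8PSK"   # 3 bits/symbol
    return "QPSK"       # 2 bits/symbol, most robust

print(pick_modulation(14))  # -> 16QAM (good signal)
print(pick_modulation(3))   # -> QPSK (cell edge)
```

The same principle explains why EV-DO throughput degrades gracefully toward the cell edge rather than dropping out.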

Quick Facts 1xEV-DO 1. EV-DO Rev 0 Rev 0 offers peak data rates up to 2.4 Mbps and average shared download rates of 300–600 kbps per user. Uploads are typically 50–100 kbps. Rev 0 represents the first implementations of 1xEV-DO. 2. EV-DO Rev A Peak download rates of 2 Mbps (700–1400 kbps average) and upload rates of 1.8 Mbps (500–800 kbps average). 3. EV-DO Rev B This is a multicarrier version of Rev A with some improvements, so that each carrier can provide peak data rates of 4.9 Mbps (14.7 Mbps with three carriers). 4. Which Carriers Provide EV-DO? In the United States, Verizon and Sprint provide EV-DO services; implementation of Rev A is most common. 5. Which Devices can be Used on EV-DO Networks? A large number of smartphones are available for use on EV-DO networks, for example, on Verizon or Sprint, the BlackBerry Storm and Curve, Palm Pre, Motorola Krave ZN4, Samsung Highnote, and others. 6. Where can one get Coverage Information on EV-DO? Coverage information on EV-DO can be obtained from the carrier websites or http://EVDOmaps.com.

The core architectures of the 1xEV-DO networks are also moving toward an IP core with a packet-routed network for efficient handling of data. The 1xEV-DO networks can provide average user download rates of 300–600 kbps, with a peak network data rate capability of 2.4 Mbps.

4.8.6 1xEV-DO Architecture Figure 4.17 shows the 1xEV-DO architecture, in which the radio nodes installed at cell sites perform packet scheduling, baseband modulation/demodulation, and RF processing. Handoff functions are provided by the radio network controller installed at the central office. The radio network controllers connect to the core network using a packet data serving node (PDSN).

Figure 4.17: 1xEV-DO architecture.

The 1xEV-DO network goes beyond the circuit-switched architecture of the IS-95 networks (a feature that still exists in the 3G release ’99 architecture). This flexibility has led operators to build their networks on an IP core for switching, transport, and backhaul applications. The use of an IP core improves cost efficiency through the use of standard routers and switches rather than proprietary equipment.


4.9 Mobile Networks: A Few Country-Specific Examples An analysis of global trends shows that the GSM family of customers (GSM and 3GSM) had grown to 4.4 billion by 2009, while CDMA customers numbered about 600 million at the end of 2009. Migration to 3G networks is happening very rapidly, along with the usage of 3G-enabled phones. The initial growth was highest in the United States, Europe, and Japan, followed by the rest of Asia, with the result that penetration levels of mobile networks reached close to 100% in Western Europe. The growth markets have now shifted to the BRIC countries: China (700 million subscribers), Russia (200 million), Brazil (150 million), and India (400 million).

Figure 4.18: Growth of mobile customers in BRIC countries.

4.9.1 The United States of America The U.S. market has traditionally been a mix of CDMA, D-AMPS, iDEN, and TDMA technologies; GSM was a late entrant. The situation has changed dramatically after the mergers of Cingular with AT&T and Alltel with Verizon, creating a marketplace that is now dominated by four carriers (Figure 4.19). The fifth-largest carrier, U.S. Cellular, has about 6.1 million customers. The United States had about 260 million mobile users by mid-2009, and the top five carriers accounted for 244 million of them.

Figure 4.19: Top U.S. cellular operators.

AT&T and T-Mobile provide GSM, 3GSM, and HSDPA services; offerings from Verizon and Sprint include cdmaOne, CDMA2000, and EV-DO-based services. AT&T provides HSDPA in the 850 MHz band, and T-Mobile operates in the AWS band. A major change in the usage pattern occurred in 2008 with the introduction of the iPhone 3G (of which about 7 million were sold in the second half of 2008 alone). The 3G and HSDPA networks, which had been the mainstay of office applications, for the first time acquired a sizable home consumer user base. This has led to a flurry of activity in multimedia applications, application stores, and social networking (MySpace, Facebook, YouTube, and so on) using mobile phones. T-Mobile USA has launched a phone dedicated to 3G and social networking (the Sidekick LX), and has been building its consumer applications strength using the open Android platform and its 3G services. Table 4.3 gives the current status of the U.S. carriers.

Table 4.3: Cellular Operators in the United States (carriers with their own networks).

AT&T
Technology: GSM, GPRS, EDGE, 3G, HSPA
Band of Operation: 800/850 and 1900 MHz PCS bands
Area of Operation: Countrywide
Future Migration Path: Deploying LTE as an upgrade

Verizon Wireless
Technology: CDMA2000, EV-DO
Band of Operation: 800/850 and 1900 MHz PCS bands
Area of Operation: Countrywide
Future Migration Path: LTE

T-Mobile
Technology: GSM, GPRS, EDGE, 3G-UMTS, HSPA
Band of Operation: 1900 MHz
Area of Operation: Countrywide
Future Migration Path: HSPA (higher speeds)

Sprint (merged with Nextel)
Technology: CDMA2000 and EV-DO, iDEN
Band of Operation: 800–900 MHz, 1900 MHz
Area of Operation: Countrywide
Future Migration Path: EV-DO Rev B, Mobile WiMAX, LTE

US Cellular
Technology: CDMA2000, EV-DO
Band of Operation: 800 and 1900 MHz
Area of Operation: 26 states
Future Migration Path: EV-DO Rev B

4.9.2 India India is the fastest-growing cellular mobile market in the world, with average monthly subscriber additions of 12 million on a base of 400 million in mid-2009. The cellular mobile scene in India has been equally volatile in terms of the number of operators. Cellular mobile services commenced in India in 1995–1996, after licenses for GSM-based cellular mobile services were issued in 1994. Licenses, initially granted to two operators, were subsequently also issued to the state-owned operators BSNL and MTNL and to a fourth operator in the 1800 MHz band. Licenses were also issued for CDMA services, which were initially permitted as fixed wireless services and later allowed to convert into full-fledged mobile services. In 2008, the CDMA operators were also allowed to seek GSM spectrum and provide GSM services. Licensing was subsequently opened up for additional operators on a first-come, first-served basis, and there are now more than 13 operators in some areas, a number that could rise.

Figure 4.20: Major mobile operators in India (millions of subscribers).

The major operators (Bharti, Vodafone, Reliance, BSNL, and Idea) have already introduced data services through GPRS, EDGE, and CDMA2000. All the CDMA operators (Reliance Infocom, Tata Teleservices, and BSNL) provide CDMA2000 services with high-speed data access on their handsets. 3G services have been launched by BSNL and MTNL and are expected to be launched by other operators after the auction of the 3G spectrum. GSM subscribers now account for 80% of the subscriber base, with fewer than 20% being with the CDMA operators.

4.9.3 South Korea Korea warrants special mention in the field of mobile communications because of its innovative approach to the introduction of new services. Korea has three major operators: SK Telecom (SKT), KTF, and LG Telecom. The largest mobile phone company in Korea is SKT. KT garnered 20 million subscribers as early as 1993, when other countries in Asia were just beginning to introduce cellular services. KTF, which provides CDMA services and has more than 15 million subscribers, has been aggressively promoting its 3G WCDMA network since 2007 and has both HSDPA and HSUPA offerings.


4.9.4 Japan Japanese mobile services are dominated by three players: NTT DoCoMo, KDDI, and Softbank. NTT DoCoMo operates the largest network, using WCDMA technology; KDDI has a largely CDMA2000 network (au), which has been upgraded to 1xEV-DO Rev A (WIN). Japan was an early starter in 3G services. FOMA (Freedom of Mobile Multimedia Access) from NTT DoCoMo began in October 2002, based on WCDMA (3G) technology. The rich interactivity and unified payment facility provided with the service led to very rapid growth in the number of users as well as high ARPUs. Japan was also the first country to deploy a 3GPP-compliant network, launched by J-Phone (now part of Softbank) in 2002. Softbank has had good success with 3G services since the introduction of the 8 GB iPhone 3G, which is provided free with a subscription. It also unveiled a Wi-Fi mobile TV tuner (1-Seg tuner) for reception of mobile TV on iPhones. Japan in mid-2009 had 106 million mobile customers, over 80% of whom were using 3G-based services.

Figure 4.21: Mobile subscribers—Japan (2009).

FOMA—First 3G Service from Japan Japan was the first country to introduce 3G services, with the launch of FOMA in 2002.

Figure 4.22: FOMA services: Japan. (Courtesy of NTT DoCoMo)

The service, which had rich applications driving the underlying 3G technology, had over 45 million subscribers in 2009. Japan has a history of successful launches of interactive mobile services ever since NTT DoCoMo launched its i-mode service in 1999, which essentially brought the Internet to phones in Japan. The service proved very popular, with its yellow “i” button giving mobile users Internet access through a number of predefined application menus. The services proved so popular that by 2004 one-third of the Japanese population, over 44 million people, were using i-mode. The services later gave way to FOMA, the new 3G service of NTT DoCoMo.

Figure 4.23: Evolution of 3G mobile services in Japan. (Courtesy of NTT DoCoMo)


i-mode The use of mobile screens needed a new hypertext coding language, as normal web pages were unsuitable for viewing on mobile screens. The WAP specifications were developed by the WAP Forum, founded in 1995, around the Wireless Markup Language (WML), which was based on XML. The primary target of WAP was the delivery of text-based information, with the capability to handle monochromatic images as well. The success of the i-mode services is believed to stem from ready applications from participating companies, such as rail and air ticket booking, e-mail, music downloads, and shopping. WAP was designed to work with most cellular networks, including IS-95, cdmaOne, CDMA2000, GSM, GPRS, and EDGE, as well as 3G implementations. Despite an initially promising picture, WAP failed to find strong favor among users: it required them to connect to WAP-enabled websites, while many applications developed around SMS exchanges, which turned out to be the dark horse of mobile applications. The i-mode packet data transfer service, introduced by NTT DoCoMo in 1999, used cHTML (compact HTML) rather than WAP's WML for data display.

4.9.5 China There are now three major carriers in China: China Unicom (now merged with China Netcom), China Mobile, and China Telecom. All three provide both fixed-line and mobile services. China has issued licenses for 3G services as part of the basic telecom business. China Telecom is a CDMA player; China Mobile provides services based on GSM and GPRS technologies. China Unicom had both GSM and CDMA networks and even provided dual-mode handsets; however, it has divested its CDMA business to China Telecom, consolidating the companies along technology lines. The three companies have been issued licenses for 3G as follows:

China Unicom: WCDMA
China Mobile: TD-SCDMA
China Telecom: CDMA2000

TD-SCDMA is a Chinese standard for 3G services. With the issue of licenses and the rollout of networks through 2013, over 500 million users are expected to use 3G services.

Before We Close: Some FAQs 1. What is a femtocell? A femtocell is a small base station used in a residential or small business environment. It connects to the service provider via a broadband medium (DSL, cable, or fiber) and can support three to five mobile phones. The femtocell is a 3GPP standard. Vodafone UK provides femtocells to customers for about $250; T-Mobile (Germany) is also offering femtocells. 2. What is the FCC policy on dual-SIM phones? Are any dual-SIM phones available in U.S. markets? The FCC approves phones with dual SIMs, provided that the handset has a valid international mobile equipment identity (IMEI) number. An example is the DUET phones (E-Tech). 3. How do EV-DO USB modems provide voice calls when EV-DO is a data-only mode? They use EV-DO with backward compatibility to IS-95A/B and 1xRTT. The voice provided is based on CDMA standards, i.e., 13K QCELP and 8K EVRC. 4. What is a locked phone? A phone can be locked to a SIM card when a carrier would like the handset to be used only on its network. The lock can be removed by providing an unlock code. 5. Why do we not get a uniform data rate while using EDGE? EDGE data connections are based on all users sharing the capacity of a data bearer; the data rate varies with instantaneous usage by customers. 6. Do mobile operators permit VoIP on their networks? If not, how do they block it? Most mobile operators do not permit VoIP on mobile phones via 3G; they block services such as Skype. Some operators, such as T-Mobile and O2 Germany, have begun permitting VoIP calls for customers on premium plans. VoIP is blocked by firewalling the ports used by VoIP software such as Skype (e.g., port 80 [VoIP proxy over HTTP] and port 5060 [SIP]). The FCC has recently fined some broadband companies for blocking VoIP traffic.

CHAPTER 5

Overview of Technologies for Mobile TV Everything should be made as simple as possible but not simpler. Albert Einstein

5.1 Why New Technologies for Mobile TV? In October 2003, Vodafone KK of Japan introduced a mobile phone with an analog TV tuner, the V601N from NEC. It could be used to receive analog NTSC broadcasts from local stations. New offerings followed, such as the Sharp V402SH and V602SH. These phones have a QVGA (320 × 240 pixels) display capable of 30 frames per second, i.e., the normal telecast frame rate. The tuner in these phones is designed for NTSC reception; similar handsets are available for receiving PAL broadcasts. Pocket PCs are available with the Windows Mobile OS and a secure digital input output (SDIO) tuner for PAL and NTSC. In Europe, many handsets can receive DVB-T. The question now is very simple: if mobile phones can receive analog terrestrial broadcast stations, just as they do FM stations, why do we need new technologies for mobile TV? The answer to this question holds the key to the new technologies that have emerged for mobile TV.

Figure 5.1: Mobile phones with analog tuners.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00005-9

5.1.1 Signal Strength of Terrestrial TV Broadcasts

Terrestrial TV, whether analog or digital, requires TV sets to support an external antenna for reception. The analog tuner-based mobile TV handsets have an antenna that needs to be designed for the VHF band (channels 2–13) and the UHF band (channels 14–83) and thus must cater to wavelengths of 35 cm to 5.5 m. This implies the use of the earphone leads (wires) as de facto antennas for the FM/VHF band. In general, a strong signal is required for reception of analog broadcasts, and reception can vary by location. Inside buildings, the phone must be connected to an RF socket fed by an external antenna. The quality of reception may also depend on the orientation of the mobile phone and on whether the user is moving. The transmissions are essentially designed for stationary rather than mobile reception. The effects of fading due to transmission are also prominent.1

5.1.2 TV Transcoding to Mobile Screens

Because the transmissions are in standard analog formats, the decoders generate the decoded signal in 720×480 (NTSC) or 720×576 (PAL) resolution, which needs to be converted to QCIF (176×144) or QVGA (320×240) format. This transcoding demands processing power from the cellular chips and creates a drain on the battery.
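The scale of the mismatch can be seen with simple arithmetic (an illustrative sketch; only the resolutions are from the text):

```python
# Illustrative arithmetic: how much decoded picture data is discarded when
# an NTSC frame is downscaled to a QVGA handset display (resolutions from the text).
src_w, src_h = 720, 480   # decoded NTSC frame
dst_w, dst_h = 320, 240   # QVGA handset display

scale_x = src_w / dst_w   # horizontal downscale factor
scale_y = src_h / dst_h   # vertical downscale factor
pixel_ratio = (src_w * src_h) / (dst_w * dst_h)

print(f"downscale {scale_x:.2f}x horizontally, {scale_y:.1f}x vertically")
print(f"{pixel_ratio:.1f}x more pixels are decoded than displayed")
```

Every one of those discarded pixels still had to be decoded, which is where the processing and battery cost comes from.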

5.1.3 Mobile Handset Battery Life

The technologies of a normal TV are designed for a wall-socket-connected receiver, for which power is not a major constraint. Using conventional tuners and decoders, as in analog sets, limits the battery life of the phone to around 1 to 2 hours, even with the new advanced batteries. This is due to current analog tuner technologies, which require 200–800 mW. (As you will see later, mobile TV technologies such as DVB-H reduce this to about 90 mW.) Also, the frame rate of NTSC transmissions is 30 fps, which, owing to the screen characteristics, leaves a streaking trace on the screen of a mobile phone, for which the desirable refresh rate is 50–60 fps.
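A back-of-the-envelope estimate shows why the tuner's draw matters (the tuner power figures are from the text; the battery capacity is an assumed, typical value for the period):

```python
# Rough runtime estimate for the tuner alone, ignoring the display, CPU,
# and radio. Tuner figures (500 mW analog, 90 mW DVB-H) are from the text;
# the 3.7 Wh battery (3.7 V x 1000 mAh) is an assumption.
BATTERY_WH = 3.7

def tuner_runtime_hours(power_mw):
    """Hours the battery could feed a front end drawing power_mw milliwatts."""
    return BATTERY_WH / (power_mw / 1000.0)

print(f"analog tuner @ 500 mW: {tuner_runtime_hours(500):.1f} h")
print(f"DVB-H tuner  @  90 mW: {tuner_runtime_hours(90):.1f} h")
```

The whole handset draws far more than the tuner alone, which is why the observed battery life with analog tuners falls to the 1–2 hours quoted above.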

5.1.4 Mobile vs. Stationary Environment

Mobile phones are meant to be used on the move, which means use in cars or trains traveling at up to 200 km/hour or even more. Even with advanced internal antennas, mobility means ghost images due to Doppler effects and transmission fading for analog TV reception.
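The Doppler shift involved is easy to quantify (the 200 km/h speed is from the text; the 700 MHz UHF carrier is an assumed example frequency):

```python
# Doppler shift f_d = (v / c) * f_carrier for a receiver moving toward
# the transmitter. Speed from the text; carrier frequency is an assumption.
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(speed_kmh, carrier_hz):
    v = speed_kmh / 3.6              # km/h -> m/s
    return v * carrier_hz / C

shift = doppler_shift_hz(200, 700e6)
print(f"~{shift:.0f} Hz shift at 200 km/h on a 700 MHz carrier")
```

A shift of this size smears closely spaced OFDM subcarriers into one another, which is why the mobile variants of the broadcast standards adopt special modes to tolerate it.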

1. Despite the various disadvantages of analog or ATSC/DVB-T digital tuners for mobile TV, these are still popular in receiver devices, particularly for the reception of freely available local transmissions.

The fact remains that terrestrial TV transmissions, whether analog or digital, are meant for large screens and are inherently inefficient when displayed on mobile devices, which have limitations on display size, refresh rate, and power consumption. There is also a need for handsets to be usable in mobile environments, at speeds that can reach 200 km/hour and above. Further, a mobile user may leave the reception area of the local TV transmitter; the technology of mobile TV should support reception across large regions.

5.2 What Does a Mobile TV Service Require?

It is therefore quite evident that TV reception on mobile devices requires new technologies. The requirements of any technology that can support transmission of mobile TV are thus:

● Transmission in formats ideally suited to mobile TV devices, e.g., QCIF, CIF, or QVGA resolution with high-efficiency coding
● Low power consumption
● Stable reception with mobility
● Higher built-in resilience against error rates in reception
● Clear picture quality despite severe loss of signals due to fading and multipath effects
● Mobility at speeds of 250 km/hour or more
● Ability to receive over large areas while traveling

None of the technologies that have been in use, such as analog TV or digital TV (Digital Video Broadcast for Television [DVB-T] or ATSC), can provide these features without certain enhancements in terms of robust error correction, better compression, advanced power-saving technologies, and features to support mobility and roaming. This has led to the evolution of technologies designed specifically for mobile TV. The evolution of technologies has also been driven by the service providers and operators in the individual fields of cellular mobile services, broadcast services, and broadband wireless, each of which has moved toward extending the scope of its existing networks to include mobile TV as an additional service. For example:

● Mobile operators have launched mobile TV based on 3G networks or networks based on HSDPA and EV-DO.
● Broadcasters have launched mobile TV on terrestrial broadcast technologies for handhelds derived from DVB-T, ATSC, or ISDB (Integrated Services Digital Broadcasting) terrestrial TV networks.
● Other operators used digital audio broadcasting (DAB) and moved in with extensions of the DAB services to an evolved standard, DMB (Digital Multimedia Broadcast), with both satellite and terrestrial transmission variants.


● Broadband wireless providers, such as those providing mobile WiMAX, have enabled their networks for carriage of mobile TV. Mobile broadcasting using WiMAX is now a fast-developing technology.
● Satellite/terrestrial hybrid networks have also evolved in order to deliver mobile TV more efficiently in areas that fall outside the coverage of terrestrial transmitters.

Figure 5.2: Modes of providing mobile TV services.

We will briefly preview the general features of each of these modes of service in this section, with a deeper technology-based discussion in subsequent sections of this chapter.

5.2.1 Mobile TV Services on Cellular Networks

Mobile operators have been attempting to provide TV video streaming since the introduction of 2.5G technologies (EDGE, CDMA2000), which permitted data transmission. The aim was to provide video and audio streaming/download services similar to what could be used over the Internet using IP streaming and file downloading. The video clips that could be transferred were generally short (on the order of a few seconds). Where streaming services

were available, these generally offered jerky video (due to the low frame rates) and occasional freezes (due to the network and transmission conditions and the need for buffering). As the networks migrated to 3G, the data rates increased, and support of bit rates such as the 128 kbps needed for video and audio delivery became possible. This led to the offering of live video channels by the 3G carriers at speeds of 128 kbps or more, which, when coupled with efficient coding under MPEG-4, could provide an acceptable video service. What was then required was to standardize the encoding and protocols to provide video services uniformly across networks, receivable on a wide range of handsets. This led to an effort under the 3G partnership forums to standardize the file formats that could be transferred (i.e., how the audio and video would be coded), the compression algorithms that could be used (MPEG-2, MPEG-4, or MPEG-4 AVC/H.264), and the procedure for setting up streaming sessions. As a result of this standardization, mobile TV services are available almost universally on 3G networks.

5.2.2 Mobile TV on Terrestrial Broadcast Networks

In the meantime, the TV broadcasters, who had been left out of the quest by mobile operators to provide mobile TV services, looked at extending their own networks for the rollout of mobile TV. The obvious choices were the terrestrial broadcasting networks, which broadcast in the VHF and UHF bands. Most of these networks in Europe, the United States, Japan, and other countries are migrating to digital TV, which conserves bandwidth by packing eight or more standard-definition TV programs into the same frequency slot that was occupied by only one analog carrier. The concept of mobile TV using terrestrial broadcasting networks is somewhat similar to that of the FM radio receivers built into mobile handsets. There, the radio reception is from the FM channels and does not use the capacity of the 2G or 3G network on which the handset may be working; the handsets have a separate built-in tuner and demodulator for the FM signals. In fact, the receiving device does not need to be a mobile phone at all. It can be a car-mounted receiver or a handheld video player. For the purpose of carrying mobile TV, the TV broadcasting community found it expedient to modify the digital TV transmission standards so that mobile TV could "piggyback" on terrestrial transmissions. This was done by bringing out enhancements to the standards, specifying precisely how a mobile TV signal would be carried. Different techniques for carriage of the "mobile TV" stream have been used under different terrestrial broadcasting standards.

Injecting the "mobile TV" stream in the MPEG-2 transport stream
Both ATSC and DVB-T are based on the use of the MPEG-2 transport stream, which acts as a conduit for many types of content, each of which can be encoded in many different compression


formats such as MPEG-2 or MPEG-4. These content types are defined by elementary streams and can include SDTV, HDTV, audio, subtitling, EPG, and data. The standards that evolved as a result of this provision for carriage of mobile TV are known as DVB-H (DVB-Handheld) for DVB-based systems and ATSC Mobile DTV for ATSC-based systems. DVB-T is the standard used for digital TV broadcasting in Europe, Asia, and the Middle East, and ATSC is used primarily in the United States, Canada, Taiwan, and Korea. Japan uses a different system for terrestrial broadcasting, known as Integrated Services Digital Broadcasting (ISDB-T).

Assigning a portion of terrestrial carriers for mobile TV
An alternative approach to multiplexing the mobile TV signals in the MPEG-2 transport stream is to use a part (or the whole) of the terrestrial carrier's bandwidth for mobile TV broadcasting. This approach is used in ISDB-T, which uses OFDM-based subcarriers. The total number of OFDM carriers is split into 13 segments. One or more of these segments can be used to carry SDTV, HDTV, or mobile TV. In this case, the mobile TV stream need not be multiplexed with other TV channels. Mobile TV needs only one segment, so this is sometimes called 1-Seg broadcasting. A similar approach is used in MediaFLO, except that the whole of the carrier is used for carrying mobile TV channels, which are encoded with a layered coding scheme that enables higher-bit-rate receiving devices to get higher quality. DVB-H can, in principle, also be used for carriage of only mobile TV signals without any SDTV or HDTV being present.

What does the "mobile TV" stream contain?
We have talked about the terrestrial broadcast standards being modified so as to add mobile TV capabilities and carry a stream of "mobile TV" signals. What does this mobile TV stream contain? The answer is fairly straightforward.
The mobile TV stream is generated to enable its carriage over "not-so-robust" terrestrial networks directly to mobiles and to meet requirements specific to mobiles, such as low power consumption. The following is a summary of the attributes of a mobile TV stream:

(i) The mobile TV stream contains TV or video signals encoded for mobile screen resolutions (e.g., QVGA, QCIF).
(ii) The encoding is done using high-efficiency encoders such as MPEG-4 or H.264 for video and AAC, AMR-WB, or aacPlus for audio. The encoding schemes to be used are specified in the relevant standards, such as DVB-H or ATSC Mobile DTV. However, use of other formats such as VC-1 is not uncommon, and the protocol headers carry information on the content type carried in the stream.

(iii) The encoded TV streams are multiplexed in a "mobile TV multiplex." This multiplex is separate from the main transport stream (such as MPEG-2).
(iv) The mobile TV stream requires additional transmission resilience, as it will not be received using rooftop antennas, as is the case for the DVB-T or ATSC signals on which it is carried. This error resilience is built in using an additional layer of very robust Turbo, FEC, or convolutional coding, which gives a C/N gain of 4–6 dB at the receiver.
(v) A separate electronic service guide (ESG) is transmitted for mobile TV.

As an example, consider a terrestrial TV transmitter with 100 kW transmitted power and a 10 dBi antenna, giving an effective radiated power (ERP) of 1 MW, with a tower height of 200 m. Figure 5.3 shows how the requirements for terrestrial ATSC reception contrast with those for mobiles:

● Field strength at a digital TV receiver: ERP − path loss + 10 dB (antenna gain)
● Field strength at a mobile receiver: ERP − path loss − 10 dB (suburban terrain loss) − 10 dB (antenna gain)

For example, while a DTV customer uses an external antenna at a height of 10 m (typical) with a gain of 10 dB, a mobile user is at about 1 m height, and the mobile handset antenna has a gain of −8 dBi to −10 dBi. The suburban environment of buildings also presents a loss of 10–12 dB, depending on the field conditions. This means that a mobile handset must make do with a signal that is about 45 dB lower than at a terrestrial receiver (40 dBu vs. 85 dBu).
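The ~45 dB gap can be tallied from these terms (an illustrative budget; the antenna and building figures are from the text, while the receive-height loss is an assumed value chosen to complete the total):

```python
# Illustrative link-budget tally: why a handset sees a far weaker signal
# than a rooftop DTV antenna fed from the same tower.
dtv_antenna_gain = 10     # dB, rooftop antenna (from the text)
mobile_antenna_gain = -8  # dBi, handset antenna (text: -8 to -10 dBi)
building_loss = -11       # dB, suburban building loss (text: 10-12 dB)
height_loss = -16         # dB, 1 m vs 10 m receive height (assumption)

delta_db = (mobile_antenna_gain + building_loss + height_loss) - dtv_antenna_gain
print(f"mobile receiver ~{abs(delta_db)} dB below the rooftop DTV receiver")
```

The exact split between the loss terms varies with the environment, but the total matches the roughly 45 dB disadvantage quoted above.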

Figure 5.3: A comparison of field strengths at a DTV receiver and a mobile TV receiver.

The significantly lower signal strength seen by a mobile receiver requires very robust error coding to be built into the mobile TV signals, as opposed to those for a DTV receiver, even though they may be transmitted from the same tower. A transmitter may cover only about


1000 km² for mobile receivers, as opposed to 75,000 km² for DTV. It also implies a number of additional repeaters or transmitters, which need to be provided for mobile TV even in an area that was well served by a DTV transmitter for ATSC (or DVB-T). To increase the signal strength, the transmitters for mobile TV may operate in a single-frequency network (SFN) configuration, which requires all transmitters to be synchronous and operate off a common clock for timing. The number of additional transmitters required for mobile TV depends on the C/N threshold that the physical layer technology provides, and there are some major differences between technologies such as ATSC Mobile DTV, DVB-H, and MediaFLO. A number of different signal propagation models are used in the industry for estimating signal strength; ITU-R Recommendation P.1546 is commonly used for field strength estimation and network planning. The planning of transmitter networks is also governed by country-specific regulations on frequency planning.

(vi) The time division multiplexing scheme of mobile TV uses a "time slicing" mechanism whereby a specific mobile TV channel is first buffered and then transmitted over a very short portion of time, enabling the receiver to go into a sleep mode the rest of the time and save power.
(vii) The time slicing mechanism and the transmission of mobile TV signals require a large amount of buffering and transmission of data in bursts. The DVB mechanism of packetized elementary streams (PES) is not well suited to such transmission, owing to buffering and delays. Hence a mechanism of IP datacasting is used, in which layer 3 packets are carried using a feature called multiprotocol encapsulation (MPE) in DVB-H. IP datacasting is also used in other standards, such as ATSC Mobile DTV and MediaFLO.
(viii) A mechanism to carry the EPG information and enable fast switching between channels. This is done by adding special signaling features to the transport stream frame, such as transmission parameter signaling (TPS) in DVB-H.
(ix) A mechanism to minimize the effects of Doppler shifts on the subcarriers used in terrestrial transmission, such as COFDM in DVB-H, by specifying a special mode for modulation (e.g., the 4K mode in DVB-H instead of the normal 8K carriers).

This is quite a long list of "things to do" in order to carry mobile TV using existing terrestrial transmitters! But all of these are required to cater to the requirements of a mobile receiver: a low-gain antenna, mobility at high speeds, and high demands on low power consumption. All the terrestrial transmission standards deal with these requirements in their own way. We will go into the depth of each of these in dedicated chapters on DVB-H, DMB, and MediaFLO, amongst others.
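The power saving from time slicing follows directly from the duty cycle; a minimal sketch, with burst and cycle durations as assumed illustrative values (they are not specified in this section):

```python
# Time slicing: the tuner is powered only while its channel's burst is on air.
# Burst and cycle durations below are assumed illustrative values.
burst_ms = 200    # on-time per cycle for one channel (assumption)
cycle_ms = 2000   # full time-slice cycle across all channels (assumption)

duty_cycle = burst_ms / cycle_ms
print(f"front end active {duty_cycle:.0%} of the time "
      f"-> roughly {1 - duty_cycle:.0%} tuner power saving")
```

The saving scales with the number of channels sharing the multiplex: the more channels, the shorter each channel's burst relative to the cycle.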


5.2.3 Mobile TV on Satellite–Terrestrial Hybrid Systems

Terrestrial mobile TV services are potentially quite attractive, thanks to the broadcast mode of transmission, which saves valuable 3G spectrum. However, the terrestrial transmitting networks cannot reach everywhere and are limited by the line of sight of the main transmitters (and repeaters, if any). 3G networks, on the other hand, traverse the length and breadth of most countries and can provide uninterrupted viewing for individuals traveling anywhere in the coverage area.

Figure 5.4: Typical coverage of mobile and digital terrestrial transmissions.

In order to overcome these shortcomings of terrestrial transmission, satellite-based transmission of mobile TV has been implemented using very high-powered satellites that can deliver signals directly to mobiles. An example of such a system is the S-DMB (satellite) mobile TV service available in South Korea. In urban areas, where a direct line of sight is not possible, terrestrial repeaters are used. Such satellite or hybrid satellite–terrestrial systems can be based on different technologies.

5.2.4 Mobile TV Using Broadband Wireless Networks

Broadband wireless networks such as Wi-Fi are used extensively today by all devices, fixed or mobile. These enable Internet applications such as video streaming, downloads, and web access on mobile devices as well. However, Wi-Fi has limited application outside of "hotspots." Mobile WiMAX (IEEE 802.16e-2005) is the technology designed to provide broadband access with QoS classes across wider coverage areas, such as a city. WiMAX networks have witnessed strong growth recently and share the stage with 3G technologies in providing


Figure 5.5: Satellite and terrestrial hybrid mobile TV services, current and forthcoming.

broadband wireless access. Mobile WiMAX can provide bit rates of 2–20 Mbps over a 5 km radius, based on the bandwidth of the carrier.2 Mobile WiMAX has opened up a new dimension in the use of mobile multimedia services owing to:

● The majority of technologies for delivery of mobile multimedia are based on IP unicast or multicast. WiMAX provides both unicast and multicast connectivity (the latter called multicast and broadcast service, or MBS).
● The WiMAX technologies provide an alternative medium for the delivery of IP-based multimedia and are seen as potentially useful in the constrained environment of the 3G and terrestrial spectrum.
● Mobile phones have started providing Wi-Fi (802.11b), WiMAX, or WiBro interfaces (such as the Samsung i730 for Wi-Fi and the Samsung M800 for WiMAX in Korea).

Applications are available that can provide mobile TV over WiMAX or wireless broadband with global compatibility:

● Mobile WiMAX provides connections with guaranteed QoS classes. Hence, if a connection is set up for video streaming, the guaranteed bit rates will be maintained despite degradation of the transmission environment, by changing the modulation scheme and assigning additional subcarriers.

2. For more details on WiMAX systems and mobile broadcasting, refer to Mobile Broadcasting with WiMAX (Focal Press, 2008) by Amitabh Kumar.

● Mobile WiMAX, having been designed for mobile devices, has features for power saving and roaming across base stations without interruption in reception.

WiMAX-enabled phones have been introduced by multiple operators (e.g., TIM Italy, Sprint Clearwire, "3" in the United Kingdom). They are characterized by constant access to the Internet, the ability to place video calls to one or multiple recipients, and streaming video and audio services. Multiple handheld terminals (such as the Samsung "Mondi"), smartphones, and PDAs with WiMAX capabilities, including the Samsung i730, M800, and H1000, are available for use on WiMAX networks. WiMAX-enabled devices differ from mobile phones in many respects. For example, they are not limited to 3GPP file formats and 3GPP streaming services; instead, they bring the full-screen Internet capabilities available on desktops to mobile devices.

5.2.5 Geographical Distribution of Mobile TV Technologies

Mobile TV services also show significant geographical variation in terms of the standards used, even though a large number of countries in Asia and Europe have opted

Figure 5.6: Geographical distribution of mobile TV technologies (the term 3G is used to include 3G UMTS and CDMA2000, as well as evolution technologies such as HSPA and EVDO).


for the use of DVB-H networks. At the same time, Korea presents the world's most successful implementation of mobile TV, based on T-DMB, with about 16 million customers. European countries such as Italy and Germany also have T-DMB networks. The United States is now committed to the ATSC Mobile DTV standard together with MediaFLO, networks for which are already operational.

5.3 Mobile TV Using 3G Technologies

With a brief preview of the four types of networks used for providing mobile TV (mobile 3G networks, terrestrial broadcasting, satellites, and broadband networks) in the previous section, we now review the specifics of these technologies.

Figure 5.7: Technology overview: mobile TV.

5.3.1 How Are 3G Platforms Used to Provide Mobile TV?

3G networks and their evolved versions, i.e., HSDPA and EV-DO, provide data rates from 128 kbps to upwards of 2 Mbps per user, which is adequate to provide streaming mobile TV

services to users on the network. Mobile TV services are now common on 3G platforms, as both streaming live TV and video on demand.

Unicast and multicast
There are two approaches to delivering content to a mobile device: the broadcast (or multicast) mode and the unicast mode. In the broadcast mode, the same content is made available to an unlimited number of users on the network. The broadcast mode is thus ideal for the delivery of broadcast TV channels with universal demand.

Figure 5.8: Broadcast and unicast technologies for mobile TV.

In unicast mode, each user sets up a streaming connection via RTSP and receives his or her own video stream. If 100,000 users in a given area are watching a channel, the streaming server must serve 100,000 separate instances of the same TV channel. This has some advantages, as each user can signal the bit rate at which his or her device can operate. However, this is more than offset by the load on the cell's radio capacity and the consequent degradation of service. Multimedia Broadcast and Multicast Services (MBMS, defined in Release 6 of 3GPP) have two modes:

● The multicast mode involves transmission from the source to all the devices in a multicast group. These devices can lie in different cell areas or be mobile. The multicast transmissions are not delivered to all recipients; rather, delivery is selective.
● The broadcast mode involves the transmission of multimedia data as packets through the bearer service to all recipients in a given area.
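The unicast scaling problem is plain arithmetic (the 100,000-viewer figure is from the text; the 128 kbps per-stream rate is the one quoted earlier in the chapter):

```python
# Aggregate network load for one popular channel: unicast vs broadcast.
viewers = 100_000        # simultaneous viewers in an area (from the text)
stream_kbps = 128        # per-user stream rate quoted earlier in the chapter

unicast_gbps = viewers * stream_kbps / 1e6
print(f"unicast: {unicast_gbps:.1f} Gbps aggregate; "
      f"broadcast: one {stream_kbps} kbps stream, regardless of audience")
```

The aggregate unicast load also lands on the radio cells serving those viewers, which is the capacity degradation described above.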


Examples of MBMS services are:

● O2 trials in the UHF band.
● The TDtv services of IPWireless. TDtv operates in the universal unpaired (TDD) 3G spectrum bands that are available across Europe and Asia at 1900 MHz and 2010 MHz.

At present, most 3G-based mobile TV services use the unicast mode, as the technology for multicasting (MBMS) is yet to be implemented in commercial-scale networks. MBMS is discussed in greater detail in Chapter 6.

3G-GSM-based and CDMA-based networks
3G networks are composed of two streams: the 3G-GSM-evolved networks, which have been standardized under the 3GPP, and the 3G-CDMA-evolved networks, standardized under the 3GPP2. Both the 3G-GSM-evolved networks and the CDMA-evolved networks support unicasting, broadcasting, and multicasting of content to be delivered as mobile TV. MBMS is used for multicasting under the 3GPP framework; for CDMA networks under 3GPP2, the Broadcast and Multicast Service (BCMCS) is used.

3G packet streaming services
Mobile TV services are provided on 3G networks by the following process:

● Encoding of TV signals using encoders that meet the specifications of 3GPP (3GPP2 in the case of CDMA-based networks)
● Application of DRM to the signals in the case of paid TV content
● Providing a website that gives details of the channels available
● Providing a streaming server that can set up streaming connections per the specifications of 3GPP
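The last step, session setup, follows the standard RTSP method sequence. A minimal sketch of the requests a streaming client issues (the URL, track name, session ID, and port numbers are hypothetical):

```python
# Sketch of the RTSP request sequence used to start a streaming session.
# The server URL, track name, session ID, and ports are hypothetical.
def rtsp_request(method, url, cseq, extra_headers=""):
    """Build a bare RTSP/1.0 request line plus headers."""
    return f"{method} {url} RTSP/1.0\r\nCSeq: {cseq}\r\n{extra_headers}\r\n"

url = "rtsp://example.com/mobiletv/channel1"   # hypothetical stream URL
session = [
    rtsp_request("DESCRIBE", url, 1, "Accept: application/sdp\r\n"),
    rtsp_request("SETUP", url + "/trackID=1", 2,
                 "Transport: RTP/AVP;unicast;client_port=4588-4589\r\n"),
    rtsp_request("PLAY", url, 3, "Session: 12345678\r\nRange: npt=0-\r\n"),
]
for req in session:
    print(req.splitlines()[0])   # DESCRIBE, SETUP, PLAY request lines
```

The server's DESCRIBE response carries an SDP body naming the codecs in use, after which media flows over RTP on the negotiated ports.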

An example of 3G-based mobile TV: MobiTV
MobiTV is perhaps the single best example of a mobile TV service over 3G networks. MobiTV provides over 50 popular channels live from broadcasters, including CNN, CNBC, ABC News, Fox News, ESPN, The Weather Channel, and Discovery, with many others continually added to the list. It provides its services through a number of operators in many countries with 3G networks. These include:

● United States: Sprint, AT&T, Verizon, T-Mobile
● Mexico: Telcel
● Peru: Movistar
● Canada: Bell, Rogers, TELUS


Figure 5.9: Example of 3G mobile TV: MobiTV.

The service grew to include over 6 million users within four years of launch. MobiTV has recently demonstrated MixTV, which is a combination of free and on-demand pay channels. The on-demand services can also be provided using Mobile WiMAX offerings of Sprint-Clearwire.

5.3.2 More Examples of 3G Mobile Services and Networks

Another example of a content aggregator for mobile TV is the GoTV network. It offers content from ABC, Univision, and Fox Sports, as well as original programming produced specifically for mobile phones. Its services are available on the Sprint Nextel, AT&T, and other wireless networks. The delivery networks include Wi-Fi and WiMAX networks. Channels specifically launched for mobile networks include GoTV, SportsTracker, Hip Hop Official, and Univision. Verizon V CAST is a live TV streaming service from Verizon available on its CDMA2000 network, and it has proved quite popular for downloads of songs, music, news, and cartoons. Sprint TV Live! offers Sprint PCS Vision subscribers over 20 channels of continuously streamed content. In the United Kingdom, British Sky Broadcasting Group PLC's mobile television tie-up with Vodafone Group PLC has made a number of channels available across the EU. BSkyB has over 8 million customers on its satellite TV platform.


Figure 5.10: Live mobile TV in the United Kingdom. (Courtesy of BBC)

5.3.3 Characteristics of TV Services Using 3G Networks

Mobile TV channels provided using 3G networks are in many ways unique and different from those provided using terrestrial broadcast technologies. Some of these differences are listed in the following sections.

Interactivity
Multimedia services offered on 3G networks have traditionally been focused on the bidirectional nature of the mobile networks and include services such as video calling, video conferencing, instant chats, sharing of pictures, and downloading of music. They also include interactive applications such as video on demand, mobile commerce applications, betting, auction and trading services, and exchange of user-generated content. In some cases, alternatives such as WiMAX, WiBro, and wireless LANs have also been planned as back channels. In the case of broadcast-mode networks (such as terrestrial broadcasting), the operators are much more focused on the provision of unidirectional broadcast-mode TV to very large audiences. Nevertheless, the operators of such broadcast networks (traditionally broadcasters) have recognized that there is an underlying communication capability in the mobile receiver, which can help provide interactive applications, and such applications are now quite common.

Wide availability
3G networks now cover large parts of the United States, Europe, and many countries in Asia and Africa. This makes 3G-based mobile TV available to wide audiences.

Common handsets
A big advantage for providers of 3G services is the native support that mobile phones provide for 3GPP protocols, such as those used for streaming, DRM, and encoding and decoding


Figure 5.11: 3G Coverage of AT&T in the United States in mid-2009. (Courtesy of AT&T)

of content types, and finally being able to display mobile TV. Moreover, users do not need to buy a phone with a new type of tuner just to watch mobile TV.

Channels specific to mobile TV
The candidates for such content are headlines, sports events, music, weather, fashion, and even full-length serials, as the HBO offerings have demonstrated. Examples of some broadcasters with 3G-specific channels are:

● Discovery Mobile, featuring its premium shows
● MTV, with content prepared especially for mobile sets
● HBO, which offers premium content in packages of 90 minutes especially for mobile markets
● CNBC, which prepares bulletins and headlines especially for mobile TV
● Eurosport and ESPN, whose content is also available for display on mobile sets
● Mobi4BIZ, a business channel available on MobiTV


Figure 5.12: Mobi4BIZ channel on MobiTV.

The list of such broadcasters is quite large, and it is certain that almost all major broadcasters will either offer their content directly on mobile platforms or prepare content especially for mobile TV.

5.3.4 Quick Migration to 3G+ Networks

The realization that mobile TV will be used much more intensively than envisaged when the 3G standards were finalized is leading operators to quickly roll out extensions to the 3G networks, including EV-DO and HSDPA (with extra spectrum for data). By mid-2009, over 300 HSDPA networks were already in operation in 150 countries, and over 120 more were in advanced stages of planning.

5.4 Terrestrial TV Technology Overview

Before turning to mobile terrestrial TV, it is helpful to understand the terrestrial TV broadcast environment. Terrestrial television was the earliest form of broadcast TV; PAL- or NTSC-based terrestrial broadcasts have been in vogue for over 50 years. Terrestrial broadcast transmitters use high-power transmission (in kilowatts) and are designed to reach receivers within roughly a 30 km radius. The high transmitted powers make them suitable for direct indoor reception, as opposed to satellite-based transmission, for which a line of sight is required. Analog broadcasts have been giving way to digital terrestrial broadcasts in most countries in a progressive manner, with the objective of phasing out analog broadcasts over a period. In the United States, the transition to digital TV has already been completed; in Europe, the target for the phase-out of terrestrial analog television is the year 2012. Terrestrial broadcasting uses the UHF and VHF bands, which give a total capacity of around 450 MHz in the two bands, permitting up to around 60 channels of analog TV. DVB-T, which

Table 5.1: TV and Radio Standards.

             United States         Europe   Japan   Korea
Television   ATSC                  DVB      ISDB    ATSC
Radio        HD Radio, XM/Sirius   DAB      ISDB    DMB

is the DVB standard for digital terrestrial TV, uses MPEG-2 multiplexed video and audio carriers, as does ATSC. Each channel in the UHF and VHF bands, which can carry one PAL or NTSC program, can alternatively carry six to eight digital channels using the DVB-T MPEG-2 multiplex, thus enhancing the capacity of the existing spectrum. The digitalization of television is happening primarily via the DVB-T and ATSC terrestrial broadcast technologies. The ATSC standard is used in the United States, Canada, South Korea, and other countries that have the NTSC transmission standard and follow the 6 MHz channel plan (see Table 5.1).

5.4.1 DVB-T: Digital Terrestrial Broadcast Television

DVB-T is the standard for digital terrestrial TV in Europe, Asia, and elsewhere. DVB-T uses the same spectrum as analog TV, i.e., the UHF and VHF bands in the frequency ranges of 174–230 MHz (VHF band III) and 470–862 MHz (UHF band). Each channel slot,

Figure 5.13: Terrestrial TV.


Chapter 5

which can be used for the carriage of one analog TV channel, can alternatively be used by a digital (DVB-T) carrier to carry five to eight digital channels. DVB-T uses COFDM (coded OFDM) modulation, which is designed to be very rugged for terrestrial transmission. Whereas an analog signal suffers quality degradation due to multipath transmission and reflected signals, which can cause ghost images, digital transmission is resilient to reflected signals, echoes, and cochannel interference. This is achieved by spreading the data across a large number of closely spaced subcarriers, either 2 K or 8 K (1705 subcarriers in the 2 K mode and 6817 subcarriers in the 8 K mode). A DVB-T carrier can have a flexible bit rate of 4.98 to 31.67 Mbps; a typical example is a carrier rate of 19.35 Mbps with Reed-Solomon (RS) coding 188/204 and an IF bandwidth of 6.67 MHz. DVB-T also uses frequency interleaving, in addition to the large number of carriers, to overcome multipath fading. The carrier modulation is QPSK, 16 QAM, or 64 QAM. DVB-T can also be used for transmission to mobile devices with appropriate tuners, but the use of 64 QAM with 8 K subcarriers is limited to moving speeds of less than 50 km/hour. This is because the small symbol duration limits the maximum delay of accepted echoes due to reflection and Doppler effects.
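The range of DVB-T bit rates quoted above (4.98 to 31.67 Mbps) follows from the modulation, code rate, and guard interval. The sketch below reproduces the standard calculation for an 8 MHz channel; the data-carrier counts (1512 in 2 K mode, 6048 in 8 K mode) and useful symbol durations are those of the DVB-T system.

```python
# Net DVB-T bit rate from modulation order, convolutional code rate,
# and guard interval, for an 8 MHz channel (illustrative sketch).

DATA_CARRIERS = {"2k": 1512, "8k": 6048}        # payload subcarriers per symbol
USEFUL_SYMBOL_S = {"2k": 224e-6, "8k": 896e-6}  # useful symbol duration Tu

def dvbt_net_bitrate(bits_per_carrier, code_rate, guard_fraction, mode="8k"):
    """Return the net (post Reed-Solomon 188/204) bit rate in bits/second."""
    tu = USEFUL_SYMBOL_S[mode]
    ts = tu * (1 + guard_fraction)               # total symbol incl. guard interval
    raw = DATA_CARRIERS[mode] * bits_per_carrier / ts
    return raw * code_rate * 188 / 204           # code rate and RS overhead

# The extremes quoted in the text:
lo = dvbt_net_bitrate(2, 1/2, 1/4)     # QPSK, rate 1/2, guard 1/4   -> ~4.98 Mbps
hi = dvbt_net_bitrate(6, 7/8, 1/32)    # 64 QAM, rate 7/8, guard 1/32 -> ~31.67 Mbps
print(f"{lo/1e6:.2f} Mbps to {hi/1e6:.2f} Mbps")
```

The same function with 64 QAM, rate 3/4, and a 1/32 guard interval gives the 27.14 Mbps figure commonly used for fixed DVB-T reception.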

5.4.2 TV Transmission Technologies

It is also useful to understand common transmission technologies for digital TV signals, which are used across different standards, for mobile TV as well as SDTV/HDTV. These include the use of COFDM as the physical layer and of single-frequency networks (SFNs).

COFDM

The term COFDM stands for "coded OFDM." COFDM is the physical layer transmission format for DVB-T and DVB-H; OFDM is used in MediaFLO, WiMAX, and ISDB-T. The physical layer transmission in both COFDM and OFDM is orthogonal frequency division multiplexing, which involves the transmission of a large number of subcarriers. In real-life transmission environments, multipath propagation and echoes from objects lead to the signals arriving at the destination in a time-delayed fashion. These signals suffer frequency-selective fading as a result of the multipath propagation effects. When a carrier is used to carry high data rates (i.e., a short symbol time), the received signals have enough delay spread to fall into the slots of other symbols, thereby causing intersymbol interference. In the case of single-carrier modulation, this type of propagation limits the data rates that can be used in non-line-of-sight (NLOS) environments. OFDM is based on the use of a large number of carriers spread across the allocated bandwidth, with each carrier modulated at a proportionately lower data rate than would be the case with a single carrier.


Figure 5.14: Intersymbol interference in single-carrier operations.

Example of Using Multiple Carriers for Data Transmission

As an example, a bandwidth of 10 MHz may be used with 1024 (1 K) carriers, each carrying roughly 1/1000 of the data rate carried by a single carrier. The lower data rate per carrier increases the symbol time proportionately; in the case of 1 K carriers, by about 1000 times. Using a data rate of 10 Mbps with QPSK coding (2 bits per symbol) gives a symbol rate of 5 Msymbols/sec, or a time per symbol of 0.2 μs. At the speed of light, a reflecting object in an urban environment (typically 1 km away) generates a round-trip delay of about 6.7 μs; such a reflected signal would be completely out of sync with the direct signal. However, if 1 K carriers are used, each carrier carries only 5 ksymbols/sec and the symbol time is about 200 μs. The delay of 6.7 μs is then less than 1/30 of the symbol duration.
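The numbers in this example can be checked directly. The short sketch below compares the symbol time of a single 10 Mbps QPSK carrier against the same stream spread over 1024 subcarriers, and relates both to the echo delay from a reflector about 1 km away:

```python
# Worked numbers from the example: one carrier versus 1024 OFDM subcarriers.

C = 3e8                    # speed of light, m/s
data_rate = 10e6           # total data rate, bits per second
bits_per_symbol = 2        # QPSK
n_carriers = 1024

single_symbol_t = bits_per_symbol / data_rate    # 0.2 microseconds
multi_symbol_t = single_symbol_t * n_carriers    # ~205 microseconds (~200 us)

# Reflector ~1 km away: the echo path is ~2 km longer than the direct path.
echo_delay = 2 * 1000 / C                        # ~6.7 microseconds

# Single carrier: the echo spans ~33 symbols -> severe intersymbol interference.
# 1024 carriers: the echo is only ~1/30 of one symbol -> easily absorbed.
print(echo_delay / single_symbol_t, echo_delay / multi_symbol_t)
```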

The technique of increasing the number of carriers evidently helps in making the signals robust against multipath propagation.

Single-frequency networks (SFNs)

Another key element of frequency planning is the placement of transmitters in a given service area. In most cases such an area may constitute either a city (with outlying suburban communities) or a region in which an operator is licensed to use one or more frequencies. SFNs are networks of transmitters and repeaters, each of which transmits the OFDM signals in a time-synchronized manner. The signals need to be frame-accurate across transmitters


Figure 5.15: Symbol times increase in multicarrier operation.

to qualify as SFNs. The advantage of operating a network as an SFN is that signals from adjacent transmitters add to each other at the receiver rather than being a source of interference (and adding to the noise floor). This is useful in conditions where line of sight is not available (e.g., in a mobile environment). Multiple signals from the main transmitter and the repeaters result in an improved C/N as seen by the receiver. The C/N can be further improved by techniques such as multiple antennas at the receiver.

5.4.3 ATSC Standard for Terrestrial Broadcast

The ATSC standard uses a different modulation scheme called 8-level vestigial sideband (8-VSB). Typically, a data rate of 19.39 Mbps can be accommodated in a bandwidth of 5.38 MHz, including RS coding of 187/207. ATSC is an "umbrella standard," which specifies all components of the broadcast stream:

● Audio coding: Dolby AC-3 audio compression (proprietary standard used under license, ATSC A/53)
● Video: MPEG-2 video compression (ITU H.222) or MPEG-4 compression per H.264 and ATSC A/72


Figure 5.16: Single-frequency networks (SFNs) constitute an important element of frequency planning.

Figure 5.17: ATSC transport stream.



● MPEG transport stream (ETSI TR 101 890)
● Program and system information protocol, PSIP (ATSC A/65)
● DASE, the data applications software environment (ATSC A/100), with Java (JVM) and HTML standards
● Data broadcast standards: TCP/IP (ATSC A/90) and MPEG (ETSI TR 101 890)


8-VSB networks are not well suited to single-frequency networks or to high-speed reception. As an alternative, a distributed transmitter system (DTS) is used. In fact, the ATSC limit for reception in moving vehicles can be as low as 50–100 km/hour.

5.4.4 DVB-T for Mobile Applications

The digital video broadcast standard for terrestrial television (DVB-T) has proven effective in meeting more than purely stationary digital TV requirements. For example, DVB-T has been used to provide television services in public transportation, as is the case in Singapore and Taiwan, and recent receiver developments make its use possible in cars and high-speed trains. DVB-T has been adopted in Australia to provide HDTV, and in Europe and Asia to provide HDTV and multichannel standard-definition television. DVB-T-based receivers have been tested at speeds up to 200 km/hour in Germany. However, DVB-T has several drawbacks that limit its use in mobile phones, including high power consumption, transcoding requirements from standard-definition TV to the QVGA screen, and poor signal reception due to antenna limitations. These shortcomings have been addressed in the mobile terrestrial TV technologies under the DVB-H standards.

Figure 5.18: DVB-T reception in vehicles. (Courtesy of DiBcom)

5.4.5 Digital Audio Broadcasting and Digital Multimedia Broadcasting

Digital audio broadcasting (DAB), which is delivered through satellites as well as terrestrial media, has been used in Europe, Canada, Korea, and other countries and is popularly known as the Eureka-147 standard. It is a replacement for traditional analog FM transmissions and has the capability to deliver high-quality stereo audio and data through direct broadcasts from satellite or terrestrial transmitters to DAB receivers, including those installed in cars and


Figure 5.19: DAB Eureka-147 system.

moving vehicles. As DAB services have been allocated spectrum in many countries, this was seen as an expedient way to introduce multimedia broadcasting services, including mobile TV. The digital multimedia broadcasting (DMB) standard was an extension of the DAB standards incorporating the features necessary to enable the transmission of mobile TV services. DMB developments were led by Korea and have recently seen implementations in Europe. DAB involves the use of a digital multiplex. The multiplex, or ensemble, in Figure 5.19 carries a number of programs at different bit rates. DAB uses OFDM modulation with DQPSK (differential QPSK). It also uses robust error correction via a 1/4-rate convolutional code and bit interleaving. The total bandwidth of the transmitted carrier is 1.5 MHz. WARC '92 allocated satellite digital audio broadcasting spectrum in the L-band at 1452–1492 MHz. For terrestrial transmission, the VHF band (up to 300 MHz) is used. Spectrum has also been allocated in the S-band (2.6 GHz) for DAB services. DAB has four transmission modes based on the band used for transmission of the signals, with each mode using a different number of carriers, as listed in Table 5.2.

Table 5.2: DAB Transmission Modes.

                                      I               II              III             IV
Frame Duration                        96 ms           24 ms           24 ms           48 ms
No. of Radiated Carriers              1536            384             192             768
Frequency Band                        Up to 375 MHz   Up to 1.5 GHz   Up to 3 GHz     Up to 1.5 GHz
Max Transmitter Separation for SFN    96 km           24 km           12 km           48 km

In the L-band, DAB uses mode III, with 192 carriers at 8 kHz spacing. Mode III can be used up to 3 GHz. Each DAB frame in this mode has a duration of 24 ms and can carry 144 symbols of main service channel (payload) data, or 154 symbols including overheads, synchronization, and service information. The main service channel can have a number of subchannels. One symbol carries 384 bits (using the underlying multicarrier structure). One symbol per frame thus implies 384 bits every 24 ms, or 16,000 bits per second (16 kbps) of capacity. Carrying an audio service coded at 128 kbps with 1/2 FEC gives a total bit rate of 256 kbps, which requires 16 symbols per frame, as each symbol per frame gives a bit rate of 16 kbps. The frame, which carries 144 payload symbols, can thus be used for 144/16 = 9 services. The total available bit rate for various services is 128 kbps × 9 = 1.152 Mbps per 1.537 MHz spectrum slot using 1/2 FEC. DAB is used in 35 countries around the globe. Countries with DAB broadcasts include Canada, Australia, South Africa, and those in Europe and Asia, including China. DAB broadcasts can be received using a wide range of portable as well as stationary receivers.
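The capacity arithmetic above can be expressed compactly; the sketch below follows the mode III figures given in the text (384 bits per symbol, 24 ms frames, 144 payload symbols):

```python
# DAB mode III ensemble capacity, following the figures in the text.

frame_ms = 24
bits_per_symbol = 384
payload_symbols = 144            # main service channel symbols per frame

# One symbol per frame = 384 bits every 24 ms = 16 kbps of capacity.
per_symbol_kbps = bits_per_symbol / frame_ms     # 16 kbps

audio_kbps = 128                 # audio service coding rate
fec_rate = 1 / 2                 # 1/2-rate FEC doubles the on-air rate
gross_kbps = audio_kbps / fec_rate               # 256 kbps on air

symbols_needed = gross_kbps / per_symbol_kbps    # 16 symbols per service
services = payload_symbols / symbols_needed      # 144/16 = 9 services
total_mbps = audio_kbps * services / 1000        # 1.152 Mbps of audio payload
print(services, total_mbps)
```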

Figure 5.20: A portable DAB receiver.

Table 5.3: Audio and Video Codecs for DAB and DMB Technologies.

System      Audio Codecs                  Video Codecs                  Coding
DAB         MPEG-2 Layer-II               None                          Convolutional
DAB Ver 2   AAC, MPEG-2 Layer-II (MP2)    None                          Convolutional + RS
DAB-IP      Windows Media Audio (WMA9)    Windows Media Video (WMV9)    Convolutional + RS
DMB         BSAC, MP2                     H.264                         Convolutional + RS

As the original DAB standard provides for MPEG-2 Layer II audio coding, which is not very efficient, there is a move toward using DAB with AAC or WMA9 codecs (DAB version 2), or toward DAB-IP, which uses the WMA9 and WMV9 codecs for audio and video, respectively. Ofcom in the United Kingdom has urged broadcasters to work with receiver manufacturers to make receivers available for the new DAB version. Sweden has stopped expansion of its DAB network, and France has decided not to use it at all (see Table 5.3).

5.5 Mobile TV Using Terrestrial Broadcasting Networks

Mobile TV services using terrestrial broadcasting form a very important class of TV delivery to mobile devices. This is because the high-power terrestrial transmitters used can reach mobile phones, even with their small built-in antennas, and cover indoor areas. Also, the spectrum used does not need to be allocated from the 3G pool, which is highly priced and scarce. This is not to say, however, that spectrum is easily available in the VHF and UHF bands used for terrestrial broadcasting. However, even one channel slot (8 MHz) can provide 20–40 channels of mobile TV, and many countries are now focused on providing such resources. In the area of terrestrial broadcast mobile TV, three broad streams of technologies have evolved, along with country-specific variants:



● Mobile TV broadcasting using modified terrestrial broadcasting standards: DVB-T is widely being implemented for the digitalization of broadcast networks in Europe, Asia, and other parts of the world. The standard that results from its modification for handheld reception is known as DVB-H, a major standard on which many commercial networks have started offering services. Similarly, the ATSC standards have been enhanced to permit the carriage of mobile/handheld TV signals on the same transport stream; the new standard is known as ATSC Mobile DTV. ISDB-T, used in Japan for digital terrestrial TV, similarly uses an extension of the standard, called 1-Seg broadcasting, to provide broadcasting for mobile devices.
● Mobile TV broadcasting using modified digital audio broadcasting standards: The DAB standards provide a robust medium for terrestrial broadcasting of multimedia signals, including data, audio, and music, and are used in many parts of the world. These standards have been modified as the DMB standards. The advantage is that the technologies have been well tested and spectrum has been allocated by the ITU for DAB services. Terrestrial Digital Multimedia Broadcast (T-DMB) is such a broadcast standard.
● Terrestrial broadcasting using new technologies: MediaFLO is a new technology that uses IP multicasting of mobile TV channels with a special air interface called the FLO interface. MediaFLO technology is promoted by Qualcomm and is based on a 6 MHz bandwidth slot. The system uses a frequency of 700 MHz in the United States, with the radiating towers being fed source signals via satellite.
● Country-specific technologies: China uses its home-grown technology for digital terrestrial broadcasting, called DTMB. The mobile TV broadcasting standards approved by the State Administration of Radio, Film & Television (SARFT) are called CMMB (China Multimedia Mobile Broadcasting).

Following is a summary of the common terrestrial broadcast mobile TV technologies:

● DVB-H
● T-DMB
● ISDB-T (1-Seg broadcasting)
● MediaFLO
● CMMB

Figure 5.21: Mobile TV technologies. (ATSC M/H in the figure refers to ATSC Mobile DTV.)


5.5.1 Characteristics of Mobile TV Using Terrestrial Transmission

Even though terrestrial transmission may appear to be just an alternative technology to 3G-based delivery of mobile TV, there are significant differences between the two:

● Delivery to an unlimited number of users: Mobile TV can be delivered to an unlimited number of users in the coverage area of the transmitters without overloading the backend systems, as is the case for unicast mobile TV.
● Content protection: As the content is delivered using unprotected transmissions, it needs to be protected by an encryption scheme or by using a DRM. Content protection is discussed in Chapter 21.
● Requirement of tuners in mobile phones: Reception of terrestrial mobile TV requires a tuner to be built into the receiving devices, such as mobile phones. The near-universal availability of ISDB-T and T-DMB tuners in phones sold in Japan and Korea is one of the reasons for the success of these technologies. DVB-H has faced a much tougher regime due to encrypted transmissions and very few phones supporting tuners, as we will review later in the chapter on DVB-H (Chapter 8).
● Interactivity: Mobile TV is all about interactivity and targeted advertising. With terrestrial transmission, this can be supported by associating another medium for a reverse path, such as a 3G network or wireless Internet.
● Flexibility in the use of transmission formats: Unlike 3G networks, which rely on the ready availability of 3GPP decoders in all mobile phones, terrestrial transmission services can use a variety of formats, including MPEG-4, H.264, Flash Video, DivX, or Windows Media (VC-1). The receiving handsets require an appropriate player.
● Buffering and frame rates: The terrestrial transmissions take place at full frame rates, irrespective of the type of reception individual mobile devices may be experiencing. Hence reception is always at full frame rate and resolution, giving better quality. When the signal is poor, reception fails. In some cases (e.g., MediaFLO), the received signal can degrade gracefully by ignoring higher encoding layers that convey finer scene details.

5.5.2 DVB-H

Building upon the portable and mobile capabilities of DVB-T, the DVB Project developed the DVB-H standard for the delivery of audio and video content to mobile handheld devices. DVB-H overcomes three key limitations of the DVB-T standard when used for handheld devices: it lowers battery power consumption, improves robustness of reception on mobile devices by using additional forward error correction, and provides for Doppler effect compensation by using an optimized modulation mode (the 4 K mode). DVB-H technology is seen as very promising, as it can effectively use the DVB-T infrastructure and spectrum already available and provide broadcast-quality TV services to an unlimited number of users.


Figure 5.22: DVB-H transmission system showing enhancements over DVB-T.

The first trial of a DVB-H service was in 2005 by Finnish Mobile TV, for 500 users with Nokia 7710 receivers. The initial package included three television and three radio channels. Since then, a large number of trials have been completed and commercial networks launched across Europe and Asia. DVB-H was standardized by the DVB Project and ETSI under EN 302 304 in November 2004 and was formally ratified by ETSI as the mobile TV standard to be used across Europe in 2008. Major DVB-H networks in operation include 3 Italia, Digita (Finland), KPN Mobiel TV (Netherlands), 3 Austria, Luxe.tv (Austria), TDF France, MiTV Malaysia, and others. However, further growth of DVB-H in individual countries is dependent on the release of spectrum from the DVB-T and analog bands when analog transmissions are stopped. This could take until 2012.

5.5.3 A DVB-H Transmission System

Figure 5.23 shows a typical DVB-H transmission system in which the same transport stream is used for carrying both SDTV programs (encoded in MPEG-2) and mobile TV programs (encoded in H.264/AVC). The SDTV-encoded streams are delivered to the DVB-T/DVB-H modulator as ASI low-priority streams. The channels meant for mobile TV are encoded as H.264 and are connected by an IP switch to an IP encapsulator, which combines all the video and audio services, as well as the PSI/SI signals and EPG data, into IP frames. The IP encapsulator also organizes the channel data into time slices, so that the receiver needs to remain active only during the times when the data for the actively selected channel is expected to be on the air.
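The power saving from time slicing comes from the receiver front end waking only for its own service's bursts. The sketch below illustrates the idea with hypothetical numbers (the service rate, multiplex burst rate, and burst size are illustrative assumptions, not values from the DVB-H specification):

```python
# Time-slicing duty cycle: a service of modest average bit rate is delivered
# in short bursts at the full multiplex rate. All numbers are hypothetical.

service_rate = 350e3     # average bit rate of the selected channel, bps (assumed)
burst_rate = 10e6        # rate at which each burst arrives, bps (assumed)
burst_size = 2e6         # bits buffered per burst (assumed)

on_time = burst_size / burst_rate       # receiver active for 0.2 s per burst
cycle = burst_size / service_rate       # ~5.7 s of playout per burst
duty_cycle = on_time / cycle            # front end on ~3.5% of the time

print(f"tuner duty cycle {duty_cycle:.3f}, power saving ~{(1 - duty_cycle):.1%}")
```

The buffer depth also sets the worst-case channel-change delay, since the receiver may have to wait one full cycle for the next burst of a newly selected service.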


Figure 5.23: A DVB-H mobile TV transmission system.

For mobile TV, more robust FEC coding is also applied in the IP encapsulator to ensure delivery of signals to mobile devices, which are characterized by low antenna gains and widely varying signal strengths. The additional FEC helps operation in an environment characterized by higher error rates. The output of the IP encapsulator, which is in ASI format, is then modulated by a COFDM modulator with 4 K (or 8 K) carriers. The COFDM modulation provides the necessary resilience against selective fading and other propagation conditions. The DVB-T standard provides for 2 K or 8 K carriers in the COFDM modulation. The 4 K mode has been envisaged for use in DVB-H, as 2 K carriers would not give adequate protection against frequency-selective fading and would also limit cell size, owing to the guard interval requirement for SFNs. At the same time, in the 8 K mode the carriers are placed so close in frequency that Doppler shifts become significant for moving receivers. Hence the new mode of 4 K carriers has been incorporated into the DVB-H standards. The modulation used for the carriers can be QPSK, 16 QAM, or 64 QAM. The previous example is of a shared DVB-H network; a network can also be dedicated, i.e., carrying only DVB-H services.


5.5.4 DMB Services

DMB delivers mobile television services using enhancements to the Eureka-147 DAB standard, which led to the formulation of the DMB standard, formalized by ETSI under ETSI TS 102 428. DMB has a satellite delivery option (S-DMB) and a terrestrial delivery option (T-DMB). T-DMB uses a terrestrial network in VHF band III and/or the L-band; S-DMB uses a satellite network in the L-band or S-band. The most successful S-DMB and T-DMB implementations have been in Korea, with the satellite MBSAT at 145.5°E being used for S-band transmissions. The DMB technology has its origin in Korea, where operators, mobile vendors such as Samsung and LG, and the Korean government lent their weight to the technology. Operators in Korea were assigned spectrum for T-DMB services, which led to its rapid adoption. The standards for DMB and DAB services have been adopted by ETSI and are now considered global standards (ETSI TS 102 427 and TS 102 428). This is paving the way for the use of these technologies to provide mobile TV services in Europe, 80% of which is already covered by DAB services.

Characteristics of DMB services

Use of DAB spectrum: One of the advantages of DMB services is the availability of the (DAB) spectrum in Europe and Asia, which makes the rollout less dependent on new spectrum allocations. DMB is a modification of the DAB standard that adds an additional layer of error correction to handle multimedia services. DMB services make use of the same 1.537 MHz carriers and spectrum allocated for DAB services.

Use of advanced compression formats: DMB uses MPEG-4 Part 10 (H.264) for video and MPEG-4 Part 3 BSAC (Bit Sliced Arithmetic Coding) or HE-AAC v2 for audio. The audio and video are encapsulated in an MPEG-2 transport stream, which is RS-encoded. Convolutional interleaving is applied to this stream, which is then broadcast in data-stream mode on DAB.
Korean T-DMB services

T-DMB services were launched in Korea as a result of six operators being licensed by the government, each with approximately 1.54 MHz of bandwidth. This enables 1.15 Mbps per carrier, which can carry VCD-quality (352 × 288 pixel) video at 30 fps (for the NTSC standard). The video is coded using the H.264 compression protocol. The carrier also delivers CD-quality audio (DAB MUSICAM). The terrestrial DMB standards also provide for the carriage of interactive data or presentations. T-DMB services are free in Korea.


Figure 5.24: T-DMB transmission system.

Samsung chipsets such as the SPH B-4100 provide the capability of dual satellite and terrestrial reception on the same phone. Six broadcasters in Korea are taking part in this service, sharing transmitters and providing free-to-air services. Commercial services using the T-DMB technology have been launched in Europe as well. Mobile operator Debitel has launched T-DMB services in Germany (Berlin, Cologne, Munich, Stuttgart, and Frankfurt) in cooperation with the broadcaster MFD. DMB services can be provided via satellite or terrestrial transmission; satellite-based services are covered in Chapter 10.

DMB services in China

DMB and DAB services are on air in a number of cities in China, including Dalian, Beijing, Guangdong, Henan, Yunnan, and others. Beijing Jolon (a public broadcaster) and Guangdong Mobile Television Media (GTM) Co. Ltd. are the two major operators providing these services. There is a plan to provide "push radio" services in the future.

5.5.5 Mobile TV Using ISDB-T (1-Seg)

Mobile TV using ISDB-T terrestrial broadcast is provided in Japan and Brazil. ISDB-T stands for Integrated Services Digital Broadcasting–Terrestrial, a standard developed in Japan.


Digital terrestrial television broadcasting (DTTB) began in Japan in December 2003 and has since been progressively replacing the analog transmissions in the NTSC format. The broadcast spectrum consists of 6 MHz channels, with the analog carriers progressively making way for DTTB services. A majority of broadcasts on DTTB are now in HDTV. Mobile TV services using ISDB-T began in Japan in 2006, using 1 of the 13 segments of the 5.6 MHz usable channel bandwidth. The ISDB-T mobile services, launched as free-to-air services by NHK Japan (1-Seg), have seen impressive growth in the number of ISDB-T-capable handsets shipped. By mid-2009, over 50 million handsets with ISDB-T tuners had been shipped, and 85% of all new phones sold were ISDB-T-enabled. This is unparalleled in any market in the world. The video and audio coding parameters for ISDB-T are:

● Video: H.264 MPEG-4/AVC baseline profile L1.2 at 15 fps, QVGA resolution (320 × 240)
● Audio: MPEG-2 AAC with 24/48 kHz sampling

One segment, which has a bandwidth of 5.6/13 = 0.43 MHz, or about 430 kHz, can support a carrier of 312 kbps with QPSK modulation and a code rate of 1/2 (with a guard interval of 1/8). This 312 kbps carrier can typically carry video coded at 180 kbps, audio at 48 kbps, and Internet data and program stream information at 80 kbps. A single segment can thus carry one channel of video and data along with program information.
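The 1-Seg budget above is simple to verify: one segment of the usable channel carries a 312 kbps payload, which the video, audio, and data components must fit within:

```python
# 1-Seg capacity check: one of 13 segments of the 5.6 MHz usable channel.

usable_bw_mhz = 5.6
segments = 13
seg_bw_khz = usable_bw_mhz / segments * 1000     # ~430 kHz per segment

carrier_kbps = 312        # QPSK, rate-1/2 coding, 1/8 guard interval (from text)
budget_kbps = {
    "video (H.264 QVGA @ 15 fps)": 180,
    "audio (MPEG-2 AAC)": 48,
    "data + program information": 80,
}

used = sum(budget_kbps.values())                 # 308 kbps
assert used <= carrier_kbps                      # the service fits in one segment
print(f"segment ~{seg_bw_khz:.0f} kHz, payload {used} of {carrier_kbps} kbps")
```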

Figure 5.25: ISDB-T services.

Japan has been one of the most successful countries in terrestrial mobile TV. This is owing to a multiplicity of factors, including the ready availability of 1-Seg ISDB-T tuners in a majority of high-end phones sold and the ease of accommodating mobile TV transmissions without having to wait for additional spectrum. Apart from its being a free-to-air service, other factors have led to the success of the ISDB-T services. The 1-Seg carrier can be transmitted from a location that is different from a central transmitter, as there is no common multiplex with SD or HD channels. This permits highly local programming, including in-store programming, to be transmitted to mobiles. There is also a BML data feed (BML is an XML-based format for data broadcasting) that allows items such as purchase tokens to be delivered to mobile devices, enabling a shopper to use the mobile for shopping and payments. Most mobile phones in Japan are shipped with a FeliCa chip (also called the mobile wallet) that enables personal information and electronic cash to be securely maintained. Brazil has also adopted the ISDB-T system, with some modifications such as the use of MPEG-4 instead of MPEG-2 and different frequency bands. 1-Seg transmissions have started in Brazil, and receivers are available for handheld as well as automotive use.

5.5.6 Mobile TV Using MediaFLO™

FLO (Forward Link Only) is a TV multicasting technology from MediaFLO, a Qualcomm company. Verizon Wireless and AT&T in the United States have launched mobile TV services based on FLO technology (the Verizon VCAST™ Mobile TV and AT&T Mobile TV services). Qualcomm has access to spectrum in the UHF band at 700 MHz, which is being used to roll out the services, although this is not a limitation of the technology. FLO is an overlay network that uses a separate spectrum slot to deliver multicasting services. As 700 MHz spectrum gets freed in additional markets with the completion of the digital transition, these services are expected to cover almost all major markets in the United States.

5.5.7 FLO™ Technology

The FLO technology has many features that make it well suited for mobile TV applications:

● The technology is based on the use of OFDM-based carriers (4 K), which allows high tolerance to intersymbol interference in NLOS environments, unlike the ATSC or CDMA technologies. Transmitter networks are possible in both SFN and MFN configurations. FLO also uses cyclic prefixes, similar to WiMAX, and Turbo codes for higher error resilience.
● FLO has powerful control layer functions. A FLO control channel carries messages from the network to the receivers that contain the configuration parameters for each upper-layer "flow" and the RF channel configuration. The receivers maintain a synchronized copy of this information and are thus aware of all parameters and service flows. FLO can support QoS for different services.
● The FLO layer is designed to be application-independent.
● The FLO interface has multiple-level error correction and efficient coding built in, which permits efficiencies of 2 bits per second per Hz (or 2 Mbps per MHz). This allows a slot of 6 MHz to carry up to 12 Mbps of data, which can provide over 30 live TV channels (QVGA at 30 fps) and 10 audio channels coded in HE-AAC, in addition to video-on-demand channels and multimedia data.
● FLO has adopted a flexible layered source coding and modulation scheme. The layered modulation is designed to provide a high-quality service, QVGA (352 × 240) at 30 fps (or 25 fps for PAL), which degrades gracefully to 15 fps if the signal-to-noise ratio (S/N) worsens due to greater distance from the transmitter, adverse propagation conditions, or a noisy environment. This means that a picture that would otherwise have frozen, because the higher bit rates could not be received at the available S/N, can still be received, but at lower bit rates.
● Being in the UHF band, which hosts high-power transmitters, FLO radio transmitters can be installed up to 50 km apart (depending on the power transmitted) and can thus cover a metropolitan area with fewer transmitters than DVB-H or ATSC Mobile DTV. This contrasts with the thousands of gap fillers needed in power-limited technologies such as S-DMB.
● FLO uses a single-frequency network with timing synchronization between transmitters. It supports frequency slots of 5, 6, 7, and 8 MHz.
FLO technology has also taken into account the need to conserve power in mobile handsets: the receiver can access only that part of the signal that contains the channel being viewed, and the FLO data unit is only one-seventh of an OFDM symbol. The typical viewing time claimed is 4 hours on a standard handset (850 mAh battery). FLO uses a technology called the overhead information symbol (OIS), which allows fast synchronization and channel switching times of less than 1.5 seconds.
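The channel-count claims follow from the spectral efficiency figure quoted above. The sketch below works through that arithmetic; the per-channel video and audio rates are illustrative assumptions, not FLO specifications:

```python
# FLO capacity arithmetic: ~2 bits/s/Hz over a 6 MHz slot (figures from text).
# Per-channel rates below are assumed for illustration only.

slot_mhz = 6
efficiency_bps_per_hz = 2.0
payload_mbps = slot_mhz * efficiency_bps_per_hz   # 12 Mbps total payload

video_kbps = 300   # assumed average QVGA stream rate (statistical multiplexing)
audio_kbps = 48    # assumed HE-AAC stereo rate

used_kbps = 30 * video_kbps + 10 * audio_kbps     # 30 TV + 10 audio channels
spare_kbps = payload_mbps * 1000 - used_kbps      # headroom for VoD / data

print(f"{payload_mbps} Mbps total, {used_kbps/1000} Mbps used, "
      f"{spare_kbps/1000} Mbps spare")
```

Under these assumptions about 9.5 Mbps carries the live channels, leaving roughly 2.5 Mbps for video-on-demand and multimedia data, consistent with the text's claim of "over 30" channels in a 6 MHz slot.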

MediaFLO is not restricted to the 700 MHz band; it can operate at any frequency between 300 MHz and 1.5 GHz, although it is optimized for use in the UHF band (300–700 MHz). As other countries begin services, we should see networks roll out on other frequencies. In November 2009, MediaFLO technology was recognized by Japan's Ministry of Internal Affairs and Communications as an official technology for mobile multimedia broadcasting. This may see its use in Japan as well as in countries in South America.


Figure 5.26: A FLO network.

5.5.8 Terrestrial Mobile TV in China

Terrestrial mobile TV in China has taken off since the 2008 Summer Olympics, where it was showcased based on the CMMB mobile TV standards approved by SARFT (the State Administration of Radio, Film and Television). China has multiple standards for mobile TV, which have their roots in different technologies and standards approved by Chinese government standardization bodies. These standards include the following:

DTMB: The national terrestrial broadcasting standard in China (GB 20600-2006). DTMB is designed to be used for all screen sizes, ranging from HD resolution to mobile devices. DTMB was earlier called Digital Multimedia Broadcast-Terrestrial/Handheld (T-DMB/H). The standard is based on two major standardization efforts specific to China: first, T-DMB/H, developed for mobile broadcasting by Tsinghua University, and second, the ADTB-T standard, developed at Jiaotong University. The use of the DTMB standard is mandatory in rural and semiurban areas of China.


CDMB: China Digital Multimedia Broadcasting (CDMB) is a mobile broadcasting standard that uses the Digital Audio Broadcasting (DAB) standard as its core technology. It uses audio and video coding based on the AVS standard, a national standard for source coding by the China National Standardization Administration (CNSA), GB/T 20090. The CDMB standard was formulated jointly by over 40 entities, including private companies, the Beijing University of Posts and Telecommunications, the China Electronics Group Corporation, and others. It has been approved by SARFT (GY/T 214-2006), and the China Association for Standardization has approved the CDMB mobile TV standards via standard No. CAS 158-2007. DAB systems are operational in China in Beijing, Guangdong, and Dalian.

CMMB: China Multimedia Mobile Broadcasting (CMMB) is a multimedia broadcasting standard backed by SARFT. The standard is based on satellite transmission with terrestrial ground repeaters. It uses a technology called "satellite and terrestrial interactive multiservice infrastructure" (STiMi), developed by TiMiTech, a company formed by the Chinese Academy of Broadcasting Science. In many respects it is very similar to the DVB-SH standard. CMMB is based on the use of S-band broadcast at 2.6 GHz, where a 25 MHz bandwidth allocation permits a payload of about 12 Mbps, which is adequate for 25–30 video channels (QVGA or QCIF) and 30 radio channels. It also has a contribution channel for distribution via satellite to terrestrial repeaters (12.2–12.25 GHz) and UHF terrestrial repeaters (470–798 MHz). At present CMMB is provided using only UHF terrestrial transmitters, pending the launch of the satellite for S-band direct broadcasts to mobile devices; the receivers, however, are all dual-band, with satellite (S-band) and terrestrial (UHF) reception capability. CMMB is meant to be used only for small-screen devices.

TD-MBMS and CMB: The standards supported by the Department of Telecom, China. TD-MBMS is a multicasting technology using the 3G networks in China, which are based on TD-SCDMA. CMB is a mobile broadcasting standard developed by Huawei.

In summary, two standards are of key importance in China today: the DTMB standard for digital terrestrial broadcasting of standard- and high-definition television, and the CMMB standard for transmissions to mobile devices.
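As a back-of-envelope check on the CMMB figures above, the sketch below estimates how many channels fit into the roughly 12 Mbps payload. The per-channel rates (384 kbps per QVGA/QCIF video channel, 32 kbps per radio channel) are illustrative assumptions, not values from the CMMB specification.

```python
# Back-of-envelope CMMB channel budget (illustrative per-channel rates).
PAYLOAD_KBPS = 12_000      # ~12 Mbps payload from a 25 MHz S-band allocation
RADIO_KBPS = 32            # assumed bit rate per radio channel
VIDEO_KBPS = 384           # assumed bit rate per QVGA/QCIF video channel

def video_channels(num_radio: int, video_kbps: int = VIDEO_KBPS) -> int:
    """Video channels that fit after reserving capacity for radio channels."""
    remaining = PAYLOAD_KBPS - num_radio * RADIO_KBPS
    return remaining // video_kbps

print(video_channels(30))  # -> 28, consistent with the 25-30 channels cited
```

With 30 radio channels reserved, about 28 video channels fit, which agrees with the 25–30 figure quoted in the text.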

5.5.9 Comparison of Technologies Based on Terrestrial Transmission

A comparison of technologies based on terrestrial transmission is given in Table 5.4.

Table 5.4: Comparison of Terrestrial Broadcast-Based Mobile TV Technologies.

Feature                 | DVB-H                                | T-DMB                    | ISDB-T
------------------------|--------------------------------------|--------------------------|--------------------------
Video & audio formats   | MPEG-4 or WM9 video; AAC or WM audio | MPEG-4 video; BSAC audio | MPEG-4 video; AAC audio
Transport stream        | IP over MPEG-2 TS                    | MPEG-2 TS                | MPEG-2 TS
Modulation              | QPSK or 16QAM with COFDM             | DQPSK with FDM           | QPSK or 16QAM with COFDM
RF bandwidth            | 5–8 MHz                              | 1.54 MHz (Korea)         | 433 kHz (Japan)
Power-saving technology | Time slicing                         | Bandwidth reduction      | Bandwidth reduction

5.6 Comparison of Mobile TV Services

Any comparison of mobile TV services is a difficult task, as the services are at present offered under a number of constraints, such as spectrum availability, country standards, legacy of operator networks, and so on, which has led to a host of approaches in the quest for early delivery. The features of unicast-based services and those based on multicast and broadcast are completely different. Typically, the following parameters are important in evaluating the technologies:

● Robustness of transmission and quality of service expected in indoor and outdoor environments
● Power-saving features
● Channel-switching times
● Handset features needed to support the service
● Efficient spectrum utilization
● Costs of operating services
● Features, such as quality, charges, and reception characteristics, that depend on the underlying networks (i.e., 3G)
● The user's requirements, such as countrywide availability, roaming capability, types of handsets, and services available

5.6.1 Mobile Services Using 3G (UMTS/WCDMA/CDMA2000)

3G-based mobile TV services are able to deliver acceptable-quality streaming TV at rates up to 300 kbps. This is equivalent to consuming the resources of around 10 voice calls on the network. Hence, when a user sets up a streaming session, he or she starts consuming data bandwidth that is chargeable unless the user has signed up for a data plan. This is a constraint of 3G services, which may be addressed as the industry moves to MBMS-based services. Broadcast TV is not the best application for 3G networks, particularly when important events that may be watched by millions of users are broadcast.
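The "around 10 voice calls" equivalence can be sanity-checked with a rough calculation. Both figures below are assumptions for illustration: a 300 kbps streaming session versus an AMR 12.2 kbps voice call taken as roughly 30 kbps once protocol overhead is included.

```python
# Rough equivalence between one mobile TV stream and concurrent voice calls.
# Assumed figures, for illustration only.
STREAM_KBPS = 300       # typical 3G streaming TV session
VOICE_CALL_KBPS = 30    # AMR 12.2 kbps codec plus assumed overhead

calls_equivalent = STREAM_KBPS // VOICE_CALL_KBPS
print(calls_equivalent)  # -> 10
```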


On the other hand, mobile networks have significant advantages. First, they provide extensive coverage of countries and geographical regions worldwide; hence, users are likely to be in a zone covered by the service. Second, the unicast nature of the services can provide better support for features such as video on demand. The handsets are ready to receive 3GPP services and need not be encumbered with additional antennas and tuners for various bands. The degree of interoperability and roaming is very high in mobile networks. Interactivity is also high, due to the availability of a mobile return path, which is easy to integrate. Multicast-mode technologies such as MBMS and MCBCS overcome the limitations of providing unicast services to multiple users.

5.6.2 Mobile Services Using Terrestrial Technologies

DVB-H is designed to integrate easily into existing DVB-T networks and share the same infrastructure, resulting in lower cost and time to market. It also provides for power saving by using the time-slicing technique, which saves tuner power. The use of the 4K mode in the modulator can provide better protection against Doppler shifts while the receiver is in motion. Being based on IP datacast technology, the network architecture can be fully IP. ATSC Mobile DTV is similar and is designed to use the existing DTV infrastructure and spectrum. Unlike MediaFLO, however, DVB-H is not a standard designed specifically for mobile TV. The channel coding provided for mobile TV signals is less robust than that of MediaFLO, and a higher C/N is required for reception. The channel-switching time is also higher, due to the time-slice mode: the tuner is in sleep mode for about 80% of the time and is activated just prior to the anticipated reception of the packets for a particular channel. Using the mobile networks, limited interactivity can also be supported. However, applications involving video on demand or user-specific downloads are less suited to the broadcast nature of these networks. Although digital terrestrial television can cover a large city with only one or two towers, the same is not the case for mobile TV: achieving acceptable signal strength, particularly indoors, implies much higher transmission power or, alternatively, multiple repeaters across the city.

The T-DMB services are derived from the standards for digital audio broadcasting and have a robust error-correction layer and mobility features. The handsets can be used at vehicular speeds in excess of 250 km/hour in the VHF band. T-DMB does not have any features designed specifically to support power-saving tuner technologies. However, its relatively low bandwidth (1.7 MHz, compared to 8 MHz for DVB-H) leads to low power consumption, although it remains higher than that of DVB-H.

MediaFLO is a mobile TV delivery technology that was designed from the beginning for mobile TV, rather than as an amendment of standards for regular TV transmissions. The air interface, based on OFDM transmissions, is very robust. It also has more powerful features for an ESG and for fast channel changing. The FLO air interface allows receivers near the edges of coverage to still receive the signals through graceful degradation, and FLO receivers feature the highest battery life (see Table 5.5).
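The power benefit of time slicing comes directly from the tuner duty cycle: if the front end is awake only while its channel's burst is on air, tuner energy scales with the on-time fraction. A simple model (the 300 mW active-power figure is an assumption for illustration; the ~20% duty cycle follows from the tuner sleeping roughly 80% of the time between bursts):

```python
# Average front-end power under DVB-H time slicing (illustrative model).
TUNER_ACTIVE_MW = 300.0   # assumed tuner power while receiving a burst
DUTY_CYCLE = 0.20         # tuner awake ~20% of the time (sleeps ~80%)

def average_tuner_power(active_mw: float, duty: float) -> float:
    """Mean tuner power, ignoring wake-up and resynchronization overhead."""
    return active_mw * duty

print(average_tuner_power(TUNER_ACTIVE_MW, DUTY_CYCLE))  # about 60 mW
```

The same duty cycle is what lengthens channel switching: a tuner that wakes only for its own bursts must wait for the next burst of the newly selected channel.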

Table 5.5: Comparison of Mobile TV Services.

Feature                            | DVB-H                    | FLO                          | T-DMB                                            | S-DMB                             | MBMS
-----------------------------------|--------------------------|------------------------------|--------------------------------------------------|-----------------------------------|--------------------------
Technology classification          | Broadcast                | Broadcast                    | Broadcast                                        | Broadcast                         | Broadcast
Air interface                      | DVB-T, COFDM             | CDMA (Qualcomm)              | T-DAB, COFDM                                     | Proprietary CDMA                  | UTRA WCDMA
Standardization body               | DVB                      | Qualcomm                     | ETSI, DAB Forum                                  | ETSI                              | 3GPP
Payload capacity                   | 12 Mbps in 8 MHz channel | 11 Mbps in 6 MHz channel     | 1.5 Mbps in 1.54 MHz channel                     | 7 Mbps in 25 MHz channel          | 384 kbps in 5 MHz channel
Power-saving technology            | Time slicing             | Qualcomm CDMA code selection | Time demultiplexing, selective Fourier transform | Code selection                    | Code selection
Frequency bands of operation       | UHF, L-band (US)         | 700 MHz (US), UHF or L-band  | VHF, UHF                                         | S-band (Korea), IMT 2000 (Europe) | IMT 2000
Average channel-switching time     | ~5 sec                   | ~1.5 sec                     | ~1.5 sec                                         | ~1.5 sec                          | ~5 sec
Viewing time with 850 mAh battery* | ~4 hours                 | ~4 hours                     | ~2 hours                                         | ~4 hours                          | ~1.5 hours

5.7 Outlook for Mobile TV Services

The initial years of the mobile TV rollout have belonged to 3G networks and just two other networks: DMB in Korea and ISDB-T in Japan. In all cases, one of the enabling factors has been the ready availability of handsets with built-in capabilities to receive the telecasts. In the case of 3G, such capabilities are built in through 3GPP standardization; most phones sold in Korea and Japan have tended to support T-DMB and ISDB-T, respectively. None of the other terrestrial networks, such as DVB-H, has managed to reach even a million users per network, which has made it tough for operators to sustain these business models. We are also likely to continue to witness flux in the technologies deployed as various issues, such as spectrum allocation, licensing, and standards evolution, continue to move toward a globally harmonized set of accepted solutions. Mobile TV based on MediaFLO is now showing strong growth, with availability in additional markets as well as the launch of dedicated receivers rather than a focus on mobile phones alone. The new ATSC Mobile DTV standards will in all likelihood lead to universal availability of ATSC Mobile DTV tuners in mobile phones, and results similar to those in Japan and Korea can be expected in terms of user base. The networks being deployed are capable of delivering much more than just mobile TV, and we are likely to witness continued growth in the use of multimedia services as well, enriched by animation and rich content made available by the networks and playable on the growing base of multimedia phones.

Before We Close: Some FAQs

1. Are common tuners available for Japanese and Brazilian ISDB-T transmissions, even though these are not identical?
Yes. Integrated tuner ICs, such as the MAX2163A from Maxim, are available for 1-Seg and 3-Seg ISDB-T in all frequency bands (VHF and UHF).

2. What type of video is delivered using MMS?
MMS video is in 3GPP format only.

3. How can mobile TV content be delivered to handsets using a Wi-Fi network?
Client–server applications, such as Penthera Virtuoso, are available for delivery over Wi-Fi; the handset needs the Penthera viewer to be downloaded. There are also services like QuickPlay, a Wi-Fi-based service available for download on the BlackBerry Storm (AT&T) and Curve 8900 (T-Mobile). Its service, PrimeTime2Go, provides full episodes of TV shows from networks such as NBC, CBS, and so on.

4. What do the frame rates for terrestrial broadcast TV and 3G-based TV depend upon?
Terrestrial TV broadcasts are at the full frame rate of 25/30 fps. However, the limitations of early receivers, such as the Nokia 7710, initially kept frame rates to about 15 fps. In the case of 3G-based mobile TV, the frame rates range between 5 and 15 fps, based on network resources. In HSPA networks, full frame rates of 30 fps are achieved.

5. Why is MP3 not used for delivering audio in 3G systems, MMS, or even terrestrial broadcasting, when there are so many MP3 players?
Most 3GPP services (including podcasting) encode audio at 30–60 kbps, which is too low for MP3 codecs. The audio format used is AMR-WB in 3GPP (release 6) and in DVB-IPDC encoding; it provides good audio quality at 32 kbps or 48 kbps. T-DMB uses BSAC audio encoding, thanks to its DAB parentage. Another encoder used in terrestrial broadcast is HE-AAC.

6. As an alternative to streaming video services, is it possible to offer a video call connection to deliver mobile video?
Yes, it is possible to set up a service to deliver live or on-demand video using a dialed phone number on 3G. Voxbone (Belgium) is a dial-in service that provides direct inward dial (DID) numbers for playing out selected video or audio. The service uses I6net's VoiceXML browser (http://www.i6net.com). Such a service was also used in Singapore to deliver live video.


PART II

Technologies for Mobile TV and Multimedia Broadcasting

The wireless telegraph is not difficult to understand. The ordinary telegraph is like a very long cat. You pull the tail in New York, and it meows in Los Angeles. The wireless is the same, only without the cat.
Albert Einstein

CHAPTER 6

Mobile TV Using 3G Technologies

There is only one thing more painful than learning from experience, and that is not learning from experience.
Archibald MacLeish

6.1 Introduction

It began with the 2.5G networks, such as GPRS, EDGE, and cdmaOne, in the late 1990s as a service for streaming short clips. Operators had upgraded their networks from pure voice to being data capable, and cdmaOne and GPRS users had "always-on" connectivity using packet-switched connections. The Wireless Application Protocol (WAP) was formalized and was intended to be the protocol of choice for accessing wireless applications over the air. In the initial period, however, data usage of the networks was limited. Internet access, though possible, had limited attraction owing to the tiny screens, the limitations of keypads, and indeed of the cellphones themselves. Operators keen to derive maximum benefit from their networks saw a big opportunity in video streaming and music downloads, just as had been the case over the Internet.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00006-0


The availability of highly compressed video clips under new compression algorithms such as Windows Media and MPEG-4, and their progressive standardization under the 3GPP forum, made it advantageous for mobile operators to leverage the data capacity of their networks and provide video clips. Many GSM-, GPRS-, and CDMA-based networks started offering clip download services as well as limited video streaming. This was in no small measure facilitated by the increasing power of mobile phones for handling multimedia applications such as audiovisual content. The initial video streaming services were limited to small clips of, say, 30 seconds and low frame rates of 7–15 fps. As the availability of handsets and the usage grew, the 2.5G networks were soon straining the limits of their capacity for streaming or downloading video to a large number of users, and the limitations were quite obvious in the form of frozen frames and interrupted video viewing, as bit rates on 2/2.5G connections averaged 40–50 kbps.

This brought the focus back to the 3G networks, which were designed to have a greater capability for data. The era of true mobile TV services began with 3G networks, and the need for higher data rates for a larger number of users led to enhancements such as 1x Evolution Data Optimized (1xEV-DO) and High-Speed Downlink Packet Access (HSDPA). These networks provide average bit rates from 384 kbps to over 1200 kbps and enable a range of video streaming, download, and other multimedia services. They also led to the conceptualization of technologies such as MBMS, which provide a multicast of mobile TV content to a large base of users.

Today, video streaming services as well as live TV are widely available over 3G networks. They cover the entire range of content, such as news headlines, music, weather, sports, and cartoons.
Mobile versions of popular programs were the first choice for such implementations, together with content and programs designed specifically for mobile TV. For example:

● Mobile ESPN, a wireless service designed for sports fans.
● The GoTV network offers content from ABC and Fox Sports, as well as original programming.
● Verizon V CAST is a video streaming service offered over Verizon's EV-DO and CDMA2000 networks.
● Sprint TV Live!, provided by Sprint over its PCS Vision network, offers a number of channels of continuously streamed content, most of them live.
● MobiTV™ and SmartVideo™ are examples of content aggregators that provide services across a range of operators.

Operators in Europe, Asia, and Latin America all offer such services. We also need to recognize that the IMT-2000 framework that led to the 3G services, formulated in the 1990s, is not the ultimate solution for providing live-TV-type applications to an unlimited user base, owing to the resource constraints within the network. Videos using the 3GPP technology are also fairly restricted in terms of viewer acceptance, due to the low resolution of 176×144 and low-bit-rate streaming.


6.1.1 What Are TV Services over Mobile Networks?

Getting video on the mobile handset is no great mystery. If a user accesses a video portal (points the browser to a website where video is available as embedded content), the video will play on the mobile screen. This does not require any specialized software except a mobile browser. The website may be specific to mobile content (e.g., a WAP 2.0–enabled website) or a regular website accessed using HTTP and RTSP. The mobile browser in this case just points to an HTTP or WAP address.

Figure 6.1: A PC accessing a web page with embedded video.

Figure 6.2: A mobile device accessing video-embedded WAP site.


TV channels encoded in resolutions suited to mobile phones are, however, more commonly delivered by streaming, in a manner very similar to streaming over the Internet, i.e., using RTSP. This has led to a number of streaming websites going online that offer live TV encoded as video streams as well as video on demand.

6.1.2 Why Special Protocols for Streaming on Mobile Phones?

Mobile devices on 3G networks can access any website, right? They should then be able to play video just like a desktop would. There are certain differences, however, in the way video is streamed over mobile networks (as opposed to the Internet), which essentially relate to the characteristics of the mobile networks and devices.

● Content formats: Internet streaming is commonly done using Real, Windows Media, and Flash for video and MP3 for audio. Even though these are compressed video formats, they are not good enough for mobile devices. Standard 3GPP-compliant mobile devices need the content to be in 3GPP format. This implies that the streaming site should have content encoded in H.263, MPEG-4, or H.264 (simple profile) for video, and AMR-WB or AAC for audio. Such sites are specific to mobile networks. (A PC will not stream from such a website unless it has a 3GPP player downloaded on it!)
● Handset compatibility issues: Mobile devices are characterized by small screen sizes and low-data-rate channels, requiring the streaming to be adapted to each device. Mobile phones also do not always support a wide range of players, which may be required for streaming from sources other than those containing 3GPP content.
● Nature of the wireless data channel: Streaming over the Internet is done using real-time streaming protocols (RTSP/RTCP) over UDP. In the case of mobile networks, the same protocols are used, but per the standards set by 3GPP for encoding of content and establishment of connections. This type of streaming is commonly referred to as 3GPP-PSS. Wireless data channels present a highly variable data connection and a need for the server to provide adaptive bit rates. As you will see, 3GPP release 6 has a provision for adaptive-rate streaming called dynamic bit adaptation. 3GPP-PSS also ensures that the 3G core network nodes (GGSN and SGSN) recognize the nature of the traffic and provide smooth streaming of video packets.
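The RTSP session setup mentioned above can be illustrated with the first message a client sends. The sketch below only builds the request text (the server URL is hypothetical; a real 3GPP-PSS server would return an SDP description, after which the client issues SETUP and PLAY):

```python
# Sketch of the first RTSP request a streaming client sends (hypothetical URL).
def rtsp_describe(url: str, cseq: int = 1) -> str:
    """Build an RTSP DESCRIBE request asking for the session description (SDP)."""
    return (
        f"DESCRIBE {url} RTSP/1.0\r\n"
        f"CSeq: {cseq}\r\n"
        "Accept: application/sdp\r\n"
        "User-Agent: 3gpp-pss-client\r\n"
        "\r\n"
    )

req = rtsp_describe("rtsp://streaming.example.com/live/news.3gp")
print(req.splitlines()[0])  # DESCRIBE rtsp://streaming.example.com/live/news.3gp RTSP/1.0
```

The codec and bit-rate negotiation described in the bullet list happens in the SDP answer to this request, which is where a 3GPP handset learns whether the stream is, say, H.264 simple profile with AMR-WB audio.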

6.2 The Beginning: Streaming on Mobile Devices

How do users begin to stream video on mobile devices? In order to carry multimedia files or live streaming TV, the following requirements need to be met by cellular mobile networks:

● Make available the websites, program guide, and preview of content: The users need to be able to see the content available for streaming or download. This function is similar
to EPG in a TV environment. In the case of mobile networks, it usually involves access to the provider's WAP site to view the available content.
● Establish connection: A mobile user should be able to establish a connection with the streaming server and should have a player that can begin to play the content in a compatible format. Hence, the first requirement is to have protocols standardized and agreed on for calling, answering, and establishing a video streaming session. These protocols need to be followed identically across networks so that calls can be established between users on different networks. Having well-defined protocols also helps handset manufacturers deliver phones that work identically on various networks. The procedures for setting up calls and for packet-switched streaming have been formalized under the 3G-324M specifications for video calls and 3GPP-PSS for video streaming, respectively.
● Video and audio encoding in compatible formats: The networks must have standards for encoding of video and audio defined for different applications, such as video calling or video streaming. Ideally, the protocols would use high-efficiency compression algorithms such as MPEG-4 or H.264 in order to reduce bandwidth requirements for encoded video and audio. With small screen sizes, it is also possible to use simple profiles for video that do not require coding of a large number of objects. It is common to use the visual simple profile for video, which has been formalized under 3GPP. However, it is not necessary to use only 3GPP-recommended formats; content can be encoded and streamed using other formats, such as Windows Media (.asf), RealMedia (.rm), or Flash Video (.flv), but these are network-, operator-, and handset-specific.
● Connections need to sustain multiple bit rates for video: The networks must have an adequate data rate available for sustained transfer of video frames. Although the instantaneous video throughput provided by the connection may vary, the average rate needs to be maintained above the set connection speed. For example, a minimum of 64 kbps, but preferably 128 kbps or above, is needed for 3GPP video streaming. In practice, 2.5G networks can provide barely minimum-quality streaming video, and 3G networks are needed to provide satisfactory services. HSDPA and EV-DO networks are now common to cater to the high traffic of data and video.
● Streaming servers need to handle multiple handset types: A streamer in a mobile environment needs to be able to deal with multiple handset types. Despite conformance to the 3GPP specifications for streaming video (which in turn implies support of RTP, RTSP, and encoding per the specified formats of MPEG-4 video, AMR audio, etc.), there are variations in device capabilities. Many devices may support only the mandatory features of 3GPP encoding, while others support many optional features and media types. Media streamers need to be able to match the stream type to the handset in use during negotiation of parameters, to minimize handset compatibility issues.
● Mobile handsets may need clients for media playback: Although 3GPP media is universally supported in handsets, many streaming services use Flash Lite, Windows Media (VC-1), Real, QuickTime, or other formats. It may be necessary to have downloadable media players available. This requires the service to be aware of the operating system of the mobile phone.
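The "sustain multiple bit rates" requirement is typically met by letting the server step among a ladder of encodings of the same channel. A simplified selector is sketched below; the rate ladder and the 25% safety headroom are illustrative assumptions, not 3GPP-mandated values.

```python
# Simplified server-side rate selection against a measured throughput.
# The ladder and the 25% headroom are illustrative assumptions.
RATE_LADDER_KBPS = [64, 128, 256]   # available encodings of one channel

def pick_rate(measured_kbps: float, headroom: float = 0.75) -> int:
    """Choose the highest encoding that fits within a safety margin."""
    usable = measured_kbps * headroom
    fitting = [r for r in RATE_LADDER_KBPS if r <= usable]
    return fitting[-1] if fitting else RATE_LADDER_KBPS[0]

print(pick_rate(200))   # -> 128 (150 kbps usable supports the 128 kbps stream)
```

3GPP release 6 formalizes this idea as dynamic bit adaptation within 3GPP-PSS, with the client reporting reception conditions back to the server.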


In this chapter, we look at the protocols used to set up and initiate video data transfer and to release such calls. We also look at the 3GPP packet-switching protocols, which make multimedia data transfer possible. Mobile networks are characterized by either unicast or multicast capabilities for carrying video, and we look at these modes of video delivery as well.

Quick FAQs: Mobile Streaming

1. What are .mobi websites? Is streaming video available only on such websites?
.mobi websites are designed especially for access by mobile devices. This makes it convenient for mobile users to bookmark a site and know for sure that it is a mobile site. Examples of such sites are http://foxnews.mobi and http://businessweek.mobi. Although there are sites with .3gpp video in the .mobi domain, .3gpp video may be available in any other domain.

Figure 6.3: http://businessweek.mobi.

2. What are .tv websites? Do they contain .3gpp content?
Many TV stations and networks have their websites in the .tv domain. Most such websites provide information about their shows; there are no syntax rules requiring such websites to carry only TV content. Many of them, however, also carry video, as they do on their websites in the .com or .us domains, and so on. These sites are not especially favored for .3gpp video, and where TV stations have not bothered to register the .tv domains, the sites are held by squatters.

3. Apart from operator networks that offer TV channels for subscription, is .3gpp content also available on the Internet?
Yes, many websites offer content for streaming or download in .3gpp format. An example is http://zoovision.com, which carries channels such as CBS mobileNEWS and free streaming videos.


Figure 6.4: Example of a website with .3gpp content.

6.2.1 Streaming vs. Download

Streaming as a method of transferring video, audio files, or live data has the advantage that the user need not await a full file download and can commence viewing the content while receiving the data. As an alternative, however, it is also possible to download video (including full-length movies) and view it without the annoyance of buffering.

6.2.2 Unicasting and Multicasting

Streaming applications fall into two broad categories: web broadcast (or multicast) and unicast. We had a brief overview of unicasting and multicasting in Chapter 5. As the name suggests, webcasting is for an unlimited number of receivers, all of which receive the same content, while a unicast is a server-to-client connection through which the client can communicate with the server, control the packet rates, and request retransmission of missed packets.

Figure 6.5: Unicast mobile TV.


In unicasting, each individual user sets up his or her own connection with the content source and receives a stream or TV channel for his or her own use. This can quickly load the cell radio capacity, as the RF resources are limited. Simultaneous viewing also builds up backend traffic, with the same channel being streamed as hundreds or thousands of instances. As experience with video streaming over the Internet has shown, streaming service quality can degrade significantly under such load. Hence the quality of streamed video and the number of users that can use such services depend on the underlying mobile network. Multicasting in cellular networks, on the other hand, implies constant usage of a certain bandwidth in every cell, which reduces the capacity available for other purposes and uses by an equivalent amount. We will discuss multicasting later in this chapter.

Figure 6.6: Multicast mobile TV.

Unicast services do have the advantage that the number of channels can be virtually unlimited (including video-on-demand channels), as no resources are used in the idle condition. When a user sets up a connection with the server, the content is delivered and resources are used. However, a unicast service does not scale well with the number of users or with high-usage patterns. In a unicast service, users may also incur data transmission charges unless they are covered by an unlimited usage plan. The limitations of unicasting live TV content can be overcome using broadcast and multicast technologies such as MBMS. An example of a broadcast-mode technology is "IP datacasting," which is used in terrestrial transmissions of mobile TV such as DVB-H or ATSC Mobile DTV.
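To see how quickly unicast sessions exhaust a cell, consider a rough capacity model. The 2 Mbps shared downlink and 128 kbps per-session figures below are illustrative assumptions, not network specifications.

```python
# How many simultaneous unicast TV streams one cell can carry (illustrative).
CELL_DOWNLINK_KBPS = 2000   # assumed shared 3G downlink capacity per cell
STREAM_KBPS = 128           # one 3GPP streaming session

def max_unicast_streams(cell_kbps: int, stream_kbps: int) -> int:
    """Each viewer consumes a full stream, even when watching the same channel."""
    return cell_kbps // stream_kbps

print(max_unicast_streams(CELL_DOWNLINK_KBPS, STREAM_KBPS))  # -> 15
```

A multicast such as MBMS would instead carry one 128 kbps stream per channel per cell, regardless of the number of viewers, which is the scaling difference described above.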

6.2.3 3GPP, FlashCasting, and RealVideo

As the 3G networks closely follow the recommendations of 3GPP, most implementations of mobile TV using 3G networks are based on the use of 3GPP formats as standardized in 3GPP-PSS. However, many operators do not want to be limited to the relatively low screen resolution and encoding rates used in 3GPP and provide services using Flash Video, Windows Media, or RealVideo, among other formats. Most commercial streaming solutions available today support streaming in multiple formats, including Flash, Windows Media (VC-1), Real, QuickTime, 3GPP, and MP4.

6.3 Overview of Cellular Network Capabilities for Carrying Mobile TV

We had a brief overview of the data capabilities of 2G, 2.5G, and 3G networks in Chapter 4. Live video carriage requires at least 64–128 kbps at 15 fps, with QCIF resolution and MPEG-4 coding. This is obviously not possible in 2.5G networks, at least not for any significant number of users, and 3G networks have become the prime choice for offering such services.

Figure 6.7: Mobile data evolution toward higher data rates.


Quick Facts: Data Rates in Mobile Networks

Table 6.1: Data Rates and Video Users in a Mobile Environment (figures given are broad estimates).

Network          | Peak Data Rate | Capacity Type | 64 kbps Video | 128 kbps Video | 256 kbps Video
-----------------|----------------|---------------|---------------|----------------|----------------
GSM              | 9.6 kbps       | Individual    | Nil           | Nil            | Nil
GPRS             | 115 kbps       | Pool          | 0–1           | Nil            | Nil
EDGE             | 384 kbps       | Pool          | 10–12         | 3–4            | 2–3
cdmaOne          | 256 kbps       | Pool          | 7–8           | 2–5            | 1–2
3G               | 2 Mbps         | Pool          | Not used      | 14–18          | 7–9
HSDPA            | 3.8 Mbps       | Pool          | Not used      | 25–30          | 12–15
1xEV-DO          | 1.9 Mbps       | Pool          | Not used      | 12–14          | 6–7
3xEV-DO          | 5.7 Mbps       | Pool          | Not used      | 36–40          | 18–20
HSPA (Release 7) | 14.4 Mbps      | Pool          | Not used      | 125            | 60–80
HSPA (Release 8) | 42 Mbps        | Pool          | Not used      | Not used       | 200–250*

* Expected usage.

6.3.1 Data Capabilities of 3G Networks

Video streaming and downloads work well in 3G networks provided that, statistically, the total data usage remains within the 5 Mbps total data rate. In a typical usage environment, where a mix of applications is used, a 3G network can support 14–25 simultaneous users per cell. This number may fall to 7–18 when a majority of users are using unicast video streaming.

6.3.2 An Example of Mobile TV over a 3G Network

Cingular Wireless (now part of AT&T) in the United States launched a countrywide commercial service for live TV as well as streaming video-on-demand services tailored for mobiles. Customers need to sign up for Cingular's (now AT&T's) MEdia Net Unlimited package to receive the video services. The use of video is simple, with a click of an icon. The service is now available countrywide as CV.

6.3.3 Data Capabilities of the HSDPA Network for Video Streaming

Under normal conditions, an HSDPA network (4.2 Mbps) can deliver 384 kbps to up to 50 users in a cell area, a 10-fold improvement over release '99 WCDMA, under which only five users could be provided such throughput.


Figure 6.8: Cingular (now AT&T) 3G mobile TV at the time of its initial launch. It is available countrywide now.

Figure 6.9: Dimensioning of mobile TV on HSDPA networks.


According to an Ericsson analysis of HSDPA networks with 95% satisfied users, a 128 Kbps streaming service can be provided at 12 erlangs of traffic. Under low-usage conditions (2–5 min per day), all users in the cell area (assuming a density of 600 users per cell) can get satisfactory service. For medium usage (5–10 min per day), the number of users that can be served within the satisfaction level falls to 171 per cell, or 28%, while for high usage (about 20 min per day), it falls to 108 users per cell, or 18%. Operators are now moving to HSPA networks. Most upgrades in 2009–2010 are to HSPA 3GPP Release 7, which can provide data rates of up to 14.4 Mbps. However, there are already plans for moving to HSPA Release 8 (dual-carrier mode), with up to 42 Mbps of data rate capacity. Using such high data rates, together with the larger capacities obtained through smaller cell sizes, it is possible to statistically serve a much larger number of simultaneous users in a cell area.
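The satisfied-user percentages quoted from the analysis above follow directly from the assumed density of 600 users per cell; a quick check of the arithmetic:

```python
# Reproduces the satisfied-user arithmetic quoted above: 600 users per
# cell, with the number served within the satisfaction level at each
# usage profile taken from the Ericsson-style analysis.

USERS_PER_CELL = 600
satisfied = {"low": 600, "medium": 171, "high": 108}  # users served per cell

for profile, users in satisfied.items():
    pct = 100 * users / USERS_PER_CELL
    print(f"{profile:6s}: {users} users/cell ({pct:.0f}%)")
```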

6.4 Understanding a 3G Streaming Service

A 3G streaming service is fairly straightforward. However, as 3G networks are not purely data networks, it requires some understanding of their nature. Circuit-switched and packet-switched domains exist simultaneously in 3G networks: voice is carried in the circuit-switched domain, and data is delivered through the packet-switched domain. This is the legacy architecture of 3G, which 3GPP is trying to change to an integrated packet-switched core, a change that will be effectively achieved with LTE. At the present stage of the technology, the user handset connects to both the circuit-switched and packet-switched domains, with the former used for voice calls and the latter for data, which enables streaming to take place. The radio interface is called UTRAN. The user can therefore be on a voice call while a podcast downloads to the handset at the same time. The network may also serve GSM cells via base station controllers. In this network, a packet-switched connection is made as soon as the user device enters a coverage area, using a packet data protocol (PDP) context, and the connection is continuous, enabling mail, instant messaging, and other services. A voice call is made when required by the user and disconnects when done. The data connection does not use any radio resources until there are packets of data to be transferred. The streaming server connects to the network through a Gateway GPRS Support Node (GGSN). The streaming server need not be aware of the actual location of the user, as the GGSN maintains the connection to the server. The GGSN has a database containing the IP address of the Serving GPRS Support Node (SGSN) connecting the user, and tunnels data to the SGSN. The user handset, via the radio interface UTRAN, connects to the SGSN from its current base station. The SGSN then maintains contact with the different radio network controllers (RNCs) so as to preserve the connection while the user roams. The use of such a dual-domain network for VoIP calls is possible through the use of the IMS; however, this is not considered further in this chapter, as it is not relevant here.

Figure 6.10: Streaming takes place via the packet-switched core of a 3G network.

6.5 Mobile TV Streaming Using 3GPP Standards: Packet-Switched Streaming Service

Mobile TV streaming via 3G networks is governed by the 3GPP standards, termed 3GPP-PSS. The basic purpose of the specifications for the streaming service is that there should be uniformity in:

● Definition of protocols for the streaming service
● Definition of video and audio formats to be handled
● Definition of call setup procedures for the streaming service
● Definition of coding standards
● Assuring QoS
● Digital rights management for the streamed services


6.5.1 Definitions of Protocols for the Streaming Service

The 3GPP-PSS services have evolved from a simple packet-streaming service, as in Release 4 of the 3GPP-PSS specifications (2001), to more advanced services under 3GPP Release 6, while maintaining backward compatibility. The latest release is Release 9 (2009). We will discuss why enhancements were required to the initial specifications, which started with Release 4. As may be seen from Figure 6.11, the mobile TV streaming architecture appears very similar to the "normal" streaming architecture used over the Internet. However, this similarity is quite superficial: 3GPP-PSS caters to a different set of video and audio coding formats and file types, and includes negotiation of streaming attributes such as bit rate, screen size, and quality of service.

Figure 6.11: Wireless streaming architecture.

6.5.2 Definition of Video and Audio Formats

The file formats used in Internet streaming cover a wide range. Generally, streaming over the Internet uses Real, Windows Media (VC-1), or Flash-based media compression and streaming formats. 3GPP networks, on the other hand, are characterized by handling video, audio, and rich media data per the file formats defined in the 3GPP releases (Releases 4, 5, and 6). The most commonly used 3GPP version, as of this writing, is Release 6. 3GPP also prescribes image sizes and bandwidth, so content is correctly sized for mobile display screens. The resolutions of QCIF and QVGA for video have been formalized under these standards, and .3gp is the file format used in mobile phones to store media (audio and video). Audio is now standardized as AMR-WB or

AAC-LC formats (for backward compatibility). The .3gp file format is a simpler version of the ISO base media file format (ISO/IEC 14496-12, MPEG-4). The .3gp format stores video as MPEG-4 or H.263. Release 5 added new media types, including synthetic audio (MIDI), subtitles (timestamped text), and vector graphics. Release 6 includes support for H.264 (MPEG-4 AVC) Baseline profile. Support for AMR-WB audio is now common. Support for H.264 is also mandatory for the MBMS services introduced in 3GPP Release 6. The bit rates generated under Release 6 can be up to 128 Kbps. Table 6.2 provides the comparative features of 3GPP-PSS in Releases 4, 5, and 6.

Table 6.2: Comparative Features of 3GPP-PSS in Releases 4, 5, and 6.

Feature: Device Capability Information
  Release 4: None
  Release 5: User Agent Profile exchange
  Release 6: User Agent Profile exchange

Feature: Video Codecs
  Release 4: H.263 P0 L10 (Mandatory); P3 L10 (Optional); MPEG-4 Visual Simple Profile L0 (Optional)
  Release 5: H.263 P0 L10 (Mandatory); P3 L10; MPEG-4 Visual Simple Profile L0
  Release 6: H.263 P0 L10 (Mandatory); P3 L10; H.263 P0 L45 (Optional); MPEG-4 Visual Simple Profile L0/L0b (Optional); H.264 full Baseline (Optional)

Feature: Audio & Speech Codecs
  Release 4: AMR-NB & WB (Mandatory); MPEG-4 AAC LC, LTP (Optional)
  Release 5: AMR-NB & WB (Mandatory); MPEG-4 AAC LC, LTP (Optional)
  Release 6: AMR-NB & WB (Mandatory); AMR-WB+ or aacPlus (Optional)

Feature: Media File Format
  Release 4: 3GPP file format (.3gp); .amr
  Release 5: 3GPP file format, ISO base format conformance (Mandatory); timed text (Optional)
  Release 6: 3GPP file format, with various 3GP file profiles (server, MMS, progressive download, generic) (Optional); DRM (Optional)

Feature: Session Establishment
  Release 4: RTSP (Mandatory); SDP (Mandatory); HTTP (Optional)
  Release 5: RTSP (Mandatory); SDP (Mandatory); HTTP (Optional)
  Release 6: RTSP (Mandatory); SDP (Mandatory); HTTP (Optional); media alternatives in SDP; metadata signalling in SDP (Optional); MBMS FLUTE (Mandatory)

Feature: Data Transport
  Release 4: RTP/RTCP (Mandatory)
  Release 5: RTP/RTCP (Mandatory); progressive download (Optional)
  Release 6: As in Release 5; MBMS download (Mandatory); DRM (Optional); SRTP (Optional)

Feature: QoS/QoE
  Release 4: None
  Release 5: None
  Release 6: Additional RTSP- and SDP-level signalling (Optional); RTCP extensions (Optional); QoE protocol (Optional)

Feature: Rate Control
  Release 4: None
  Release 5: Video only, as per Annex G
  Release 6: Video only, as per Annex G; 3GPP rate adaptation (Optional)


Table 6.3 describes the specifications of audio and video formats and codecs for the streaming service in PSS Release 6. As may be seen, video is prescribed to have a maximum frame size of 176 × 144 pixels and a maximum coded bit rate of 128 Kbps. Standard software such as Apple's QuickTime Broadcaster supports video codecs for MPEG-4 or H.264 and, together with a QuickTime Streaming Server, provides a broadcasting solution that can reach any MPEG-4 player. A number of other broadcast solutions provide real-time MPEG-4 or 3GPP encoding and streaming applications as well as downloadable players.

6.5.3 Mobile TV: Streaming Using a Unicast Session in 3GPP-PSS

The procedure for setting up a unicast real-time streaming protocol (RTSP) session between a mobile device and a streaming server is quite intuitive and straightforward. The client on the mobile (e.g., an HTTP client) selects the location of a media file with an RTSP URL (i.e., a web link). The following sequence of events takes place:

● The media player connects to the streaming server and issues an RTSP DESCRIBE command.

Table 6.3: Codec Formats for Packet-Switched Streaming Service Release 6.

Content                  Codec                                     Support       Max Bit Rate   Remarks
Video                    H.263 Profile 0 Level 10                  Required      128 Kbps       Max frame size 176 × 144
Video                    H.263 Profile 3 Level 10                  Recommended   128 Kbps       Interactive and wireless streaming profile
Video                    MPEG-4 Simple Visual Profile              Recommended   128 Kbps       Max frame size 176 × 144
Video                    H.264 Baseline Profile main subset        Recommended   128 Kbps
Speech                   AMR-NB                                    Required      12.2 Kbps
Speech                   AMR-WB                                    Required      23.8 Kbps      New implementations use AMR-WB
Audio                    MPEG-4 AAC-LTP                            Required      48 Kbps
Audio                    AMR-WB+, Enhanced aacPlus                 Recommended   48 Kbps
Synthetic Audio (MIDI)   Mobile DLS, XMF                           Recommended
Text                     XHTML Mobile, UTF-8, UCS-2                Mandatory
Vector Graphics          SVG-T (Mandatory), SVG Basic (Optional)   Mandatory
Presentation Layout      SMIL 2.0                                  Recommended
Session Setup            RTSP/UDP, SDP                             Mandatory                    Support of other bit rates and languages optional in Release 6

● The server responds with a session description protocol (SDP) message giving the description of media types, the number of streams, and the required bandwidth.
● The player (media client) analyzes the description and issues an RTSP SETUP command. This command is issued for each stream to be connected.
● After the streams are set up, the client issues a PLAY command.
● On receiving the PLAY command, the streaming server starts sending RTP packets to the client over UDP.
● The connection is cleared by the client, when desired, by issuing a TEARDOWN command.

The RTP, RTCP, RTSP, and SDP commands are as per the relevant RFC standards.

Figure 6.12: Streaming session setup in 3GPP-PSS.

The service announcement will typically be provided by XML code that contains a description of each channel and the media type (3GPP). For example, the XML code for displaying three channels (Discovery, Fox News, and ESPN) by an operator having a website live.XYZtv.com may be as follows (note that 554 is the TCP or UDP port used by the RTSP protocol, and the file type is a micro content description file, .mcd):
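The original listing was not reproduced in this copy; a hypothetical sketch of such a micro content description file, with invented element names and the operator's rtsp links on port 554, might look like:

```xml
<!-- Hypothetical .mcd channel listing; element names are illustrative -->
<mcd>
  <channel>
    <name>Discovery</name>
    <link>rtsp://live.XYZtv.com:554/discovery.3gp</link>
    <type>video/3gpp</type>
  </channel>
  <channel>
    <name>Fox News</name>
    <link>rtsp://live.XYZtv.com:554/foxnews.3gp</link>
    <type>video/3gpp</type>
  </channel>
  <channel>
    <name>ESPN</name>
    <link>rtsp://live.XYZtv.com:554/espn.3gp</link>
    <type>video/3gpp</type>
  </channel>
</mcd>
```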




Such code will be interpreted on the mobile to show a listing of the channels Discovery, Fox News, and ESPN, which can then be selected by the user, in turn opening an RTSP session with the server.
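A handset client's interpretation of such a listing can be sketched as a small parser that turns the XML into a menu of (name, rtsp link) pairs. The element names used here are illustrative assumptions, since the actual .mcd schema is operator specific:

```python
# Sketch of how a handset client might turn a channel-listing XML file
# into a menu of (name, rtsp_link) entries. The <channel>/<name>/<link>
# element names are illustrative assumptions.
import xml.etree.ElementTree as ET

MCD = """\
<mcd>
  <channel><name>Discovery</name>
    <link>rtsp://live.XYZtv.com:554/discovery.3gp</link></channel>
  <channel><name>Fox News</name>
    <link>rtsp://live.XYZtv.com:554/foxnews.3gp</link></channel>
  <channel><name>ESPN</name>
    <link>rtsp://live.XYZtv.com:554/espn.3gp</link></channel>
</mcd>
"""

def parse_channel_list(xml_text: str) -> list[tuple[str, str]]:
    root = ET.fromstring(xml_text)
    return [(ch.findtext("name"), ch.findtext("link"))
            for ch in root.findall("channel")]

for name, link in parse_channel_list(MCD):
    print(f"{name}: {link}")
```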

Figure 6.13: 3GPP-PSS protocol stack.

6.5.4 Maintaining Quality During Streaming: RTCP

Once data transmission starts using RTP, the RTCP session remains active and is used for feedback between the user terminal and the streaming server. In Release 4 of 3GPP-PSS, the feedback is sent once every five seconds, and the RTCP bandwidth is restricted to 5% of the total bandwidth for connections at 64 Kbps. Release 5 of the PSS introduced the concept of the user agent profile. Using this feature, the client on the mobile can signal its capabilities to the server in terms of the number of audio channels, the media types supported, bits per pixel, and screen size. This information is used by the streaming server to connect the appropriate streams to the client, and is furnished during session initiation. Release 5 has two modes of RTCP feedback. Mode 1 is normal feedback, i.e., once every 5 seconds. Mode 2 (urgent feedback) permits feedback more frequently and at a higher transmission rate to ensure good streaming quality.
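The 5% RTCP allowance quoted above translates into a small, fixed reporting budget. A back-of-the-envelope check, using the Release 4 figures (one report every 5 seconds on a 64 Kbps connection):

```python
# RTCP feedback budget for a 64 Kbps connection, per the 5% rule and
# the once-per-5-seconds reporting interval quoted above.

SESSION_BW_BPS = 64_000          # media session bandwidth
RTCP_SHARE = 0.05                # RTCP limited to 5% of total bandwidth
REPORT_INTERVAL_S = 5            # one feedback report every 5 seconds

rtcp_bw_bps = SESSION_BW_BPS * RTCP_SHARE
bytes_per_report = rtcp_bw_bps * REPORT_INTERVAL_S / 8

print(f"RTCP bandwidth: {rtcp_bw_bps:.0f} bps")
print(f"Budget per report: {bytes_per_report:.0f} bytes")
```

This works out to 3,200 bps of RTCP bandwidth, or about 2,000 bytes per report, which is ample for receiver reports and small extensions.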

6.5.5 Progressive Download

PSS Release 6 (2004) added a number of new features for reliable streaming and, most importantly, digital rights management. New protocols have been proposed for reliable streaming, which include features for retransmission of information, including progressive download using HTTP and RTP/RTSP over TCP. These ensure that no information is lost in the streaming process. This type of streaming is suitable for downloads and less so for live TV. The PSS protocols have also been enhanced to provide QoS feedback to the server, conveying information on lost packets, error rate, and so on. New codec types, i.e., MPEG-4 AVC (H.264) and Windows Media 9, have also been recommended. PSS Release 6 also requires support of digital rights management per 3GPP TS 22.242.

6.5.6 Enhancements in 3GPP-PSS in Release 7

Although the inclusion of new media types (H.264 for video and AMR-WB for audio) improved the streaming experience, there was a further need for improvement. Release 7 of 3GPP introduced new features, including:

● Fast Channel Switching (FCS): Improves the viewing experience by reducing the initial channel setup time and providing fast channel-changing capability.
● 3GPP Rate Adaptation: Improves the viewing experience by enabling quick and uninterrupted adjustment of the bit rate to match the actual wireless channel, reducing the instances of buffering that interrupt viewing.
● Firewall Traversal: Designed to reduce user problems due to firewalls, which block different categories of traffic.
● Keep-Alive: The connection over a wireless network can be lost due to transmission conditions. Release 7 permits the link to be kept alive, even beyond the 1-minute time limit applied in earlier releases.

6.5.7 Quality of Experience in Mobile Video

Mobile networks provide a highly variable environment for the transport of video, as a result of both variations in wireless reception and the overall loading by multiple applications. Consequently, the data rate as well as the quality of the video received can vary significantly during a single session. A viewer's quality of experience (QoE) is now widely used as a parameter in setting up and maintaining video applications over mobile networks. The variation of video quality in a session can happen for a number of reasons. If the channel data rate cannot sustain the video stream being received, the streamer may switch to a lower bit rate stream, drop frames (or move to a lower frame rate), or cap the bit rate of the original video stream. A user experiences these as poor frame quality, instances of video buffering, frame stalling, and loss of audio-video synchronization. As an example, 3G streaming services for live TV (such as MobiTV) are subject to handset as well as network limitations. Resources are very limited, even in 3G networks, and the initial offering, e.g., by Sprint Nextel, had a frame rate of 7 fps; the rate dropped to 1–3 fps on some networks using Java applets for delivery. All of these factors can potentially lead to a perception of lower QoE. Maintaining audio quality is also very important in mobile networks, because although many users may tolerate lower-quality video, lower audio quality is much less tolerated. For this reason, audio bit rates are always maintained to ensure good QoE. 3GPP Release 5 defined both QoS and QoE attributes in the 3GPP packet-switched streaming service. These include parameters such as frame size (i.e., the maximum screen size supported), the predecoder buffer attribute, QoE metrics, the adaptation attribute (which provides the bit rates available), the QoS attribute, and others.

6.6 Broadcasting to 3GPP Networks

6.6.1 A Simplified View of a 3GPP Broadcasting Headend

A simplified 3GPP "headend" comprises two servers:

● A broadcast server for encoding audio and video content and IP encapsulation into IP/UDP/RTP packets
● A streaming server for providing multiple unicast RTP streams to multiple mobile handsets

Figure 6.14: A simplified view of a 3GPP headend.

In a headend, the video and audio sources can be a satellite decoder/receiver or stored audio and video content. The broadcast server encodes the stream using H.263/MPEG-4 video encoders and AAC-LC or AMR-WB audio encoders. The streaming server sets up one-to-one unicast connections to the mobiles whose users request a particular video by streaming. For this purpose, the mobiles access the stream through a command such as rtsp://server/filename. The session is then established via RTSP, and the IP packets use RTP delivered over UDP. The video resolution of the encoded stream is limited to 3GPP values, e.g., QCIF size at 15–30 fps with bit rates of up to 128 Kbps. The audio-video data is decoded at the receiving end (i.e., a mobile phone) using a 3GPP player embedded in the handset.

Quick Facts: Steps a 3G Operator Must Take for a Mobile TV Service

● Make available unlimited data download packages to be offered as part of mobile TV.
● Check the capacity available in each cell. This requires a study of hourly voice and data traffic, including traffic to sites such as YouTube. If the network is HSPA or EV-DO, the capacity constraints will be much smaller.
● Prepare application clients that can be downloaded by users. These clients obviate the need for users to type in http or rtsp links.
● For video on demand, integrate a payment mechanism into the application client. Where monthly subscriptions are involved, link these to the IMEI number of the handset.

6.7 Examples of Streaming Platforms

6.7.1 QuickTime Broadcaster

Apple's streaming solution comprises a Mac OS X 10.5 server (or a Darwin Streaming Server 6) with QuickTime and the QuickTime Broadcaster software. The QuickTime Streaming Server forms part of the server software. The server supports MPEG-4, H.264, AAC, AMR-WB, MP3, and 3GPP as permissible file formats. It can serve 3GPP streams on either a unicast or a multicast basis for mobile networks. Both modes of delivery, i.e., streaming and progressive download, are supported. It uses the industry-standard RTP/RTSP protocols for streaming. Live programs can be encoded using QuickTime Broadcaster, and on-demand streaming is also possible.

6.7.2 Model 4Caster™ Mobile Encoder Solution from Envivio

The Model 4Caster has been designed for broadcasting on mobile TV networks with multistandard support. The 4Caster can accept a video input in any format and presents its output in eight simultaneous profiles, covering 2.5G, 3G, 3.5G, DVB-H, DMB, ISDB-T, and Wi-Fi or WiMAX networks. It supports both 3GPP and 3GPP2 network standards. The encoder outputs offer simultaneous streaming and broadcasting of mobile video at multiple bit rates. It offers live bit rate switching, which enables the encoder to adjust bit rates based on network conditions. The encoder also supports content protection using ISMAcrypt.


Figure 6.15: Mobile TV streaming using Envivio encoders and multiformat streaming server.

6.7.3 Vidiator Xenon™ Platform

The Xenon platform from Vidiator is another example of a mobile TV platform, including live TV encoding (using the Xenon Live Encoder, XLE), streaming (Xenon Streamer), and offline encoding for preparing VoD content in mobile-suitable formats (Xenon Offline Encoder, XOE). The platform also has other components, such as multicast to multiple streaming servers (Xenon Live Relay) and admin modules for back-end and billing support. One of the strong points of the Xenon platform is its fast channel switching feature (provided by the fast channel switching module). The Xenon Streamer also supports dynamic bit rate adaptation (DBA), which enables the platform to tailor its stream delivery to wireless conditions and the client profile. The Xenon servers support both Release 5 codecs (H.263, MPEG-4, AAC, and AMR) and Release 6 codecs (H.264, AMR-WB, aacPlus). An example of a service using the Xenon platform is the mobile TV service provided by the 3G operator "3" in Scandinavia. It is also used by the Italian music and mobile TV network Buongiorno.

6.8 Practical Implementation of Video Services over 3G Networks

6.8.1 Setting Up a 3GPP Streaming Service

Setting up a 3GPP streaming service involves the following steps:

● Setting up a website that can be accessed by users for service information (SDP).
● Setting up a streaming server on which content in H.263/H.264 and AAC is supported. The encoding services can provide encoding of live content.
● Accepting RTSP requests for connection and providing live streaming using RTP/UDP.


An open source server available for streaming 3GPP content is the Darwin Streaming Server.
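As an illustration of the service-information step, a minimal SDP description for a 3GPP stream (H.263 video plus AMR speech) might look as follows. The addresses, port numbers, and session name are invented for the example, and a production description would normally carry additional 3GPP-specific attributes:

```
v=0
o=- 3344556677 3344556678 IN IP4 203.0.113.10
s=XYZtv Mobile News (example)
c=IN IP4 203.0.113.10
t=0 0
b=AS:128
m=video 0 RTP/AVP 96
a=rtpmap:96 H263-2000/90000
a=fmtp:96 profile=0;level=10
a=control:trackID=1
m=audio 0 RTP/AVP 97
a=rtpmap:97 AMR/8000
a=fmtp:97 octet-align=1
a=control:trackID=2
```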

6.8.2 Sizing for Streaming Loads

Mobile streams present a streaming requirement of 20 Kbps to 384 Kbps per unicast stream, depending on the encoding format, screen resolution, and frame rate selected. In itself, such a requirement is quite modest. For example, a Mac QuickTime Streaming Server (QTSS) can handle the following numbers of simultaneous streams:

Live streams:
● 20 Kbps (AAC audio): 10,000
● 64 Kbps (MPEG-4 video and AAC audio): 2,500
● 300 Kbps (MPEG-4 video and AAC audio): 1,500

On-demand streams:
● 20 Kbps (AAC audio): 8,000
● 64 Kbps (MPEG-4 video and AAC audio): 2,000
● 300 Kbps (MPEG-4 video and AAC audio): 1,000
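The aggregate throughput implied by these per-server figures can be sanity-checked with simple arithmetic; the script below multiplies streams by bit rate for the on-demand cases (a sketch of the dimensioning, not a vendor benchmark):

```python
# Aggregate throughput implied by the QTSS simultaneous-stream figures
# quoted above (on-demand case): streams x bit rate, in Mbps.

ondemand = {20: 8000, 64: 2000, 300: 1000}  # kbps -> simultaneous streams

for kbps, streams in ondemand.items():
    mbps = kbps * streams / 1000
    print(f"{streams} streams at {kbps} kbps = {mbps:.0f} Mbps")
```

The 300 Kbps case works out to 300 Mbps, matching the throughput figure quoted in the text.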

These specifications correspond to a streaming throughput of about 300 Mbps, delivered over a 1G Ethernet interface. However, this will not be enough for most servers that are designed to serve a large region and carry popular channels. At the same time, it is impractical to simply multiply the servers, as the load on the backhaul links also multiplies, quickly reaching unmanageable levels. It is therefore necessary to use one of the following techniques:

● Use multicast to the routers within the network and unicast from the routers at the network edge. This multicast/unicast combination transmits each channel in only one instance up to the network edge, and uses unicast to thousands of users thereafter, keeping the load on network links at manageable levels.
● Use a content delivery network, where the content is cached at the network edge in content delivery servers and is streamed from these servers rather than from a central server. Content delivery solutions are provided by many vendors; an example is the Cisco Content Delivery System (CDS). However, for most operators and content providers it is more practical to use third-party content delivery services such as those provided by Akamai, Limelight Networks, and others.

6.8.3 Multicast–Unicast Transmission and Transcoding Services

A multicast/unicast solution is shown in Figure 6.16. The network core has a multicast structure, which is implemented by all routers. A multicast-to-unicast conversion then takes place at the network edge, by a device such as the X-works Stream Switcher™, which converts a multicast stream into unicast streams while also transcoding content. In view of the large range of device types, such as the iPhone 3G, BlackBerry, Nokia N95/N96, HTC, Samsung Cookie, Android G1, and Palm Treo (examples selected at random from a wide range of devices), all with different screen resolutions and characteristics, and the multiple formats in which live TV streams may be available, it is common to use on-demand and online transcoding services.

Figure 6.16: A typical mobile TV multicast-unicast solution.

6.8.4 Fast Channel Switching

One of the important parameters in the user's perception of the quality of mobile video services is the ease and rapidity with which he or she can change channels. Until Release 6, 3GPP streaming supported channel selection by visiting the website where the content options are displayed and requesting a new stream. This involves ceasing to watch the present channel, making an HTTP request, and waiting until the new channel's playout begins, which can be an annoying process. 3GPP has now standardized the fast channel switching (FCS) feature in Release 7 of 3GPP; interoperability testing of this feature was completed in January 2009. In this option, the channel information (or channel list) is conveyed to the mobile handset at the time of the initial connection, and the handset maintains this information. When the user selects a new channel, the request goes out as an "out-of-band" HTTP request for the selected channel, and the server switches directly to the new channel. Not only does this provide a faster channel switching time, it also allows uninterrupted viewing of channels, by avoiding visits to the home page for channel selection. The RTSP session in this case is maintained without interruption.


Figure 6.17: Fast channel switching by maintaining RTSP session.

Most commercially available mobile streaming servers today support this capability. Examples include Xenon Server and Helix Server. The mobile handsets need to have a browser or client that can interact with the web server and play the streams.

6.8.5 Streaming RealVideo to Mobile Devices

It is also possible to stream video in the RealVideo format. This requires the use of a Helix streaming server. However, unlike 3GPP content, which plays natively on all 3G phones, RealVideo can be played only on "supported devices", i.e., devices on which RealPlayer is available or has been downloaded. RealPlayer is supported on select Symbian S60 (e.g., Nokia 7650 and 3650) and Palm OS 5 phones, among others. The latest list of supported devices can be obtained from: http://www.realnetworks.com/industries/serviceproviders/mobile/products/player/index.html Mobile devices with RealPlayer will also play 3GPP content.


The Helix Streaming Server can be installed on Windows or Linux machines and can handle both live and on-demand video.

6.8.6 Streaming Windows Media to Mobile Devices

Many content providers may wish to stream video and audio using Windows Media (the VC-1 format). This can be done using Windows Media Server for Mobile. Unlike 3GPP content, streams in Windows Media will play only on supported devices. The latest list of Windows Mobile supported devices can be obtained from: http://www.microsoft.com/windowsmobile/devices/default.mspx Windows Media mobile streaming is available as a service of Windows Server 2003. The Windows streaming server can be used in conjunction with the Windows Media 9 Encoder and Windows Media Audio 10 Professional encoders. It is possible to encode both live TV and video files for delivery via streaming. Windows Media is one of the formats supported by MobiTV, along with Flash Video, MPEG-4, and 3GPP.

6.8.7 Delivering Flash Video Lite to Mobile Devices

Flash video content has gained popularity on account of its widespread use on sites such as YouTube and many other video delivery networks. Flash Video is delivered using a technique called progressive download, and requires the Flash Media Server (FMS) to provide progressive download support. Flash Lite content can be played on any mobile device that has a Flash Lite plug-in for the browser; the Flash Lite player can also be downloaded to many mobile phones. The number of devices on which Flash Lite is supported is very large. The list of devices supported for Flash Video Lite can be obtained from: http://www.adobe-flashlite.com/?page_id=4

6.8.8 Streaming DivX Content to Mobile Phones

DivX is one of the popular video formats for home theaters; it provides efficient video coding and small file sizes. For this reason, its use in mobile devices has also been growing, particularly with the availability of 8 GB and larger storage in memory cards. DivX Player Mobile is a downloadable player and works on specified mobile devices. A list of such devices can be obtained from: http://labs.divx.com/MobileCommunity


Figure 6.18: DivX mobile media.

DivX content can be streamed using DivX Server 1.3 (and other versions). It is available for download from the DivX labs website.

6.8.9 Example of a Multiformat Mobile TV Delivery Service: MobiTV

An example of a mobile TV service featuring multiformat content aggregation and delivery is MobiTV. The service is available in the United States (AT&T, Sprint, Verizon, and regional carriers), in Canada (Bell Canada, Rogers, and TELUS Mobility), and in Central and South America, Mexico, and the Caribbean, and provides over 40 channels of live TV content and video on demand. In 2009, the service was available at a flat rate of US$9.95 per month, charged over and above the data rate plan. The channels available included Fox, ABC News, Animal Planet, Comedy Time, Discovery, Oxygen, and others. Over 200 handsets can receive MobiTV telecasts, and in 2009 MobiTV had over 6 million customers. The MobiTV service streams video on the data channel, and the quality depends on a number of factors, including network conditions and the phone selected. MobiTV operates an Optimized Delivery Server (ODS), which is designed to support delivery over multiple types of networks, including mobile 3G, Wi-Fi, WiMAX, and terrestrial broadcast networks.


Figure 6.19: MobiTV multiformat services.

In the United States, all major carriers (Verizon Wireless, Sprint Nextel, AT&T, and T-Mobile) have moved to 3G services (CDMA2000, EV-DO, 3G GSM, or HSDPA). These services are characterized by speeds of 400–700 Kbps, with bursts of up to 2 Mbps per user.

6.8.10 Content Delivery Networks

Although it is possible for any content provider to set up its own 3GPP streaming servers, there are many issues that need to be addressed, including:

● Handling of multiple handset types
● Load balancing of unicast traffic streams
● Integration into carrier data billing plans

For these reasons, it is more practical to use the content delivery services offered by major carriers. Most mobile operators today offer their own content delivery networks, which provide the full range of services: downloading of content, its ingest, encoding to 3GPP (and to other formats such as Flash, Real, Windows Media, QuickTime, or DivX as needed), storage on streaming servers, advertisement insertion, and streaming to different types of networks and handsets. A CDN replicates content from the base server to a number of proxy servers spread throughout the network to provide the best QoS in streaming. The content on such servers can be mobile video, audio, massively multiplayer online gaming (MMOG), feeds for social networking sites, RSS feeds, and software download services.

6.8.11 Delivering Mobile TV to the iPhone 3G

An application available from Streamezzo® and Atos® Origin enables the delivery of live mobile TV to iPhone 3G handsets, based on the rich media technology developed by Streamezzo. This is an enabling application that needs to be supported by the 3G operator. It is designed to use the embedded video player of the phone and provide live TV as well as a channel grid, an EPG, and a video-on-demand catalog. The functional elements of the solution include a rich media streaming server, a rich media client, an interactive EPG, and a content generation platform. The application can be downloaded from the Apple iTunes Store. Operators have considerable flexibility in configuring this service as well as targeting it to Android, Windows Mobile, Brew, and Symbian phones in addition to the iPhone 3G. Streamezzo is known for its rich media applications and had earlier launched a digital radio service for mobile devices. The radio and mobile TV applications can also be delivered to devices other than the iPhone, such as the BlackBerry Storm.

6.9 Operator-Specific Issues in 3GPP Streaming Services

As 3GPP streaming takes up significant resources of a 3G network, operators try to maintain close control over the way these services are provided from their mobile networks. Customers subscribing to mobile TV services are required to subscribe to an unlimited data package. Even with this revenue source, however, operators sometimes insist that the 3GPP servers be linked to their networks, and consequently expect such servers to provide APIs for billing and traffic estimation. Operators may also block traffic from RTSP sites on the Internet.

6.10 Multimedia Broadcast and Multicast Service (MBMS)

The potential limitations of 3G networks for unicast streaming of high-usage TV traffic have led to the consideration of multicast technologies, which are inherently more suitable and less resource-intensive, particularly for live TV channels. Live TV channel traffic is essentially of a multicast nature, with all users viewing identical streamed content, and multicast networks are ideally suited to such delivery. In multicast networks, each content channel is allocated one transport channel in each cell area, irrespective of the number of users watching. In an MBMS service, all routers need to repeat the multicast transmission in each cell. It is estimated that one 64 kbps multicast channel requires approximately 5% of the carrier power; a 128 kbps channel requires 10% of the carrier power. This implies that up to 10 × 128 kbps or 20 × 64 kbps multicast channels can be supported per carrier in a cell area (the number of channels can vary depending on cell topography and the type of receivers used). MBMS is an in-band broadcast technique, as opposed to other broadcast technologies for mobile TV such as DVB-H. It uses the existing 3G spectrum by allocating spectrum or carrier resources to the multicast transport channels in each cell. The cells can have both unicast and multicast channels, depending on the traffic dimensioning. MBMS is essentially a software-controlled feature that enables the dedication of transport channels to multicast TV. Typically, only those channels that are of high viewership interest would be multicast in MBMS networks.
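The carrier-power arithmetic above can be sketched as a quick planning calculation. The 5%-per-64-kbps figure is the estimate quoted in the text; the function name and the linear power-scaling assumption are illustrative only.

```python
# Planning estimate from the text: one 64 kbps multicast channel consumes
# roughly 5% of the carrier power, and power scales linearly with bit rate.
POWER_PCT_PER_64K = 5.0

def max_multicast_channels(channel_kbps: float, usable_pct: float = 100.0) -> int:
    """Estimate how many multicast channels of a given bit rate fit on one carrier."""
    pct_per_channel = POWER_PCT_PER_64K * (channel_kbps / 64.0)
    return int(usable_pct // pct_per_channel)

print(max_multicast_channels(64))   # 20 channels of 64 kbps
print(max_multicast_channels(128))  # 10 channels of 128 kbps
```

In practice the usable power fraction would be well below 100%, since the carrier must also serve unicast voice and data traffic in the same cell.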

Figure 6.20: Audiences for broadcast and multicast channels.

As the name suggests, the Multimedia Broadcast and Multicast Service operates in two modes. In broadcast mode, the service is available to all users without any differentiation (such as payment status); users can receive the channel with a requested QoS. In multicast mode, the channel is available only to selected users in a selected area; these users may be provided the service based on payments or subscription.

6.10.1 MBMS Service Setup Procedure

The MBMS service setup works as follows:

● Service announcement: Operators would announce the service using advertising or messaging, etc. The announcement may go only to subscription customers in the case of multicast services.
● Joining: The multicast users can indicate that they would be joining the service. The joining can be at any time, but only the authorized users will receive the multicast service. In broadcast mode, all users would receive the service.
● Session starts: The requisite resources are reserved in the core network as well as the radio networks.
● MBMS notification: A notification goes out of the forthcoming service.
● Data transfer: The data transfer commences and is received by all users in the selected group. In broadcast mode, all users would receive the data, which is without any encryption. In multicast mode, the data is encrypted and only the authorized users receive the service.
● Leaving or session end: In multicast mode, the users may leave the session at any time, or the session ends after the data transfer is completed.
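The phase sequence of an MBMS session can be sketched as a simple ordered flow. The phase names follow the list above; the class layout is only an illustration, not the 3GPP signaling itself.

```python
from enum import Enum, auto

class MbmsPhase(Enum):
    """Phases of an MBMS session, in the order described above."""
    SERVICE_ANNOUNCEMENT = auto()  # operator advertises the service
    JOINING = auto()               # multicast users indicate they will join
    SESSION_START = auto()         # resources reserved in core and radio networks
    NOTIFICATION = auto()          # users notified of the forthcoming service
    DATA_TRANSFER = auto()         # encrypted in multicast mode, clear in broadcast mode
    SESSION_END = auto()           # users leave, or the transfer completes

def session_phases(broadcast_mode: bool) -> list:
    """Ordered phases for a session; broadcast mode has no explicit joining step."""
    return [p for p in MbmsPhase
            if not (broadcast_mode and p is MbmsPhase.JOINING)]

for phase in session_phases(broadcast_mode=False):
    print(phase.name)
```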

Figure 6.21: MBMS delivery—MBMS reserves transport capacity in all cells for multicast TV.

6.10.2 MBMS Media Types and Service Definition

MBMS supports two types of delivery:

● Streaming
● File download (unidirectional, using FLUTE, the File Delivery over Unidirectional Transport protocol)

Both types of delivery are supported over the two modes of transmission: broadcast and multicast. The encoders and receiver clients need to support H.264 for media encoding.

6.10.3 Requirements for Establishment of an MBMS Service

The establishment of an MBMS service requires the following:

● Placement of a broadcast/multicast content server in the network.
● Adding MBMS controlling functions to the GGSN, SGSN, and Radio Access Network (UTRAN). The MBMS operation requires the establishment of an MBMS Traffic Channel (MTCH), Control Channel (MCCH), and Scheduling Channel (MSCH) per the functionalities defined in 3GPP release 6. The radio link needs to operate in an unacknowledged mode for unidirectional data transfer (radio link control—unacknowledged mode; RLC-UM).
● Adding MBMS capabilities in handsets (MBMS client functions). The client functionality involves the capability of mobiles to negotiate MBMS protocols, handle media encoding defined in MBMS, and support unidirectional file transfer protocols such as FLUTE.

6.10.4 Practical Issues in Implementation of MBMS Services

Despite 3GPP having released MBMS as a standard for multicast and broadcast of video as early as 2005, there have been practical issues in its implementation that have prevented any significant rollouts of the service. These issues include:

● Transient standards, with some essential features of MBMS having been defined only in release 6 of 3GPP.
● The need to upgrade equipment to MBMS capability, and the cost trade-offs involved.
● The need to have MBMS clients in handsets, and the question of how many handsets in the field will actually be MBMS-capable.
● MBMS is a unidirectional service and is better suited for unpaired spectrum. Not all operators have such spectrum, and existing 3G spectrum is expensive and heavily used for "paying" voice and data traffic.

6.10.5 Telecom Italia MBMS Trial

Telecom Italia, in cooperation with Huawei and Qualcomm, conducted an MBMS trial in April 2008, which was successful in demonstrating high-quality MBMS streaming at up to 256 Kbps in indoor and outdoor environments. Huawei provided the MBMS equipment for the trial, and Qualcomm provided the MSM 7021A chipset, which was used in the handsets for testing.

6.10.6 TDtv Mobile TV Services

IPWireless® brought out a technology, called TDtv™, that provides MBMS services using the unpaired frequency slots in the 3G-GSM spectrum (TDD). The TDtv technology is based on the use of TD-CDMA technology, and a 5 MHz slot can provide about 28 mobile TV and radio channels to an unlimited number of users. The technology was successfully tested in the networks of Vodafone, Orange, and T-Mobile UK. Sprint Nextel, which owns spectrum in the 2.5 GHz band, is currently using the IPWireless TDtv technology to offer live TV services. NextWave Wireless and its subsidiary PacketVideo have brought out a full solution for 3GPP-compliant MBMS implementation on handsets.

Quick Facts: Implementing a 3GPP-MBMS Platform

Carrier platform: IPWireless
Handsets: NextWave's TDtv Device Integration Pack, which includes:
● a low-power TDtv System in Package (SiP)
● a complete MBMS software stack
● MediaFusion™ multimedia client software from PacketVideo Corporation (PV)

The TDtv SiP is a complete handset solution including the RF chip, TDtv baseband chip, and associated components that can interface directly with the handset application processor.

6.10.7 3GPP Integrated Mobile Broadcasting (IMB)

3GPP has finalized standards for broadcasting based on TDD spectrum under IMB, which is part of 3GPP release 8. These standards lay down the terminal specifications and the protocols to set up and release such calls. TDD spectrum is available in blocks of 5 MHz, and the IMB standard provides for a total of 20 broadcast channels at 256 Kbps each. The channels can carry a mix of unicast and multicast content. Seamless handover and roaming facilities are provided for, along with the specification of an enhanced EPG.

6.11 Mobile TV Services Based on CDMA Networks

CDMA-based networks, built on technologies such as CDMA2000 and 1xEV-DO, fall under the purview of 3GPP2. Although there are no major differences in the way streaming is done to mobile devices, there are differences in the media encoders and the standards to which video and audio can be encoded.

6.11.1 3GPP2 Encoders and File Formats

3GPP2 video and audio coding specifications were released initially for CDMA2000 networks. The key features are as follows:

Video: H.263 and MPEG-4 Visual Profile
Audio: AAC, AMR, AMR-WB
Speech: EVRC, QCELP
File format: ISO container
Presentation: 3GPP2 SMIL

Most commercially available encoding solutions provide encoding for both 3GPP and 3GPP2, and transcoding solutions are also available. The same is true of streamers, which can provide streaming in 3GPP or 3GPP2.
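As a sketch of how encoding to these profiles is commonly scripted, the snippet below assembles an FFmpeg command line for a 3GPP2-style target (H.263 video with AMR-NB audio in an ISO container). FFmpeg is just one possible tool; the codec names shown (`h263`, `libopencore_amrnb`) depend on how the binary was built, and the file names and bit rates are hypothetical.

```python
def build_3g2_encode_cmd(src: str, dst: str) -> list:
    """Assemble a hypothetical FFmpeg invocation for a 3GPP2-style profile."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "h263",                # H.263 video, per the 3GPP2 profile above
        "-s", "352x288",               # H.263 supports only fixed sizes such as CIF
        "-b:v", "300k",
        "-c:a", "libopencore_amrnb",   # AMR-NB audio (needs an FFmpeg build with it)
        "-ar", "8000", "-ac", "1",     # AMR-NB is 8 kHz mono
        "-b:a", "12.2k",
        dst,                           # a .3g2 extension selects the 3GPP2 muxer
    ]

print(" ".join(build_3g2_encode_cmd("source.mp4", "clip.3g2")))
```

The command is only constructed here, not executed; in practice it would be run via a shell or subprocess after verifying that the local FFmpeg build supports these codecs.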

6.11.2 Compatible Handsets

1xEV-DO does not itself provide compatibility with CDMA2000 networks; this needs to be achieved using multimode handsets. The ready availability of such handsets, compatible with CDMA2000 networks, has led to increasing use of 1xEV-DO services. Users have the flexibility to receive incoming voice calls (CDMA2000 1x) while downloading data using 1xEV-DO. Newer versions of handsets also provide support for GSM/GPRS networks. Almost all handset categories, such as PDAs, BlackBerry devices, music phones, feature phones, and smartphones, now support dual standards.

6.11.3 Mobile TV Using 1xEV-DO Technologies

EV-DO has been widely deployed in a number of networks in Japan (KDDI), Korea (SK Telecom and KTF), and the United States (Verizon and Sprint), as well as in other countries. Almost all CDMA2000 and 1xEV-DO operators provide live TV and video streaming/download services through these networks.

The 1xEV-DO networks have the flexibility to support both user- and application-level QoS. Applications such as VoIP can be allocated priority using application-level quality of service, which helps this delay-sensitive service to work well even in high-usage environments. User-level QoS allows the operator to offer premium services such as mobile TV. The 1xEV-DO packet scheduling, combined with Diff-Serv-based QoS mechanisms, can enable QoS within the entire wireless network.

Verizon launched its V CAST service in 2005, which provides streaming audio and video clips (news, weather, entertainment, and sports). The service using 1xEV-DO can be delivered at 400–700 kbps with burst speeds of 2 Mbps. V CAST started with Windows Media 9 as the media encoding but later shifted to Real using Helix servers. Subsequently, in 2007, Verizon also launched a V CAST service based on MediaFLO technology. The launch from Sprint on EV-DO was its Power Vision service.

6.11.4 CDMA 1xEV-DV Technology

Mention must be made of the 1xEV-DV (data and voice) networks, as these provide compatibility with the CDMA2000 architecture. The operation can be extended to 3x mode with multicarrier operation, and peak data rates of 3.072 Mbps downlink and 451 kbps uplink can be achieved. The system has advanced features such as adaptive modulation and coding (QPSK, 8 PSK, and 16 QAM) and variable frame duration. The mobile device can select any of the base stations in its range.

Figure 6.22: Verizon VCAST service.

6.11.5 CDMA2000 1xEV-DO Networks

1xEV-DO has already been deployed widely across the United States (Verizon, Sprint, and Alltel), Canada (Bell Mobility), Korea (SK Telecom and KT Freetel), Japan (KDDI), Australia (Telstra), New Zealand (Telecom New Zealand), and a number of other countries. Sprint and KDDI have already moved to 1xEV-DO Rev A, and others are in the process of launching Rev A services. These operators now leverage the data capabilities of the networks to deliver a number of innovative multimedia services such as music/video streaming, videophone, and live TV broadcast. In the United States, Sprint and Verizon are the largest operators, with coverage of over 200 cities each.

6.12 Other Multimedia Services over 3G Networks

So far in this chapter, we have focused primarily on the streaming of video, either unicast or multicast. However, the utility of 3G applications does not end with streaming of video and audio. In fact, the networks today are dominated by other applications such as video calling, MMS, social networking services, podcasting, location-based applications, and many types of interactive games, among a very large range. To see how these applications are supported, we need to review the classes of service in 3G networks and how these classes meet the new service requirements.

6.12.1 UMTS Quality of Service Classes

UMTS networks have the following classes of traffic, which are distinguished primarily by how sensitive they are to the delays that might be experienced in the network:

● Conversational class
● Streaming class
● Interactive class
● Background class

Conversational class

The conversational class is the most delay-sensitive, and the background class is used for non-real-time services such as messaging. The conversational class is designed for speech and face-to-face communications such as video telephony, for which the acceptable delay should not exceed 400 ms. In UMTS, the conversational class is used to provide the AMR speech service as well as the video telephony service (H.324 [mobile], or 3G-324M).

The speech codecs in UMTS use the adaptive multirate (AMR) coding technique. The AMR codec can be controlled by the radio access network to enable interoperability with existing 2G cellular networks such as GSM (EFR codec, 12.2 kbps) or the U.S. TDMA speech codec (7.4 kbps). The bit rates possible for AMR are 12.2, 7.95, 7.4, 6.7, 5.9, 5.15, and 4.75 kbps.

Video telephony standards for PSTN networks are prescribed by the ITU-T H.324 recommendations and have long been in use in video telephony and conferencing applications over PSTN/ISDN connections. H.324 uses H.263 as the video codec and G.723.1 (ADPCM) as the speech codec. The audio, video, and user data are multiplexed using an H.223 multiplexer, which gives a circuit-switched bit rate of n × 64 kbps. Conversational calls on mobile 3G networks have followed a modified version of the PSTN standard called H.324(M) or 3G-324M. The standard has the agreement of both the 3GPP and 3GPP2 fora and is in use on 3G networks for the conversational class.
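The AMR mode set can be illustrated with a small helper that picks the highest mode fitting a granted bit-rate budget. In a real network the mode is commanded by the radio access network; the selection policy below is only a sketch.

```python
# AMR narrowband bit rates (kbps) as listed above, highest first.
AMR_MODES_KBPS = [12.2, 7.95, 7.4, 6.7, 5.9, 5.15, 4.75]

def select_amr_mode(budget_kbps: float) -> float:
    """Pick the highest AMR mode that fits within the bit-rate budget
    granted by the radio access network (illustrative policy only)."""
    for mode in AMR_MODES_KBPS:
        if mode <= budget_kbps:
            return mode
    return AMR_MODES_KBPS[-1]  # below all modes: fall back to the lowest rate

print(select_amr_mode(12.2))  # full-rate 12.2 kbps mode
print(select_amr_mode(8.0))   # falls back to 7.95 kbps
```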

6.12.2 3G-324M-Enabled Networks: Video Calling

Although the objectives of the 3GPP and 3GPP2 projects are to move toward an IMS (3GPP) and a multimedia domain system (3GPP2), both based on an IP core, for the initial implementation of the carriage of video over 3G networks the convergence between the two groups has been on the use of an agreed-upon standard, 3G-324M. Both the 3GPP and the 3GPP2 organizations have adopted the 3G-324M protocol as the means of transporting conversational video over mobile networks, e.g., 3G. 3G-324M (also known as H.324 annex C) envisages initial mobile services using 3G data bandwidth without IP infrastructure. The service uses the 64 kbps data channel, which provides an error-protected and constant-bit-rate interface to the application.

Figure 6.23: 3G-324M network.

In the mobile network, 3G-324M-based video content is carried by a single H.223 64-kbps stream that multiplexes audio, video, data, and control information. In accordance with 3G-324M, the video portion of the H.223 protocol is based on MPEG-4, whereas the audio portion is based on AMR-NB coding. The control portion of the H.223 stream is based on the H.245 protocol, which is primarily responsible for channel parameter exchange and session control. The constant-bit-rate interface simplifies the application interface, as the bit rates achieved are not dependent on the number of active users in the network for data services. 3G services in Japan were initially launched using the 3G-324M standard, i.e., without the IP multimedia system.

UMTS streaming class

The streaming class is very popular in Internet applications, as it allows the receiving client to start playing files without having to download the entire content. The receiver (e.g., a media player) maintains a small buffer, which enables continuous playout of content despite transmission packet delays and jitter. Streaming video is delivered via RTP, with the RTSP protocol providing control of the multimedia streams.

Interactive class

The interactive class is designed for user equipment applications interacting with a central server or another remote application. Examples of such applications are web browsing and database access. Round-trip delay in the interactive class of applications is important, but not as critical as in the conversational class.

Background class

The background class is meant for applications that are not delay-sensitive, such as e-mail, SMS, and MMS services. Of particular interest in UMTS are MMS messages, which can carry multiple content elements, including text, images in any format (GIF, JPEG, etc.), audio and video clips, and ringtones. MMS messages are carried per the 3GPP and WAP Forum standards.

6.13 Wi-Fi Mobile TV Delivery Extensions

Recently, interest has been growing in wireless LAN technologies, especially 802.11, also known as Wi-Fi, and 802.16e-2005 (Mobile WiMAX). The low cost of equipment and the wide availability of subscriber devices in laptops and mobile phones have led to a proliferation of Wi-Fi in homes for home networking and in the enterprise for in-building mobility. Hundreds of thousands of public hotspots have been rolled out using Wi-Fi technologies, and the number of Wi-Fi-enabled devices is estimated to be over 1 billion. Content providers can therefore provide streaming mobile TV over the Internet, delivered via Wi-Fi or WiMAX to mobile handsets in coverage areas. This avoids the use of high-tariff mobile networks and facilitates in-building use.

Before We Close: Some FAQs

1. How is content streaming tailored to the receiving device and the network? The streaming server (such as the optimized delivery server in MobiTV) can switch streams to optimize to the highest rates at which the receiver can receive the streaming.

2. What does the channel switching time in 3G depend on? Channel switching by a user depends on selecting a new channel on the mobile and then actually requesting the new stream. The time can be reduced by a client on the handset, which caches the EPG, and by the server that actually performs the switch.

3. What is Pocket Live TV? It is a website that lets one watch live TV programs from a Windows Mobile phone. The content includes channels such as CNN, BBC, Sky News, and NBC. This is a subscription-based service like MobiTV and is available in 10 countries, including the United States, Canada, and several in Europe.

4. Are there any mobile Internet sites offering free TV? One such site is FreeBe TV. It is compatible with many handsets, including the iPhone.

5. What is SHOUTcast? SHOUTcast is a free Internet radio streaming service that uses MP3. Apart from listening, any user can start his or her own Internet stream using SHOUTcast. It can be received using an Internet connection and any of the popular media players (Winamp, Windows Media, iTunes, etc.), or a SHOUTcast mini player supplied by the company.

6. In which format does YouTube deliver content to the mobile phone? YouTube delivers content in Flash; the mobile player needs a Flash Lite 3.0 player to play the content. Most S60 devices provide Flash-supported players.

7. What speeds does HSPA offer? Operators such as AT&T (United States) and Vodafone (United Kingdom) are now upgrading to HSPA with 14.4 Mbps bandwidth. They are also increasing cell capacity by reducing cell sizes and adding additional cell sites.


CHAPTER 7

Mobile TV Services in the ATSC Framework

An idea that is not dangerous is unworthy of being called an idea at all.
Oscar Wilde, The Critic as Artist (1891)

It may be difficult to believe, but it was as late as 2006 that industry leaders formed a Mobile DTV Alliance (MDTV Alliance) to "foster the growth of mobile digital TV and accelerate DVB-H deployment in North America."1 In fact, it was on January 23, 2006, just two years after DVB-H had set the NAB buzzing in 2004 and commercial trials had been completed across Europe, that Intel, Modeo, Motorola, Nokia, and Texas Instruments put their weight behind DVB-H. Digitalization in the United States was proceeding through ATSC, but it seemed to present too formidable a barrier for delivering terrestrial TV to mobiles. The scene was set to change completely at NAB 2009, when there was not only a roadmap for a countrywide rollout of ATSC Mobile DTV, but one complete with a candidate standard for mobile devices that had been proven in trials. David Rehr, president and CEO of NAB, said some very surprising words in his 2009 keynote address: "By 2012, we expect 130 million phones and 25 million media players will be able to receive mobile television. A NAB study concluded that TV broadcasters could see incremental revenue of more than $2 billion after 2012 with mobile DTV. I believe the revenue upside is probably greater than we can even imagine."2 The new initiatives were led by a number of broadcast stations along with ATSC. They unveiled a new dimension with the mobile DTV trial transmissions conducted in November 2008 simultaneously by two stations (FOX WPWR and ION Media WCPX). These field trials were conducted under the auspices of the Open Mobile Video Coalition (OMVC) and were particularly important, as they demonstrated that the existing ATSC transmission system could be modified to carry a stream of data for mobile TV services. This could be done without additional spectrum, without modifying the 8-VSB transmission format, and without affecting any of the millions of DTV or HDTV receivers in the field.
It also led the way to the adoption of a candidate standard for mobile DTV (ATSC A/153) in May 2009. OMVC member stations have scheduled a progressive upgrade to DTV stations by adding on mobile TV capabilities in major markets. Today, ATSC Mobile DTV provides one of the biggest market opportunities for the deployment of mobile DTV services.

1. http://www.dvb.org/news_events/news/archive/mdtv_alliance_formed/index.xml
2. http://www.nabshow.com/2009/newsroom/keynote.asp

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00007-2

7.1 Introduction: Digital Broadcasting to Handhelds and Mobile Devices

The digitalization of TV networks now provides an opportunity to reach a wide range of mobile and handheld devices, with broadcasters making only incremental investments by adding on ATSC Mobile DTV capabilities. Transmission to mobile devices has many dimensions well beyond those possible with standard DTV receivers. These new dimensions arise from a range of interactive capabilities that can be used with mobile devices. Devices that conform to 3GPP, for example, allow developers to provide applications in SVG-T, Dynamic and Interactive Multimedia Scenes (DIMS, a 3GPP standard), ECMAScript Mobile Profile (an OMA standard), or the OMA Rich Media Environment (OMA-RME), which are parts of the ATSC Mobile DTV standard.

Mobile devices can cover a wide range, from pocket TVs, mobile phones with ATSC Mobile DTV tuners, portable media players, and navigation devices to vehicle-mounted receivers. Due to the small screen size, a typical TV or video program can be delivered in around 600 Kbps of capacity, making it possible to deliver multiple channels on a single DTV carrier. The services can be delivered in the clear or encrypted, and may be supported by a back channel. The applications can also span a wide range, incorporating not only linear broadcasting but also local channels with weather information, traffic, and so on, overlaid with interactive rich media content, graphics, and games. GPS and location-based services can be integrated in mobile devices that display such content. Past experience has shown that devices combining location-based services and mobile TV have been well accepted in the markets in the initial rollouts of mobile TV services.

7.2 Why ATSC Mobile DTV?

Providing broadcast services to mobile handhelds is always a great challenge, due to a number of factors that we discussed briefly in Chapter 5. Two basic issues are of prime importance. First, mobile devices have low-gain antennas and are in motion; consequently, they receive a significantly lower and more sharply varying signal than a stationary terrestrial receiver with an external antenna. Second, mobile devices are power-constrained and need to preserve battery usage time through techniques that allow a receiver to be powered on for only a fraction of time in each frame of data.

In the DVB framework, the first issue, the need for better signal resilience, was addressed by providing an additional level of forward error correction and convolutional coding, applied to the encoded signals for mobile broadcasting (audio and video encoded in H.264) in the multiprotocol encapsulator (MPE) before transmission on a common MPEG-2 transport stream carrying DTV and HDTV as well as mobile TV. The second issue, conservation of battery power, was addressed by using a time slicing technique. The number of subcarriers for DVB-H was also modified to allow for Doppler shifts arising from motion.

A similar set of techniques has been followed in ATSC to cater to mobile devices receiving the transmissions. These transmissions are carried by a separate mobile DTV stream within the ATSC MPEG-2 transport stream, which is subjected to a high level of forward error, convolutional, and RS coding to get up to 13 dB of system gain over a standard ATSC transmission. The high degree of resilience built in for mobile TV streams permits a high degree of immunity to level variations, dropouts, and burst noise, and obviates the need for a diversity antenna. Also, the mobile DTV data streams are provided with a time slicing scheme that allows a receiver to be dormant for a major part of the cycle. The 8-VSB transmission is left unaffected, so that standard receivers are unaffected by the addition of the mobile DTV capability to the transmission system.

ATSC Mobile DTV follows H.264 (MPEG-4/AVC) encoding for video, making it highly efficient and carrying a payload of over 1 Mbps (without error coding) in a 3 Mbps bit rate in the transport stream. This allows multiple channels of 300–600 Kbps to be placed in the mobile DTV stream. Typically, a 6 MHz ATSC slot can carry a minimum of 1 and a maximum of 16 mobile channels, based on the number of other services carried in the 19.39 Mbps payload of ATSC. The ATSC Mobile DTV standard provides for an IP-based mobile payload and both streaming- and file-delivery-based content.
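The channel arithmetic above can be turned into a quick estimate. The roughly 1 Mbps of useful payload per 3 Mbps of coded mobile stream and the 300–600 kbps per-channel rates are the chapter's planning figures; the reserved M/H share used in the example is hypothetical.

```python
# Planning figure from the text: the heavy error coding leaves roughly
# 1 Mbps of useful payload for every 3 Mbps of mobile DTV stream.
MH_CODING_OVERHEAD = 3.0

def mobile_channel_count(mh_stream_bps: float, channel_bps: float) -> int:
    """Estimate how many mobile channels fit in the M/H share of the stream."""
    useful_bps = mh_stream_bps / MH_CODING_OVERHEAD  # strip the coding overhead
    return int(useful_bps // channel_bps)

# Hypothetically reserving 5.4 Mbps of the 19.39 Mbps ATSC payload for M/H:
print(mobile_channel_count(5.4e6, 600e3))  # 3 channels at 600 kbps
print(mobile_channel_count(5.4e6, 300e3))  # 6 channels at 300 kbps
```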

7.3 The Open Mobile Video Coalition (OMVC)

The OMVC is an alliance of U.S. public and commercial broadcasters (with more than 800 member stations) whose primary objective is to accelerate the development and rollout of mobile DTV products and services. Such an alliance was indeed a necessity in a world of multiple standards, with attendant difficulties in getting uniform implementations and receivers in the market. It is interesting to recall that no less than 10 ways of implementing mobile TV were proposed to ATSC for incorporation in a mobile DTV standard. The integration of these proposals, the trials, and the adoption of the standard for ATSC Mobile DTV services as ATSC A/153 are in large part due to the efforts of the OMVC. The OMVC also ensures that its alliance member broadcasters will adopt the standard in transmitter setups, leading to its availability in a large number of markets. OMVC claims, by virtue of its alliance member stations, a reach of over 100 million households.

224

Chapter 7

7.4 Technology of ATSC Mobile DTV

7.4.1 What Is ATSC Mobile DTV?

ATSC Mobile DTV is the standard adopted by the Advanced Television Systems Committee (A/153) to enable broadcasts to mobile handsets. The key elements of the ATSC Mobile DTV standard are:

● Backward compatibility with existing ATSC transmissions: This ensures that no changes are required to existing receivers when mobile broadcasting is added by the broadcaster. Broadcasters also continue to use the existing transmitter network.
● Use of existing spectrum: The broadcast takes place in the same spectrum slot as that used for carriage of an existing DTV transmission. This is done by using an enhancement to the physical layer and an Open Mobile Alliance (OMA)–based enhancement to the application layer.
● Enabling interactivity: The use of the Open Mobile Alliance Rich Media Environment (OMA-RME) enables a range of standardized interactive applications such as voting or polling, mCommerce, interactive advertising, real-time audience measurement, and so on.

The OMVC has targeted four categories of mobile devices that will be able to receive content broadcast using ATSC transmissions: cellphones, laptops, portable media players, and in-vehicle receivers. The devices can be mobile at pedestrian speeds, or at vehicular speeds using in-vehicle receivers.

7.4.2 How Does ATSC Mobile DTV Work?

ATSC Mobile DTV works by setting aside a small part of the 19.39 Mbps payload in the ATSC transmission stream for "ancillary video." The ancillary video stream then carries mobile video that has been coded for additional resilience in reception. The following are the steps involved in the conversion of an ATSC station to include mobile DTV capabilities:

1. ATSC Mobile DTV multiplexer: An ATSC Mobile DTV multiplexer accepts the ATSC standard stream as an ASI input. This stream carries the MPEG-2 encoded video, audio, data, and PSIP (program and system information protocol) per ATSC standard A/53. It also accepts the ATSC Mobile DTV data stream, encoded using MPEG-4 per the A/153 standard, through an IP interface (mostly Ethernet). The multiplexer then generates an A/153-compliant stream after processing the two input streams. The ATSC Mobile DTV multiplexed stream is now ready to be delivered to an M/H modulator and exciter.

2. ATSC Mobile DTV modulator: The ATSC modulator (or exciter) generates an 8-VSB IF signal for delivery to the upconverter. The ATSC Mobile DTV modulator is designed to work in either a single-transmitter or a distributed transmitter system (DTS) (slave mode). Where a DTS is used, a digital transmission adapter (DTA) adds timing and synchronization information to the transport stream. (The specifications and mode of functioning of a DTS are per ATSC A/110.) The M/H modulators then use this information to delay transmissions so that they are time-synchronous, making all the distributed transmitters function as an SFN. Finally, the modulator generates an IF frequency, which is then upconverted by an upconverter (which may be a part of the exciter/modulator) to generate an RF signal to be handled by the transmitter.

3. Transmitter: The ATSC Mobile DTV system utilizes the standard ATSC transmitters. This means that there is no change in the existing transmitters that are upgraded with M/H capabilities.

Figure 7.1: ATSC payload of 19.39 Mbps split to carry Ancillary Video for M/H services.

7.5 The ATSC Mobile DTV Standard The Mobile DTV standards are being developed in the ATSC under the Technology and Standards Group (TSG). The Subgroup S4 (TSG/S4) is the specialist group for ATSC mobile/ handheld (M/H) Mobile TV. The TSG had decided to upgrade the draft recommendations

226

Chapter 7

Figure 7.2: Adding M/H capabilities to an existing station.

A/153 to "ATSC M/H candidate standard" status in December 2008; these were subsequently approved by the ATSC in May 2009. In October 2009 the standard was formally adopted as the ATSC Mobile DTV standard, replacing the earlier "M/H" nomenclature. The term M/H nevertheless continues to occur in equipment specifications and other literature; ATSC M/H should be read to mean "ATSC Mobile DTV," in line with the new nomenclature adopted by the ATSC. The recommendations for the ATSC Mobile DTV (A/153) standard have been issued in eight parts as follows:

Part 1—Mobile/Handheld Digital Television System
Part 2—RF/Transmission System Characteristics
Part 3—Service Multiplex and Transport Subsystem Characteristics
Part 4—Announcement
Part 5—Presentation Framework
Part 6—Service Protection
Part 7—Video System Characteristics
Part 8—Audio System Characteristics

These parts of the standard contain detailed requirements to be met by chip manufacturers, software designers, and operators to ensure uniform implementation and flawless interworking among different sets of vendors for equipment, receivers, and applications. ATSC has also formally defined the ATSC Mobile DTV transmission system, as well as its relation to the standard ATSC transmission components, in a reference diagram (Figure 7.3).

Figure 7.3: ATSC transmission system with main transport stream and M/H services stream.

7.6 ATSC Frame Structure with Mobile Channels

In order to understand how the ATSC transmissions at 19.392 Mbps using 8-VSB modulation accommodate the ATSC Mobile DTV payload, it is first desirable to review the ATSC transmission in brief. The frame structure needs to remain unchanged while carrying the mobile DTV channels, so that traditional ATSC receivers can continue receiving services without any changes.

7.6.1 The ATSC Frame Structure

The 6 MHz transmission slot of ATSC permits a symbol rate of 10.762 Msymbols per second. This is based on an occupied bandwidth of 5.381 MHz. The symbol rate is therefore


2 × 5.381 = 10.762 MSym/s. Using 8-VSB gives a capacity of 3 bits/symbol, thus providing a raw data rate of 10.762 × 3 = 32.286 Mbps. It may be recalled that ATSC transmission involves the use of an outer RS coder (188/208) and an inner trellis coder (rate 2/3), in addition to a data randomizer (see Figure 7.4). The error resilience in ATSC is provided by the data randomization, which is done on each MPEG packet of 187 bytes; the inner trellis coder, which adds an overhead of 1 byte for every 2 bytes; and finally the RS coder, which provides further error correction to the trellis-coded and randomized data. The MPEG-2 transport stream frame of 188 bytes, after adding the error-correcting code bytes in the RS coder and the inner trellis coder, is increased to 312 bytes, and this forms the basis of a segment in ATSC prior to 8-VSB modulation. One segment contains 312 bytes of a complete MPEG-2 TS packet and has a duration of 77.3 μs. The framing for ATSC is done by grouping 312 such data segments in one field, which, together with an added "field sync" segment, has 313 segments. The duration of a field is 77.3 μs × 313 = 24.2 ms.

Figure 7.4: Coding and data rates in ATSC.


Figure 7.5: Frame structure in ATSC.

The useful data rate at the respective outputs is as follows:

Symbol rate: 10.762 MSym/s
Raw data rate: 10.762 × 3 = 32.286 Mbps
Trellis coder: rate 2/3 coder and one sync segment out of 313 (312/313), giving a data rate of 32.286 Mbps × (2/3) × (312/313) = 21.455 Mbps
RS coder (188/208) data rate: 21.455 × 188/208 = 19.392 Mbps (ATSC payload or useful data rate)

For 8-VSB modulation, the data needs to be transmitted in the form of symbols, each of which carries 3 bits. One segment, containing 312 bytes (2496 bits), therefore requires 2496/3 = 832 symbols. The transmission for each segment has a segment sync of 4 symbols, leaving 828 symbols for data.
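As a check on the arithmetic above, the full rate chain can be reproduced in a few lines (a sketch; the figures are the ones quoted in this section):

```python
# ATSC payload data-rate chain (figures as quoted in this section).
OCCUPIED_BW_MHZ = 5.381      # occupied bandwidth within the 6 MHz channel
BITS_PER_SYMBOL = 3          # 8-VSB carries 3 bits per symbol

symbol_rate = 2 * OCCUPIED_BW_MHZ             # 10.762 MSym/s
raw_rate = symbol_rate * BITS_PER_SYMBOL      # 32.286 Mbps

# Rate-2/3 trellis coder, plus one field-sync segment out of every 313
after_trellis = raw_rate * (2 / 3) * (312 / 313)   # about 21.455 Mbps

# Reed-Solomon outer coder: 188 payload bytes out of every 208
payload = after_trellis * (188 / 208)              # about 19.392 Mbps

print(f"{symbol_rate:.3f} MSym/s -> payload {payload:.3f} Mbps")
```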

7.6.2 ATSC Mobile DTV Addition in ATSC

As shown in Figure 7.1, the ATSC Mobile DTV system handles data from two sources: the regular ATSC MPEG-2 TS (containing the main services) and the mobile DTV data stream.


Figure 7.6: Transmission of a segment using 8-VSB.

The ATSC Mobile DTV transmission system is required to combine these two streams and generate a new set of MPEG-2 transport packets of 188 bytes each, which can then enter the RS outer encoder followed by the trellis inner encoder, which form part of the regular ATSC system. The M/H service multiplex carries out these functions and generates packets that can enter the ATSC RS coder in a manner identical to those from the main service multiplex. The 188-byte MPEG-2 TS packets that are generated by the M/H multiplex are called "ensembles" and may carry more than one service. Each ensemble uses an independent RS frame. The services within one ensemble can be coded with a different level of error protection from those in another ensemble. As shown in Figure 7.7, the functions specific to the M/H stream are handled in a preprocessor, which generates packets for multiplexing in the RS coder. The packets generated by the M/H preprocessor are called MHE (M/H encapsulated) packets. The M/H preprocessor implements an independent frame structure for M/H services and includes a fast signalling channel. The data from the preprocessor is then combined with the main ATSC data in a postprocessor that handles the RS coding, trellis coding, randomization, and interleaving functions as required for ATSC transmission. As the multiplexing of the MHE


Figure 7.7: A preprocessor handles the M/H service multiplex in ATSC Mobile DTV.

packets in the main ATSC transmission results in a change in the timings at which the ATSC main service packets are transmitted, a packet timing and PCR adjustment is carried out on the stream originating from the ATSC multiplex. This timing readjustment would not have been required had the additional MHE packets not been introduced. An M/H service in the ATSC Mobile DTV scheme is similar to a virtual channel in ATSC, and this enables receivers that do not recognize the presence of M/H data to function normally, as they would in an ATSC system.

7.6.3 M/H Frames at the 8-VSB Transmission Level

A frame structure for the mobile DTV channels, independent of the ATSC frame structure, is required in order to indicate to the M/H receiver where to expect the data and signalling information for the various channels. For this purpose, an M/H framing structure has been included in ATSC A/153 part 2, which covers the physical layer, including the RF. The M/H framing is thus carried out in accordance with the scheme laid down in this recommendation. The slots where the M/H information may be transmitted have also been fixed, in order to preserve the "burst nature" of the M/H data, which


enables the receiver to be active only during the short period when the burst transmission is taking place. The key to the M/H multiplexing structure is the M/H signalling channel, which provides information on how the data is partitioned into "ensembles" to be processed by the RS coders. In addition to the signalling channel, there is also a "Fast Information Channel" (FIC) that provides information for quick acquisition of services. As shown in Figure 7.5, one VSB data frame consists of two data fields, each of 24.2 ms duration. The data frame (VSB frame) thus has a duration of 48.4 ms. The field sync of the first data field is used by all receivers, including legacy receivers, to synchronize. The rest of the data field consists of 312 segments, each of which contains information equivalent to one MPEG-2 TS packet (188 bytes) plus the FEC bytes. One M/H frame is defined as spanning 20 VSB frames; the M/H frame thus has a duration of 20 × 48.4 = 968 ms. Each M/H frame is subdivided into five consecutive subframes, so each subframe has four VSB frames (eight VSB fields). Each subframe is further subdivided into 16 M/H slots. Each M/H slot is a quarter of a VSB frame, or half of a VSB field. As a VSB field has 312 segments (equivalent to MPEG-2 TS packets), an M/H slot, being half a field, is equivalent to 156 TS packets. The duration of an M/H slot, half of a VSB field, is 12.1 ms. The M/H slot is the basic unit for multiplexing of the M/H data and the ATSC main services data. The M/H services, after preprocessing, are in the form of M/H-encapsulated (MHE) packets. These packets are organized in groups of 118 consecutive MHE packets. Data is then transmitted in M/H slots (capacity of 156 TS packets). An M/H slot can thus carry 118 MHE packets, with the balance being main ATSC channel packets, or it may carry only packets originating from the ATSC main services.
The recommendations of A/153 further define the offset of the M/H slots in the VSB frames, which interested readers may refer to for additional information.
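The framing hierarchy described above reduces to simple arithmetic; a short sketch using the figures from this section:

```python
# M/H framing hierarchy (figures as quoted in this section).
VSB_FIELD_MS = 24.2                 # one VSB data field (313 segments)
VSB_FRAME_MS = 2 * VSB_FIELD_MS     # one VSB frame = 2 fields = 48.4 ms

mh_frame_ms = 20 * VSB_FRAME_MS     # an M/H frame spans 20 VSB frames: 968 ms
subframe_ms = mh_frame_ms / 5       # 5 subframes per M/H frame: 193.6 ms
slot_ms = subframe_ms / 16          # 16 M/H slots per subframe: 12.1 ms

SEGMENTS_PER_FIELD = 312                    # data segments (one TS packet each)
slot_ts_packets = SEGMENTS_PER_FIELD // 2   # a slot is half a field: 156 packets

print(mh_frame_ms, subframe_ms, slot_ms, slot_ts_packets)
```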

7.6.4 Allocation of Capacity for Mobile TV Channels in an ATSC System

As shown in Figure 7.8, the basic transmission unit that the physical layer provides to higher layers is the "M/H slot," which consists of capacity equivalent to 156 TS packets, or half of a VSB field. One M/H slot takes 12.1 ms to transmit.

M/H group
While configuring the service, either all the M/H slots may be used to carry mobile TV, in which case the ATSC service is dedicated to mobile TV, or, more commonly, only a few slots are used to carry mobile TV channels from the M/H multiplex, while the other slots are used for main ATSC services. An M/H slot, which consists of 156 packets, may carry either all packets for the ATSC main services or packets for the M/H multiplex. As mentioned earlier, an M/H slot of 12.1 ms carries 118 MHE TS packets in consecutive positions (which take 9.2 ms to transmit) and 38 TS packets (2.9 ms) from the main ATSC stream.


Figure 7.8: M/H frame structure.

Such a slot is called an M/H group and carries 118 TS packets for the M/H service (9.2 ms) and 38 packets for the ATSC main service (2.9 ms). The data rate for an M/H service carried by an M/H group is (9.2 ms / 193.6 ms) × 19.39 Mbps = 0.92 Mbps, where 19.39 Mbps is the total payload of the ATSC multiplex and 9.2/193.6 represents the fraction of time allocated to an M/H service in an M/H subframe of 193.6 ms. The division of capacity between the main ATSC services and the M/H services is thus quite straightforward. If five M/H groups are allocated to the mobile TV multiplex, the capacity calculations are as follows:

Data rate carried by M/H groups = 0.92 × 5 = 4.6 Mbps

The capacity allocated to main ATSC services is then 19.39 − 4.6 = 14.7 Mbps. In practice, due to framing overheads, these capacities are 4.58 and 13.9 Mbps. M/H services can be added in blocks of 5 M/H groups (one of each goes into one of the five subframes to become part of the parade; one parade contains at least 5 M/H groups, or a multiple thereof). Three alternative scenarios of 5, 10, and 15 M/H groups are represented in Figure 7.10.
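Following the per-group arithmetic in this section, the split of transport payload between M/H and main services can be sketched as below (the helper function is illustrative, not part of any standard):

```python
# Split of the 19.39 Mbps ATSC transport payload between M/H and main
# services, following the per-group arithmetic quoted in this section.
ATSC_PAYLOAD_MBPS = 19.39
SUBFRAME_MS = 193.6
GROUP_MS = 9.2           # air time of the 118 MHE packets of one M/H group

per_group_mbps = (GROUP_MS / SUBFRAME_MS) * ATSC_PAYLOAD_MBPS   # ~0.92 Mbps

def capacity_split(num_groups):
    """Transport capacity (M/H Mbps, remaining main-service Mbps).
    Illustrative helper; ignores the framing overheads noted in the text."""
    mh = per_group_mbps * num_groups
    return mh, ATSC_PAYLOAD_MBPS - mh

print(capacity_split(5))   # about (4.6, 14.8); the text rounds to 4.6 and 14.7
```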


Figure 7.9: System capacities for M/H parade in ATSC Mobile DTV multiplex.

Figure 7.10: Three scenarios for an ATSC transmission with 5, 10, and 15 M/H groups per M/H frame dedicated to M/H services.

The figure, which shows the ATSC transport stream payload, cannot, however, be directly converted to channel capacities for mobile services. This is because of the overhead added to the M/H stream by turbo coding. In the previous example, with 13.8 Mbps of transport capacity allocated to mobile TV, eight channels of 630 Kbps each (about 5 Mbps in total) can be delivered within the 13.8 Mbps carried by the M/H ensembles. At the minimum, with 4.6 Mbps allocated to M/H services, two or three channels can be supported.

7.6.5 Service Multiplex and IP Encapsulation

The primary function of the M/H service multiplex is to deliver files or data using the M/H broadcast stream. The processing of M/H content (including its framing) prior to its introduction into an ATSC Mobile DTV multiplexer is given in ATSC standard A/153 part 3. As there are multiple content types, data formats, and delivery parameters possible, it is necessary to have an M/H signaling channel, which is also a part of the M/H service multiplex. The service multiplex also assists the functioning of the time-slicing mechanism at the physical layer. The M/H service multiplex presents a stream to the ATSC M/H multiplex, which then treats the M/H data as a virtual channel. The data for an M/H service is packaged in consecutive RS frames. The data originates from an audio or video stream and is put in the form of an IP datagram by adding the RTP, UDP, and IP headers at the respective protocol layers. This datagram is the source data, which is then carried in an RS frame. These RS frames are logically termed "ensembles." The ensembles are logical pipes that carry IP datagrams. One or two such ensembles are further grouped into units called "parades" (Figure 7.11). One parade contains at least 5 M/H groups, or a multiple thereof. These parades therefore carry data that originates from one or two ensembles. The receiver needs to be able to decode these parades in order to obtain the data for a particular service. The M/H groups belonging to a parade are equally divided amongst the 5 M/H subframes belonging to an M/H frame. Hence, if a parade has 5 M/H groups, these will be carried one each in five consecutive subframes.
As briefly mentioned in the section on the technology of ATSC Mobile DTV, the content for mobile DTV after encoding (H.264 for video and HE-AAC v2 for audio) is prepared for transmission on an existing ATSC transmission system (designed for service to fixed receivers with external antennas) by the addition of error-correcting codes.

Time-sliced transmission of M/H data
Figure 7.11 shows the M/H slots, each of which has a capacity of 156 TS packets, being used to transmit M/H data in the form of parades. The receiving device needs to synchronize itself to receive one of the many parades being transmitted. If a parade has 5 M/H groups, and they occur one in each subframe, it is obvious that the receiver needs to wake up only once in


each subframe. It is thus able to switch on power only during a relatively small duration of the overall cycle. For example, in an M/H frame cycle of 968 ms, the power may be on for only 59.2 ms or about 50 ms, depending on the bandwidth of the service. This is about 6% of the total time.
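The duty-cycle figure can be verified directly (a sketch; the 59.2 ms wake time is the example value quoted above):

```python
# Receiver power duty cycle under time slicing (figures from this section).
MH_FRAME_MS = 968.0     # one M/H frame cycle
AWAKE_MS = 59.2         # example receiver-on time per M/H frame

duty_cycle = AWAKE_MS / MH_FRAME_MS
print(f"receiver active {duty_cycle:.1%} of the time")   # about 6%
```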

Figure 7.11: Time-slicing mechanism in ATSC Mobile DTV.

Quick FAQs: ATSC Mobile DTV

1. Does ATSC support hierarchical modulation?
No. ATSC, which uses 8-VSB, does not support hierarchical modulation.

2. What is the mechanism of SFN synchronization in ATSC?
SFN synchronization is a characteristic of OFDM-based systems. As ATSC uses 8-VSB transmission, an equivalent distribution environment is achieved using the distributed transmission system (DTS). The primary purpose of the DTS is to ensure that all transmitters are time-synchronized and do not lead or lag in the timing of their transmissions. For this purpose, a distribution transmission adapter is inserted before the studio-transmitter link (STL). The DTA adds the necessary timing and synchronization information to the transport stream sent to all transmitters in the network. The modulator of each transmitter then acts as a DTS slave to delay the transmission per the timing requirements.

3. How do mobile TV transmitter networks differ from those for terrestrial broadcasting?
For better indoor reception, as well as to cover fringe areas, it is common to use a larger number of lower-power transmitters rather than a single large high-power transmitter.

4. Is the emergency alert system (EAS) available on mobile TV using ATSC Mobile DTV?
Yes, emergency alerts can be provided using technology from iSET media Pte Ltd. (http://iset-dtv.com). The company provides the automatic disaster alert system (ADAS). The emergency information is typically inserted in the ATSC multiplexers and pops up on the mobile devices irrespective of the channel or content being watched.

7.7 Content Types, Encoding, and Capacity

The ATSC Mobile DTV system can carry audio, video, scalable vector graphics (SVG), and ancillary data (for closed-caption subtitling, teletext, and so on). The A/153 recommendations specify how each of these elements should be handled prior to encoding, in order to ensure uniform encoding and decoding of the signals.

7.7.1 Video

Video in M/H is encoded per the H.264/AVC standard. The video format used in M/H has the 16:9 aspect ratio. As the source video may originate in one of many different formats, the following specifications have been laid down to preprocess video prior to encoding:

● High-definition (1920×1080i) video is cropped by 48 pixels (24 pixels each on the left and right) to create an image of 1872×1080i. This is then de-interlaced and resampled to a uniform 416×240 progressive format.
● High-definition (1280×720p) video is cropped by 16 pixels each on the left and right sides and resized to 1248×720. This is then resampled to 416×240.
● Standard-definition (480i) and enhanced-definition (480p) video having an aspect ratio of 16:9 (i.e., 720 pixels per line) is cropped by 8 pixels each on the left and right sides to produce an image of size 704×480, which is then resampled to the 416×240 format (after de-interlacing for the 480i format).
● In the case of SD video with a 4:3 aspect ratio, the picture is first scaled to 16:9 and then converted to the 416×240 format.

The video in the encoding process is constrained to the allowable parameters as specified in the AVC Baseline Profile 1.3. This results in a maximum encoding rate of 768 Kbps. After compression, the video elementary stream is carried in RTP packets (NAL units as per ISO/IEC 14496-10). The output of the encoder is thus in packetized IP format (RTP packets carried using UDP).
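The cropping rules above can be expressed as a small sketch (the `preprocess` helper and its lookup table are illustrative; de-interlacing and scaling details are omitted):

```python
# Preprocessing of source video for M/H: crop left/right edges, then
# resample to 416x240 (crop amounts per side are from this section; the
# helper is illustrative and omits de-interlacing/scaling details).
TARGET = (416, 240)

CROP_PER_SIDE = {        # (width, height) -> pixels cropped from each side
    (1920, 1080): 24,
    (1280, 720): 16,
    (720, 480): 8,
}

def preprocess(width, height):
    crop = CROP_PER_SIDE[(width, height)]
    cropped = (width - 2 * crop, height)   # image after edge cropping
    return cropped, TARGET                 # resampling then yields TARGET

print(preprocess(1920, 1080))   # ((1872, 1080), (416, 240))
```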


Figure 7.12: Audio and video formats for ATSC Mobile DTV.

Quick Facts: Video and Audio Characteristics for ATSC Mobile DTV

Video Characteristics
● Aspect ratio: 16:9
● Resolution: 416×240
● Source video: HD (1080i or 720p), ED (480p), or SD (480i)
● AVC compression: H.264 Baseline Profile 1.3
● Closed captioning: CEA-708 format

Audio Characteristics
● Audio source: Analog or AES digital
● Audio format: Stereo or mono, with support for parametric surround
● Audio encoding: HE-AAC v2.0
● Sampling frequency: 16, 22.05, and 24 kHz with SBR (32, 44.1, and 48 kHz without SBR)

7.7.2 Audio Format

Audio in ATSC Mobile DTV is compressed per MPEG-4 HE-AAC v2 (High-Efficiency Advanced Audio Coding, version 2.0). The audio coding thus includes the HE-AAC and AAC coding formats, which are a part of version 2.0. Spectral band replication (SBR) and parametric surround are also supported.


Figure 7.13: Audio coding formats for ATSC Mobile DTV.

The use of high-efficiency encoding can, for example, carry 24 kHz sampled audio within 144 Kbps using SBR (96 Kbps with 16 kHz sampling). The audio packets after encoding are carried in RTP packets over UDP. The function of converting to IP packets is done by an IP encapsulator. An example of an encoder that can accept audio and video in different formats (including decoding from MPEG-2 TS or IP decapsulation), perform video preprocessing, and provide encoded output in different formats (suitable for multiple types of mobile TV networks) is given in Figure 7.14.

Figure 7.14: Encoding content for ATSC Mobile DTV—an example.


It is now becoming common for manufacturers such as Envivio to offer multiformat encoders whereby the output profiles can be selected by the user from a pool of HDTV, SDTV, IPTV, 3G mobile, and mobile TV based on M/H or DVB-H, amongst others.

7.8 Multiplexing of M/H Channels

The M/H packets, after encoding and creation of an M/H service multiplex (per the A/153 standard), now need to be multiplexed with the main ATSC multiplex stream (A/53 standard). This function is performed in an ATSC Mobile DTV multiplexer, which typically has one port each for the ATSC main and M/H streams. Typically, the ATSC main multiplex signals are in ASI format, the format used in existing installations. The M/H signals are delivered via an Ethernet port as IP-encapsulated packets. The output of the multiplexer, which now contains the M/H and the ATSC main transmission together in a single ASI stream (compliant with A/153), is delivered to the postprocessor/modulator. The multiplexer can feed either a single transmitter or a network of transmitters in a DTS.

7.8.1 M/H Signaling Channel

The framing of ATSC Mobile DTV in the service multiplexer adds an M/H signalling channel as a key component of delivery to M/H receivers. The signalling channel is an IP multicast

Figure 7.15: ATSC Mobile DTV multiplexer.

Mobile TV Services in the ATSC Framework 241 stream and is carried in each ensemble. It has the service map table (SMT-MH), guide access table (GAT-MH), cell information table (providing frequency information on adjacent transmitters to enable roaming), service labelling table (carrying only the service IDs of all channels for fast access, for a device that has just entered the area), and a regional rating table (to provide information on any content advisory rating system in use).

7.9 Upgrading Transmitters for Mobile Services

The ATSC Mobile DTV standards have been designed to minimize the changes to an existing transmission setup when introducing mobile TV. The following changes are required (refer to Figure 7.2):

(i) An ATSC Mobile DTV multiplexer (including an ATSC Mobile DTV preprocessor) needs to be added. This multiplexer accepts input from the existing ATSC multiplexer as well as from the encoders for mobile content, and combines these into an ATSC transmit stream compliant with the A/153 standard. Considering the standards of equipment in service, the ATSC Mobile DTV multiplexer accepts an ASI input from the existing ATSC mux (for fixed services) and an IP input from the encoders.
(ii) The ATSC modulator needs to be replaced to make it compliant with the distributed transmission system. In some cases, a DTS may already exist, or it may be possible to upgrade the firmware to meet the M/H requirements.
(iii) The mobile TV programs need an ESG, as the ATSC PSIP system takes care of the program information only for the fixed TV channels. It is usual to add an ESG generator (such as that from EXPWAY) along with the ATSC Mobile DTV multiplexer to provide the electronic service guide.

7.10 ATSC Mobile DTV Transmission

The transmission of mobile TV signals as part of an ATSC transmission does not require any change in existing transmitters. This is a great advantage, as DTV transmitters of up to 100 KW ERP exist in the regular UHF band, providing extensive coverage. However, on a more practical plane, the requirements for delivery of mobile TV signals are different from those for a fixed TV receiver. A fixed TV receiver can support higher-gain antennas; mobile receivers not only have lower-gain antennas but are also used in indoor areas. Despite the additional error resilience built into mobile TV through error coding, the transmission requirements for mobile TV are much better met by a larger number of relatively low-powered transmitters than by one or two high-powered transmitters covering a city. A DTS with a single transmission center (studio and ATSC multiplex) is thus the preferred approach in an ATSC Mobile DTV environment. DTS works by adding a DTA to the transmission


path prior to the studio–transmitter link. The DTA adds timing and synchronization information, which is sent to all the transmitters via the STL. The ATSC Mobile DTV modulator in this setup, operating in DTS slave mode, analyzes the time stamp and, using a GPS clock, delays the signals so that all transmissions happen in a time-synchronous manner. DTS is a standard feature in ATSC systems, with specifications given in ATSC A/110. Examples of modulators that operate in a DTS slave configuration are the Axciter™ from Axcera and the Apex M2X™ from Harris.

7.11 ATSC Transmitter Networks

We are quite used to the tall TV towers used for NTSC (and now ATSC) transmissions. These frequently loom over a city and provide coverage over areas as large as 50 km in radius. (The actual distance depends on the antenna, the terrain, and the power transmitted.) For fixed ATSC receivers, these single transmitters work fine with fixed outdoor or indoor antennas. They are also fine when mobiles are within the area of coverage and used outdoors in line of sight. Covering indoor and shadow areas, however, where the use of mobiles is most common, requires transmitter networks. Such transmitters need to operate as a single-frequency network (SFN). However, an SFN is a characteristic of OFDM systems; ATSC uses 8-VSB transmission. In ATSC systems, the DTS is in some sense the equivalent of an SFN in OFDM. In SFN transmitters, the primary requirement is that the signals from all transmitters be time-synchronous. Similar objectives are achieved in ATSC by the use of a DTS. A DTS is not a requirement unique to mobile TV; however, as mobile TV transmission requires a larger number of low-power transmitters, the requirement becomes more important. The FCC, in its Report and Order (08-256) issued in November 2008, provides authorization for DTS and lays down the manner in which it can be used. Specifically, the FCC has adopted a "Table of Distances," which defines the area within which an authorized station's transmissions (i.e., DTV transmissions) must be contained within specific limits of signal power. When a DTS system is used, the emissions of each distributed transmitter must be contained within the same authorized service area given by the table.

Table 7.1: Table of Distances as Provided by the FCC.

Channel   Zone         F(50,90) Field Strength   Distance from Reference Point
2-6       1            28 dBu                    108 km
2-6       2 and 3      28 dBu                    128 km
7-13      1            36 dBu                    101 km
7-13      2 and 3      36 dBu                    123 km
14-51     1, 2 and 3   41 dBu                    103 km

In addition, the DTS transmitters must cover the entire service area that a single DTV transmitter would have covered. Hence "cherry-picking," or leaving out some areas from the DTS stations' coverage, is not permitted within the Table of Distances area. The DTS stations also do not need to be separately licensed and are covered by a single construction permit and license. The DTS stations need to be synchronized in accordance with the ATSC standard; no additional synchronization requirements have been laid down by the FCC in this regard.

7.11.1 Digital On-Channel Repeaters (DOCR)

There is considerable complexity involved in setting up a DTS. In addition, the individual transmitters must be connected using an STL link. As an alternative, digital repeaters, which simply receive and boost the RF signal, retransmitting it at low power, can be used as cost-effective gap fillers. However, care is still required, as their signals are slightly time-shifted compared to the original transmission. The turnaround can be at RF (with very small delay), IF (1–2 μs delay), or baseband, where, if signals are regenerated, the delay may be several ms, making these useful only for areas not served by the main transmitter.

Quick FAQs: ATSC Mobile DTV

1. Does the M/H multiplexing support ATSC data transmission?
Yes; in the M/H multiplex, the unused slots can be filled with data transmission for the main ATSC data services. Data transmission on ATSC is provided as a synchronous data stream using digital storage media command and control (DSM-CC). This is identified in MPEG-2 by its PCR_PID.

2. What is the difference between ATSC RF signals and those from a DVB-T transmitter?
ATSC uses 8-VSB, which is a single-carrier technology, as compared to COFDM, which uses multiple carriers (2K to 8K carriers).

3. How do receivers deal with multiple transmitters in a DTS environment in ATSC?
The receivers use adaptive equalizers. These treat the signals from different transmitters as echoes of the main signal and extract the combined signal. The receivers have a time window within which such echoes can be equalized. Reception is improved in M/H receivers by the M/H training sequences.

4. What is the accuracy of synchronization required of distributed SFN transmitters?
The ATSC A/111 specifications require the transmitters to be synchronized to within 0.5 Hz.


5. When mobile devices are equipped with ATSC receivers, are they not likely to get damaged in areas near the transmitter?
The gain of mobile antennas is very small; in addition, the tuner provides 20–50 dB rejection of adjacent channels per the recommendations on receiver design.

6. Can M/H receivers also receive the main ATSC transmissions?
This is a handset feature. Handsets with ATSC reception capability are already available, and M/H receivers may support main ATSC reception.

7. Will a change be required in the M/H part of the ATSC system when it moves to AVC encoding?
No, the M/H multiplex and M/H receivers are not affected by the encoding standard used in the ATSC main stream.

7.12 Receivers and Handheld Units

Experience with the use of mobile TV in Korea, Japan, and Europe has shown that standalone receivers, auto receivers, personal media players, and portable navigation devices feature high in the list of initial deployments. The initial trials of ATSC Mobile DTV have validated the full range of these devices, in addition to handsets such as the LG Voyager and Maize with ATSC Mobile DTV tuners.

Figure 7.16: ATSC Mobile DTV receiver devices.


7.13 Data Transmission on ATSC Mobile DTV

The ATSC Mobile DTV system can carry data services such as news, weather, headlines, traffic information, horoscopes, or downloadable video content. The data services are carried as non-real-time (NRT) content and use FLUTE for delivery. This includes the ESG and data in carousels (for on-screen interactive services). These services are broadcast continuously, so users can quickly catch up with important information. Figure 7.17 displays the protocol architecture of ATSC Mobile DTV. It may be seen that the RF layer and the ATSC signaling layers are unchanged in relation to ATSC, enabling non-M/H receivers to continue to receive the service. The M/H services are introduced through the M/H ensembles, which either carry them as M/H groups or carry just ATSC data, based on the system configuration. The audio and video, on the other hand, are delivered using RTP/RTCP to the ATSC Mobile DTV ensembles, which act as streaming IP tunnels to the receiver.

Figure 7.17: Protocol architecture for ATSC Mobile DTV.

7.14 Electronic Service Guide (ESG)

In ATSC Mobile DTV, a service (i.e., a mobile TV channel that has as its components a video stream and one or more audio streams) is defined as a group of virtual channels, all of which are delivered through streaming packets. These are delivered over UDP using FLUTE. The ESG (or service guide, in ATSC terminology) is the primary interface through which a user obtains the program information and selects the channels to play. It also serves as a tool for subscription management and on-demand access to pay content.


7.14.1 OMA BCAST ESG

ATSC Mobile DTV uses an ESG based on the OMA BCAST specifications. This makes the ESG uniform across many diverse platforms for the delivery of mobile TV, including DVB-H and MBMS. The OMA BCAST ESG is based on an XML schema and is delivered to the receiver using FLUTE as a unidirectional file delivery. The OMA BCAST ESG is very flexible, as it contains provisions to support multiple operators. At the most basic level of operation, the ESG file contains the logo files and the SDP (URLs containing the IP addresses and channel names). The receiver decodes the ESG to get the IP addresses of the various streams contained in the M/H multiplex and plays the streams selected by the user. All audio and video streams are delivered using RTP/RTCP over UDP. The information delivered using the OMA BCAST ESG also contains the short-term and long-term keys for DRM or conditional access. OMA BCAST ESG supports both the Smartcard profile and the DRM profile for content protection. (Content protection is discussed in Chapter 21.)

7.14.2 OMA-BCAST ESG XML Fragments

The service guide, an XML schema, contains, apart from the broadcaster ID and similar identifiers, a number of "fragments." These include:

● Service fragment (provides language selection and the service information URL)
● Schedule fragment
● Content fragment (gives information on a particular program, with start and end times)
● Access fragment (information on alternative bearers, content protection, and access information such as the URL in rtsp format)
● Purchase item fragment
● Purchase data fragment (price information, subscription period)
● Purchase channel fragment (provides the URL for online purchase)
● Preview data fragment (provides images and text as preview information)
● Interactivity data fragment (provides an interactivity window)

OMA-BCAST ESG also enables a range of interactivity services, such as voting (say, during an "American Idol" program) and online purchase. Electronic Service Guide generators conforming to the OMA-BCAST standard are available from many companies; examples are FastESG™ from EXPWAY and the Jade™ ESG generator from Thomson.
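Since the Service Guide is an XML document delivered as files, a fragment can be sketched programmatically. The Python snippet below builds a toy Service-fragment-like element; the element and attribute names (`Service`, `Name`, `AccessURL`) are simplified stand-ins for illustration, not the actual OMA-BCAST schema.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: the element and attribute names below are
# simplified stand-ins, not the real OMA-BCAST Service Guide schema.
def build_service_fragment(service_id, name, lang, sdp_url):
    svc = ET.Element("Service", id=service_id, version="1")
    ET.SubElement(svc, "Name", {"xml:lang": lang}).text = name
    # Access information normally lives in a separate Access fragment;
    # it is shown inline here for brevity.
    ET.SubElement(svc, "AccessURL").text = sdp_url
    return ET.tostring(svc, encoding="unicode")

xml_text = build_service_fragment("svc-001", "News 24", "en",
                                  "rtsp://example.net/news24.sdp")
```

A receiver-side parser would walk such fragments to populate its channel list and resolve each service to the SDP describing its streams.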

7.15 ATSC Mobile DTV Pilot Projects and Commercial Launches

A number of TV stations signed up for pilot projects implementing ATSC Mobile DTV transmissions. These include WRC-DT (NBC Universal), WHUT-DT (PBS), WDCA-DT (Fox), WUSA-DT (Gannett), and others, among about 80 stations that signed up for such transmissions.

ATSC Mobile DTV transmitters are now progressively going on air. In June 2009, ION Media announced mobile DTV broadcasts from ION stations WPXN (Channel 31) in New York City and WPXW in Washington, D.C.

7.16 Example of an ATSC Mobile DTV Transmission System for Mobile TV

An example of ATSC Mobile DTV components that can be used to set up a mobile TV transmission is available from Harris. It comprises:

1. NetVX™ Video Networking Platform for compression and multiplexing: from Harris, composed of a NetVX frame with power supply and controller, plus NetVX encoders. Each 1RU encoder can encode one mobile TV channel using H.264 video and HE-AAC audio.
2. ATSC Mobile DTV Multiplexer: an ATSC Mobile DTV multiplexer from Axcera (MHPM-A) provides all the functions of ATSC Mobile DTV preprocessing, coding, and stream multiplexing.

Figure 7.18: ATSC Mobile DTV system design using Harris® and Axcera® components.


3. Synchrony™ Mobile Networking Adapter: provides synchronization and timing for a distributed transmission system (DTS).
4. Apex M2X™ multi-modulator platform: provides DTS-compatible modulation conforming to transmitter network requirements.
5. Roundbox™ Multimedia Electronic Program Guide.
6. Maxiva UAX: UHF transmitters from 2 kW to 10 kW.

Before We Close: Some FAQs

1. What is the resolution of encoding in the ATSC Mobile DTV H.264 baseline profile? The encoding is MPEG-4 Part 10 (baseline profile) and the supported resolution is 416 × 240. Audio is encoded in MPEG-4 Part 3 (HE-AAC v2). The encoding rate can be up to 768 kbps.
2. In which format does the ATSC Mobile DTV service multiplexer deliver the signals? The signals from the multiplexer can be in ASI format or IP (TS over IP).
3. Can ATSC Mobile DTV carry Windows Media (VC-1) content? The content formats (RTP payload types) that can be carried as A/V streams are at present limited to H.264/AVC for video and HE-AAC v2 for audio. VC-1 content can, however, be delivered as a file using a FLUTE file delivery session.
4. Is it possible to carry still pictures via an M/H system? Yes; the syntax descriptor for AVC can indicate the presence of still pictures.
5. What type of tuners will ATSC Mobile DTV receivers require? The tuners for ATSC Mobile DTV are essentially UHF tuners. A multistandard tuner such as the ADMTV803, which is designed for CMMB and DTMB (China), DVB-H and DVB-T (Europe), DAB (Europe), T-DMB (Korea), ISDB-T full-seg and 1-seg (Japan), and ATSC Mobile DTV (United States), can be used.

CHAPTER 8

Mobile TV Using DVB-H Technologies Just because something doesn’t do what you planned it to do doesn’t mean it is useless. Thomas Alva Edison

8.1 Introduction: Digital Video Broadcasting to Handhelds

DVB-H technology is designed to use the digital terrestrial TV broadcast infrastructure to deliver multimedia services to mobiles. It can use the same spectrum slots used by digital TV. It has been designed to meet almost all the objectives of delivering a TV service to handhelds, which include:

● Broadcast service reaching potentially unlimited users.
● Delivery of sufficiently large transmitted power so that mobiles can work even within buildings.
● Conservation of battery power used in receiving the TV service of choice.
● Use of the terrestrial broadcast spectrum, which is being freed up as a result of the digitalization of TV networks.
● Robust coding and error correction to cater to the highly variable signal strength conditions encountered in the handheld environment.
● Minimum infrastructure to roll out TV services for mobiles; DVB-H can use the same infrastructure as DVB-T.

A DVB-H service can deliver 20–40 channels or more (depending on the bit rate, modulation, and bandwidth slot), or up to 11 Mbps (typical) in one DVB-H multiplex, which can reach millions of viewers because it operates in broadcast mode. The following are the options for configuring a DVB-H system:

● Bandwidth modes of 5, 6, 7, and 8 MHz
● COFDM carrier modes 2 K, 4 K, and 8 K
● Modulation formats of QPSK (4QAM), 16QAM, and 64QAM

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00008-4


DVB-H was standardized by the DVB Project and ETSI under EN 302 304 in November 2004. Owing to the evolving nature of the technology, there are newer versions of the basic specifications that take into account the latest developments. The technology has been validated in a number of trials, including Helsinki; Pittsburgh, Pennsylvania; Oxford; Barcelona; and Berlin. DVB-H is based on open standards and is compatible with DVB-T. It follows the IP datacast model, and the entire network is end-to-end IP.

8.2 Why DVB-H?

Digital video broadcasting using terrestrial transmission is a widely used technology, with over 50 countries already having terrestrial transmissions in digital mode. Even in countries in which analog TV transmission is the norm, digital terrestrial transmission is rapidly being introduced and is replacing analog terrestrial transmissions. In the process, spectrum is being freed up and channel capacity is going up: a single spectrum slot that carried one channel in analog mode can carry six to eight channels in a DVB-T multiplex. DVB-T services are not directly suited to mobile devices, as the DVB-T standards were formulated for fixed receivers with relatively large roof-mounted antennas and no limitations on receiver battery power. Straight reception of DVB-T in a mobile environment, with its much lower signal strengths, mobility, and fading, is therefore unsuitable. The DVB-H standard, which addresses these factors through suitable enhancements to the specifications, becomes an ideal medium for mobile TV delivery. The other factor that tilts the scale toward terrestrial broadcast is that UMTS or 3G-based mobile TV services, which are unicast in nature, are not scalable for mass delivery, and these spectrum bands are in any event hard pressed for capacity.

8.3 How Does DVB-H Work?

DVB-H is based on IP transport. Video is typically encoded using high-efficiency protocols such as MPEG-4/AVC (H.264) or VC-1, which can provide QVGA coding at 384 kbps or less. These encoders can work on real-time TV signals and provide the encoded output in IP format. Because it is fundamentally an IP transport, DVB-H can support video and audio coding in any format and any A/V stream type. The resolution and frame size can be selected by the service provider to meet the bit rate objectives. The data is then transmitted using an IP datacast.


Figure 8.1: DVB-H mobile TV transmission system.

In a typical DVB-H environment, a number of TV and audio services are aggregated in a content provisioning system, which may have capabilities to provide interactive services based on reverse-path interactivity. The video and audio services are encoded by mobile encoders, all of which are connected through an IP switch to an IP encapsulator; the encapsulator combines the video and audio services as well as the Electronic Service Guide (ESG) into IP frames. Content protection in the form of conditional access (CA) or DRM is also applied to the transmitted data. Not all DVB-H systems are designed to handle interactive content; some are organized as pure broadcast systems. There are also differences in the way content protection and the ESG are handled that have a bearing on receiver characteristics. These lead to variant implementation models of DVB-H, i.e., DVB-CBMS (a DVB standard) and OMA-BCAST (standardized by the Open Mobile Alliance), which are discussed later in the chapter. The IP encapsulator also organizes each channel's data into time slices, so that the receiver need remain active only during the times when data for the selected channel is expected on air.


Figure 8.2: DVB-H IP datacasting.

The IP encapsulator also applies a more robust forward error correction code, which can deliver reliable signals in typical mobile environments. The data rate at the output of an IP encapsulator under DVB-H will in general depend on the modulation type used as well as the bandwidth available. Typically, a DVB-H multiplex carries about 11 Mbps of data, which when modulated occupies a 6–8 MHz channel. This compares with a 21 Mbps multiplex for DVB-T service in the VHF band. The effectively lower transmission rate for DVB-H is due to the higher level of forward error correction applied to make the transmissions more robust for the handheld environment. The output of the IP encapsulator, which is in ASI format, is then modulated by a COFDM modulator with 4 K (or 8 K) carriers. COFDM modulation provides the necessary resilience against selective fading and other propagation conditions. The DVB-T standard provides for 2 K or 8 K carriers in the COFDM modulation. The 4 K mode has been envisaged for use in DVB-H because 2 K carriers would not give adequate protection against frequency-selective fading and would also limit the cell size owing to the guard interval requirement for single-frequency networks (SFNs). The 8 K carrier mode, on the other hand, places the carriers too close together in frequency, so that Doppler shifts become significant for moving receivers. Hence the new 4 K carrier mode was incorporated into the DVB-H standard. The 4 K mode provides a better compromise between the cell size and
the Doppler effects due to motion. A 4 K symbol interleaver is also used in the modulation process. However, it should be recognized that the carrier mode actually used depends on the frequency band employed, i.e., the UHF band or the L-band. Each carrier can be modulated with QPSK, 16QAM, or 64QAM.

Figure 8.3: 4 K mode is a new introduction in DVB-H.

In practice, if the DVB-H system is planned in the UHF band (700 MHz), the Doppler shifts are not significant and the 8 K mode can be used. In the L-band or the S-band, however, the 4 K or 2 K mode needs to be used. The DVB-H standard provides for COFDM modulation, which is suitable for SFNs. The system uses GPS-based time clocks and time stamping to ensure that all the transmitters in a given area operate in time synchronism, which is needed for SFNs. This also implies that repeaters can be used in the coverage area at the same frequency, and these repeaters add to the signal strength received at the mobile.
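The relationship between carrier mode, modulation, code rate, and multiplex capacity can be checked with the standard DVB-T/H useful-bitrate relation: data carriers per symbol × bits per carrier × code rate × the 188/204 MPEG-2 sync overhead, divided by the total symbol duration. A sketch in Python:

```python
# Useful (post-FEC) payload bitrate of a DVB-T/H multiplex, following the
# standard DVB-T relation. Carrier counts and the 7/64 us elementary period
# are the values defined for 8 MHz channels.

DATA_CARRIERS = {"2K": 1512, "4K": 3024, "8K": 6048}
BITS_PER_CARRIER = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def useful_bitrate_mbps(mode, modulation, code_rate, guard, bandwidth_mhz=8):
    t_elem = (7 / 64) * (8 / bandwidth_mhz)       # elementary period, us
    fft = {"2K": 2048, "4K": 4096, "8K": 8192}[mode]
    tu = fft * t_elem                             # useful symbol duration, us
    ts = tu * (1 + guard)                         # total symbol incl. guard
    bits = DATA_CARRIERS[mode] * BITS_PER_CARRIER[modulation]
    return bits * code_rate * (188 / 204) / ts    # bits per us = Mbit/s

# 16QAM, rate 1/2, guard 1/4 in an 8 MHz slot: roughly 10 Mbit/s, in line
# with the ~11 Mbit/s "typical" DVB-H multiplex quoted above.
rate = useful_bitrate_mbps("8K", "16QAM", 0.5, 1 / 4)
```

Note that the 2 K, 4 K, and 8 K modes all yield the same bitrate for a given modulation and guard fraction, since halving the carrier count also halves the symbol duration; the modes differ instead in SFN cell size and Doppler tolerance, as described above.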

8.3.1 Data Carousels and FLUTE

Reliable distribution of files over broadcast networks requires the use of a protocol such as FLUTE (File Delivery over Unidirectional Transport). FLUTE builds on asynchronous layered coding (ALC) and relies on forward error correction for reliable delivery. FLUTE sessions are identified by a source IP address (announced via an out-of-band technique such as SDP) and a transport session identifier (TSI).


FLUTE can deliver file carousels by transmitting the files repeatedly (i.e., in cyclical order) so that all receivers ultimately receive them. FLUTE can also be used to deliver files through a file delivery session; applications such as the ESG are delivered this way. For this purpose a file delivery table (FDT) specifies the attributes of all files delivered in the session.
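The carousel idea can be sketched in a few lines of Python. This is not the FLUTE wire format (real FLUTE adds ALC/LCT framing and FEC per RFC 3926); it only illustrates an FDT-style announcement followed by cyclic retransmission of the objects:

```python
from itertools import islice

def carousel(files):
    """files: dict mapping a transport object id (TOI) -> payload bytes."""
    fdt = {toi: len(data) for toi, data in files.items()}
    yield ("FDT", fdt)               # announce the objects and their sizes
    while True:                      # then retransmit them cyclically
        for toi, data in files.items():
            yield (toi, data)

stream = carousel({1: b"esg.xml", 2: b"logo.png"})
packets = list(islice(stream, 5))    # the FDT plus two full cycles
```

A receiver that tunes in at any point eventually sees every object, at the cost of bandwidth spent on repetition; one-shot file delivery sessions avoid that cost for transfers such as the ESG.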

8.4 Technology of DVB-H

8.4.1 Principles of the DVB-H System

Building upon the principles of the DVB-T standard and the digital audio broadcasting standards, the DVB-H standard adds the functional elements necessary for the mobile handheld reception environment. Both DVB-H and DVB-T use the same physical layer, and DVB-H can be backward compatible with DVB-T. Like DVB-T, DVB-H can be carried on the same MPEG-2 transport stream and use the same transmitter and OFDM modulators for its signal, with the modification for the 4 K mode. Between 20 and 40 television and audio programs targeted for handheld devices can be transmitted in a single multiplex, or the capacity of a multiplex can be shared between DVB-T and DVB-H. In practice, the bit rate for a DVB-H multiplex can range from 5 to 21 Mbps. DVB-H provides additional support for mobile handheld reception. This includes battery saving through time slicing, as well as increased general robustness and improved error resilience compared to DVB-T through multiprotocol encapsulation–forward error correction (MPE-FEC). In addition, DVB-H broadcasts sound, picture, and other data using IPv6. The following are the basic attributes of a DVB-H system:

● Encoding of audio, video, data, or files
● Use of IP datacasting for delivery of data to multiple receivers
● Organization of data into a group of packets for each channel (time slicing)
● Insertion of appropriate signaling data for carrying the DVB-H stream information
● Application of forward error correction and multiprotocol encapsulation
● GPS time stamping for single-frequency networks
● Modulation using QPSK, 16QAM, or 64QAM on 4 K (or 8 K) COFDM carriers with frequency interleaving

8.4.2 Functional Elements of the DVB-IP Datacast Model

DVB-H uses IP datacasting (referred to as IPDC). The process involves packaging digital content into IP packets and then delivering these packets in a reliable manner. The IP platform does not restrict the type of content that can be carried, and hence IPDC is suitable for
carrying live video, video downloads (via file transfer), music files, audio and video streams (in streaming format), web pages, games, or other types of content. Compared to unicast IP networks, IPDC provides significant advantages, as broadcast networks can reach millions of users (an unrestricted number) and are inherently high speed. Using IP as the base technology has the advantage that the data, including content, can be handled by the same protocols and devices that have been used extensively on the Internet, for which inexpensive devices and management techniques are available. The transmission medium is also neutral to the type of content being carried, which can be live TV, audio and video files, or HTML/XML web pages. The data to be broadcast consists of two types: the broadcast content and the service description, such as PSI/SI data and the electronic service guide. In addition, the data may contain rights management information for access or subscription to the content. The IP layer provides sockets through which information of each type can be transmitted.

8.4.3 Time Slicing

One of the features that distinguishes DVB-H from DVB-T is time slicing of the channel data on the final multiplex. In DVB-T, a number of channels are also multiplexed together (e.g., six to eight services in an 8 MHz multiplex), but at the multiplexing level the packets for different channels follow sequentially. Because of the very high data rate, the receiver for each channel needs to be active all the time, as its packets are continuously arriving. In DVB-H, the IP encapsulator gives the full capacity of the multiplex to only one channel at a time. The packets for that channel therefore all arrive in a bunch, one after another, during its slot, with no packets from other channels in between. This allows a receiver that needs only one channel to become active only while the packets for that channel are grouped together (i.e., during the time slot allocated to the particular channel). At other times the receiver (tuner) can be switched off to conserve power; it needs to wake up just prior to the planned arrival of the designated channel slot (in practice, 200 msec is required for synchronization). This allows the mobile receiver to be in power-off mode for signal reception for up to 95% of the time, depending on the number of services multiplexed. In terms of time, the data for a period of 1–5 sec is delivered in a single burst. If the channel data rate is 0.5 Mbps, for example, the receiver needs to buffer 2.5 Mbits of data for a five-second "inactive time." Alternatively, for a TV service running at 25 fps, the receiver would buffer 125 frames of data. These buffered frames are displayed normally, and the user is not aware that the receiver is inactive.


Figure 8.4: Time slicing in DVB-H.

The amount of data sent in a burst is equal to one block encoded in the FEC frame, which may be 1–5 Mbits. When the receiver is not receiving the wanted burst of data, the tuner in the handheld device is inactive and therefore uses less power. There are alternative uses for the inactive period, however; for example, the receiver may measure the signal strength from nearby repeaters to work out a handover to a more appropriate transmitter or repeater. It is possible to place time-sliced (DVB-H) and non-time-sliced (DVB-T) services in the same multiplex.
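The power-saving arithmetic above is easy to reproduce. The sketch below assumes the figures quoted in the text: a 0.5 Mbit/s service, a 5-second burst cycle, the full ~11 Mbit/s multiplex devoted to each burst, and a 200 ms wake-up for resynchronization:

```python
def time_slice(channel_rate_mbps, burst_period_s, burst_rate_mbps, wakeup_s=0.2):
    burst_mbits = channel_rate_mbps * burst_period_s   # data buffered per burst
    on_air_s = burst_mbits / burst_rate_mbps           # duration of the burst
    duty = (on_air_s + wakeup_s) / burst_period_s      # fraction of time awake
    return burst_mbits, on_air_s, 1.0 - duty           # Mbit, seconds, saving

buffered, burst_s, saving = time_slice(0.5, 5.0, 11.0)
# "buffered" is the 2.5 Mbit quoted in the text; under these assumptions the
# tuner is off roughly 91% of the time.
```

The saving falls short of the 95% upper bound mainly because of the fixed wake-up time, which is why longer burst periods (at the cost of more buffering and slower channel changes) improve battery life.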

8.4.4 Switching Time Between Channels and Transmitter Parameter Signaling (TPS) Bits

One issue that arises from the receiver being in power-off mode for a significant portion of the time is the time needed to switch TV channels on the mobile. To reduce the search time and enable "fast service discovery," the signaling bits of the DVB-T stream carry information about the DVB-H streams as well. The DVB-T signaling frame consists of 68 TPS bits, of which only 23 are used for DVB-T parameters. When DVB-H is carried on the same multiplex, some of the unused TPS bits are used to carry
information about the DVB-H. The following types of information are carried by the TPS bits:

● Whether DVB-H is present in the DVB multiplex
● 4 K or 8 K mode
● Use of time slicing
● Use of forward error correction

The added TPS bits in the signaling stream help in fast retuning of the newly selected channel as well as handoffs in the mobile environment, as the DVB-H receiver is aware of the status of the entire transmit stream.

8.4.5 MPE-FEC

Reception by handheld devices is quite different from reception by fixed terrestrial antennas. First, the antennas themselves are quite small and have low gain. Second, with the handset in a mobile environment, the received signal can undergo rapid fluctuations in power. Despite the robust physical layer using COFDM transmission, in which selective fading is reduced owing to the use of single-frequency networks and the reinforcement of received signals from all sources, reflected and direct, additional protection is needed in the form of forward error correction. The video and audio data in a DVB-H environment is delivered using IP datacasting. This implies that the data is encapsulated with IP headers and transmitted in the same way as it is over the Internet. However, the radio environment is not as friendly as the Internet and is subject to a high error rate due to signal level variations, interference, and other transmission effects. This requires the data to be well protected, which DVB-H achieves using forward error correction. The IP encapsulator carries out the additional functions of MPE-FEC. The FEC is implemented at the link level (i.e., before the data is encrypted). It should be recognized that DVB-H uses the physical layer of DVB-T (i.e., the COFDM modulation). COFDM is very robust and provides good reception even under conditions of multipath transmission; MPE-FEC provides a further degree of protection over and above it. The data coming from the encoder is put into an FEC frame, which is prepared using the Reed–Solomon code RS(255, 191). The frame consists of up to 1024 rows. Each row has 191 columns (each column being a byte) of IP data and 64 columns of FEC parity bytes. Each row thus represents 191 bytes of IP data, which is expanded to 255 bytes by adding the forward error correction parity bytes.
If 1024 rows are used in the frame, then one frame contains 191 Kbytes of IP data and 255 Kbytes of transmitted data. This can also be represented as 1.528 Mbits of IP data and 2.040 Mbits of transmitted data (Figure 8.5).


Figure 8.5: MPE-FEC frame structure.

For an encoder running at 384 kbps (48 Kbytes per second), one FEC frame can carry 3.97 sec of data, which is transmitted as one burst. This corresponds to approximately 100 frames at a 25 fps coding rate. The use of FEC reduces the carrier-to-noise ratio required to receive the signals by up to 7 dB, which gives the handheld devices significant resilience in receiving DVB-H transmissions.
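The frame bookkeeping can be verified directly. The sketch below reads "48 Kbytes per second" as 48 × 1024 bytes, which reproduces the burst duration quoted in the text:

```python
# MPE-FEC frame bookkeeping for RS(255, 191) at the maximum 1024 rows.
ROWS = 1024
IP_COLS, PARITY_COLS = 191, 64             # 255 = 191 data + 64 parity bytes

ip_bytes = ROWS * IP_COLS                  # IP data per frame (~191 Kbytes)
tx_bytes = ROWS * (IP_COLS + PARITY_COLS)  # transmitted data (~255 Kbytes)
overhead = tx_bytes / ip_bytes             # 255/191, about a 1.34x expansion

# A 384 kbps encoder (48 Kbytes/s, with 1 Kbyte = 1024 bytes) fills one
# frame in just under 4 seconds -- the "3.97 sec" burst in the text.
burst_seconds = ip_bytes / (48 * 1024)
```

The same numbers, expressed in bits, give roughly the 1.5 Mbit of IP data and 2.0 Mbit of transmitted data per frame shown in Figure 8.5.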

8.5 DVB-H Higher Layer Protocols

DVB-H is a multimedia transmission system that is expected to serve multiple applications and file formats, including audio and video streaming, file transfer, the electronic service guide, and HTML or XML data. The standard has therefore been designed with an appropriately layered protocol structure to carry out these tasks over the IP datacast layer, specified through a protocol stack with a number of layers. The IP datacasting layer allows data content to be delivered in the form of packets over the DVB-H physical network, using the UDP/IP stack at the network layer and MPE at the data-link layer.


Figure 8.6: DVB-H protocol stack.

The video and audio data, encoded using H.264/AVC, is streamed in the application layer over the underlying UDP/IP layers and carried by the network and data-link layers below. Datacasting in DVB-H is defined based on IPv6 (Internet Protocol version 6). This provides more flexibility in the management of the application and is compatible with the future requirements of IP applications, which may require interactivity and the addressing of every mobile device, with the attendant IPv6 security and features.
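The UDP/IPv6 layering can be exercised with ordinary sockets. In a real IPDC system the encapsulator addresses multicast groups announced via the ESG/SDP; the sketch below simply pushes one datagram across the IPv6 loopback to show where the application payload sits relative to UDP/IPv6 (the payload bytes are a placeholder, not a real RTP packet):

```python
import socket

# Receiver side: an IPv6 UDP socket on the loopback address.
rx = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
rx.bind(("::1", 0))                  # let the OS pick a port
port = rx.getsockname()[1]

# Sender side: one datagram carrying a placeholder payload.
tx = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
payload = b"\x80" + b"\x00" * 11 + b"placeholder payload"  # not real RTP
tx.sendto(payload, ("::1", port))

data, addr = rx.recvfrom(2048)
tx.close()
rx.close()
```

Everything below the socket call (MPE encapsulation, time slicing, COFDM) is invisible to the application, which is precisely the point of the IP datacast model.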

8.6 Network Architecture

The DVB-H standard enables operation of video broadcast systems in a very flexible manner, with multiple configurations possible either with existing digital TV networks or as new installations. It needs to be borne in mind that although DVB-T transmissions are meant for relatively large antennas mounted on rooftops, DVB-H needs to reach very small antennas in the mobile environment, and the transmissions must also reach inside buildings. Owing to these factors the effective radiated power (ERP) needs to be much higher for DVB-H systems. The power to be transmitted also depends on the antenna height. As an example, if the ERP required for a mobile with a minimum power threshold of –24.7 dBm and a range of up to 5 km is 46 dBm (about 40 W) for a 120 m antenna height, then a 25 m antenna will require approximately 70 dBm of EIRP (10 kW).
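Link-budget figures in this section mix dBm and watts; the conversion is a one-liner worth having at hand:

```python
import math

def dbm_to_watts(dbm):
    return 10 ** (dbm / 10) / 1000.0      # dBm is referenced to 1 mW

def watts_to_dbm(watts):
    return 10 * math.log10(watts * 1000.0)

eirp_w = dbm_to_watts(70)   # the 70 dBm EIRP above corresponds to 10 kW
```

The logarithmic scale is what makes the antenna-height example so stark: the step from 46 dBm to 70 dBm is a factor of about 250 in radiated power.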


Quick Facts: DVB-H Standards under ETSI

EN 302 304: Transmission System for Handheld Terminals
TR 102 473: IP Datacast over DVB-H: Use Cases and Services
TR 102 469: IP Datacast over DVB-H: Architecture
TR 102 401: Transmission to Handheld Terminals
TR 102 377: DVB-H Implementation Guidelines
TS 102 472: IP Datacast over DVB-H: Content Delivery Protocols
TS 102 471: IP Datacast over DVB-H: Electronic Service Guide (ESG)
TS 102 470: IP Datacast over DVB-H: Program Specific Information (PSI)/Service Information (SI)

8.7 DVB-H Transmission

DVB-H technology has been designed to share the existing infrastructure of DVB-T, which is being rolled out for digital TV implementation; hence, sharing of the DVB-T network has been given special consideration in the specifications framework. DVB-H can be operated in three network configurations:

1. DVB-H shared network (sharing the MPEG-2 multiplex): In a DVB-H shared network, the mobile TV channels, after IP encapsulation (IPE), share the same DVB-T multiplex along with other terrestrial TV programs. The terrestrial TV programs are coded in MPEG-2, while the mobile TV programs use MPEG-4 coding and IPE. The

Figure 8.7: DVB-H on shared multiplex.


multiplex combines these into a single transmit stream, which is then transmitted after modulation.
2. DVB-H hierarchical network (sharing the DVB-T network by hierarchy): In a hierarchical network, the modulation is hierarchical, with the two streams, DVB-T and DVB-H, forming part of the same modulator output (Figure 8.8 shows hierarchical modulation). DVB-H is modulated as the high-priority stream and DVB-T as the low-priority stream. The high-priority modulation is more robust (e.g., QPSK) as opposed to the low-priority stream, which may be 16QAM; a lower-density modulation scheme provides higher protection against errors than higher-density schemes.

Figure 8.8: Hierarchical modulation in DVB-H.

3. DVB-H dedicated network: The DVB-T carrier is used exclusively for DVB-H transmission. In a dedicated network, the COFDM carrier will be used exclusively by the mobile TV and audio channels as an IP datacast with the MPEG-2 envelope. Dedicated networks are generally used by new operators who do not have existing digital terrestrial broadcasting.


Figure 8.9: Reception in DVB-H shared environment.

8.8 Transmitter Networks

The DVB-H implementation guidelines provide for a reference receiver (ETSI TR 102 377), which serves as a benchmark for system design. The C/N required depends on the modulation and the code rate used. Table 8.1 provides indicative values of the required C/N. The design provides for a C/N of 16 dB for 16QAM at half code rate, with 2 dB of margin. Indoor coverage typically needs to allow for penetration losses of 11 dB or more. Different models are applied for estimating losses in urban, suburban, and rural environments. Figure 8.10 shows the field strength required for a receiver with an antenna gain of –12 dB operating in the UHF band (Band IV).

Table 8.1: C/N Required in DVB-H.

Modulation   Code Rate   Typical C/N with Guard Interval 1/4
QPSK         1/2          9 dB
QPSK         2/3         11 dB
16QAM        1/2         14 dB
16QAM        2/3         18 dB
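The C/N figures translate into a rough receiver sensitivity once a noise bandwidth and tuner noise figure are assumed. In the sketch below, the 7.61 MHz noise bandwidth of an 8 MHz DVB channel and the 5 dB noise figure are illustrative assumptions, not values taken from the DVB-H reference receiver:

```python
import math

def min_signal_dbm(cn_db, bandwidth_hz=7.61e6, noise_figure_db=5.0):
    # Thermal noise floor kTB at 290 K is -174 dBm/Hz; add the receiver
    # noise figure and the required C/N to get the minimum usable signal.
    noise_floor = -174 + 10 * math.log10(bandwidth_hz)
    return noise_floor + noise_figure_db + cn_db

# 16QAM at rate 1/2 needs 14 dB C/N (Table 8.1):
sensitivity = min_signal_dbm(14)   # roughly -86 dBm under these assumptions
```

Each step down the table (e.g., to QPSK at rate 1/2) buys the same number of dB in sensitivity, which is the trade between multiplex capacity and coverage that the network planner makes.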


Figure 8.10: DVB-H transmitter coverage areas and field strength.

Figure 8.10 demonstrates the typical distances that can be covered when a main DVB-T transmitter also hosts a DVB-H antenna. These distances depend on the terrain and need to be mapped more accurately. There are also issues of adjacent-channel interference, particularly with analog channels, which have a bearing on network planning. In most cities, the main transmitters may have a height of 200–300 m and an ERP of 50–100 kW. Additional transmitters are likely to face mast-height limitations similar to those of cellular base stations, i.e., in the range of 30–40 m, with transmitted power limited to about 100–200 W (1–2 kW ERP). Applying these design parameters, a city the size of Paris or New Delhi will typically require 17–20 transmitters in addition to gap fillers.

8.8.1 Single-Frequency and Multifrequency Networks

Reception of DVB-H requires delivery of signals with field strengths in the range of 60–90 dBμV/m, which is difficult to achieve using a single transmitter for a large city in an urban environment. The EIRP required for such coverage would be in the range of 1–50 MW, so using a larger number of low-power transmitters is the only practical way to cover a city. This is where a DVB-H transmission network diverges from a DVB-T system, which uses outdoor antennas with a line of sight to the main transmitter; the signal strengths required for DVB-H are some 30 dB higher than for a DVB-T system. Delivery of DVB-H to cellphones is similar to the delivery of GSM or 3G signals, and a network of towers and repeaters is needed. The cells that these transmitters create are of course larger, as there is no limitation on cell size due to the number of users.


Depending on the area required to be covered, the DVB-H systems may be engineered with single-frequency networks or may need multifrequency networks.

8.8.2 DVB-H Cell

A small town can be covered by a single DVB-H "cell" comprising one transmitter and 10–20 repeaters. The repeaters cover the areas left in shadow by the geographical terrain. A repeater is essentially a mini-transmitter with a high-gain antenna that receives the signals from the main transmitter and retransmits them with increased RF power. Owing to the SFN requirements, this topology cannot be extended beyond a certain range, as the time delay in reception from the main transmitter would put the retransmitted signal out of phase with the main transmitter. The number of repeaters in a DVB-H cell is determined by the power of the main transmitter as well as the height of the tower; a very high tower reduces the shadow areas and hence the number of repeaters required for a given geographical area.

8.8.3 Single-Frequency Networks

Larger areas (e.g., a city or an area around 50 km in radius) can be covered by using an SFN. The SFN comprises a number of DVB-H cells, each with a transmitter and a number of repeaters. The transmitters receive the signal in the form of an MPEG-2 transport stream, which originates from the IPE.

Figure 8.11: DVB-H SFNs.


An IP network is used to distribute the signal to all the transmitters in a given area. All the transmitter sites thus receive the same signal, time-stamped by a GPS-based clock. At each transmitter site, the COFDM modulator synchronizes the signal using a GPS time reference so that all transmitters transmit identically timed signals regardless of their geographical location. The number of repeaters used with each transmitter can be increased to provide indoor reception, leading to the nomenclature "dense SFN." The synchronization of SFN transmitters is done using the mega frame initialization packet (MIP), which is inserted in all DVB-H/T transport streams and indicates the start of the mega frame. All SFN transmitters use an SFN adapter, which aligns the MIP with those transmitted by adjacent transmitters. The output frame is synchronized with an external clock reference (10 MHz) derived from a GPS source. Figure 8.12 shows typical SFN correlation distances.

Figure 8.12: SFN correlation distance.
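The SFN range constraint described above comes down to the guard interval: signals from transmitters whose path difference exceeds the distance light travels in one guard interval arrive outside the guard window and self-interfere. A rough sketch of this budget (the 112 μs figure assumes the 8K mode with a 1/8 guard interval in an 8 MHz channel, which is an illustrative assumption, not stated in the text):

```python
# Illustrative sketch: the maximum transmitter separation in an SFN is
# bounded by the distance a signal travels in one guard interval; larger
# delay spreads fall outside the guard window and cause self-interference.

C = 299_792_458.0  # speed of light, m/s

def max_sfn_separation_km(guard_interval_us: float) -> float:
    """Path-difference budget (km) for a given guard interval (microseconds)."""
    return C * guard_interval_us * 1e-6 / 1000.0

# 8K mode in an 8 MHz channel: useful symbol ~896 us, so a 1/8 guard
# interval is 112 us, i.e., roughly a 33.6 km path-difference budget.
print(round(max_sfn_separation_km(112), 1))  # ~33.6
```

Longer guard intervals (1/4) buy a wider SFN at the cost of capacity, which is one reason SFN planning and guard-interval choice go together.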

8.8.4 Multifrequency Networks
When the required coverage area is large (e.g., an entire country spanning several hundred kilometers), sourcing the signal from a single IPE is not practical, owing to the time delays in delivering signals to all transmitters. In such a case, transmitters beyond a certain range use different frequencies. Depending on the topography, five or six frequency slots may be needed to cover a country. In such cases, it is usual to distribute the signals by satellite so that hundreds of transmitters, including those in remote areas, can be fed. MFNs are easier to implement but require larger spectrum resources.


Chapter 8

8.9 Terminals and Handheld Units
DVB-H provides a technology for broadcasting live TV signals by encoding the content and IP-datacasting the packets after applying robust FEC. However, because the reception terminals are mobile phones (e.g., the Nokia N92), they need to incorporate a suitable reception antenna. Single-chip tuners and DVB-H decoders provide an efficient way to receive mobile TV on handsets, where the TV application is an "add-on" to the "normal" voice and data functions of the set, which use the 3G networks. DVB-H is thus not an "in-band" technology like MBMS, which uses the same spectrum as the 3G services. Unlike normal broadcast receivers, DVB-H receivers are generally configured with a return channel via the underlying 2G or 3G network. This implies that broadcasters can use these features to gain additional control over the sale of broadcast programs, content protection, and digital rights management. The need to incorporate these features has led to slightly different approaches to content protection and return-channel interactivity. These approaches are reflected in the DVB-H implementation profiles and are an area of future convergence of standards.

8.10 DVB-H Implementation Profiles
The DVB-H standards specify the use of IP datacast (IPDC) as the model for delivery of the content. To this extent, all DVB-H platforms are identical. What differentiates implementations are the return-path interactivity, the ESG, and content protection. While implementing a DVB-H service, the considerations therefore are:
● Format and capabilities of the ESG
● Content provisioning through return-channel interactivity
● Encryption of content for broadcast-level security, and application of digital rights management (DRM) to the content itself for its storage and later use

The work on standardization of implementation profiles has progressed in different forums, such as DVB, BMCO, OMA, and ETSI. This has led to convergence on two implementation profiles:
DVB-IPDC: A DVB standard intended for unconnected devices. This was formerly called DVB-CBMS (CBMS stands for Convergence of Broadcast and Mobile Services).
OMA-BCAST: An open standard of the Open Mobile Alliance (OMA) for IP datacast over any network where reverse connectivity is available.


Figure 8.13: DVB-H implementation profiles under DVB and Open Mobile Alliance (OMA).

8.10.1 DVB-IPDC
A working group called DVB-CBMS has formalized the audio and video formats to be used and the format of the ESG (electronic service guide); it is also responsible for recommendations on service protection and content protection. The CBMS standards were released by the DVB in December 2005 and have subsequently been renamed DVB-IPDC. The mode of delivery in DVB-IPDC over a DVB-H network is ALC/FLUTE: each channel is delivered as a number of time slices that form an IP carousel. The delivery of interactive content follows a similar mechanism, derived from the broadcast world, where unidirectional systems such as DTH or digital cable provide "interactivity" through a continuously transmitted IP data carousel carrying headlines, weather information, magazines, and so on, which users can download by pressing a button on their remote. There being no return path, the information requested is simply picked off the data carousel. DVB-IPDC is a multicast file delivery system that has its origin in this use of carousels to carry data files; it is used to deliver pictures, games, text, ringtones, and other data.
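The carousel idea can be illustrated with a toy sketch (a conceptual model only, not the FLUTE protocol itself): the same set of files is retransmitted cyclically, so a receiver with no return path obtains every file within one full cycle no matter when it tunes in.

```python
# Toy model of a data carousel: a fixed catalog of files is retransmitted
# cyclically; a receiver that joins at any point, with no return channel,
# still recovers every file within one full cycle.
from itertools import cycle, islice

def carousel(files):
    """Yield (name, payload) pairs forever, cycling through the catalog."""
    return cycle(files.items())

# Hypothetical catalog for illustration.
catalog = {"headlines.txt": b"...", "weather.txt": b"...", "logo.png": b"..."}

received = {}
for name, payload in islice(carousel(catalog), 5):  # "tune in" for 5 items
    received.setdefault(name, payload)

assert set(received) == set(catalog)  # one full cycle recovers everything
```

Real carousels add forward error correction and object versioning on top of this loop, but the no-return-path delivery principle is the same.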

8.10.2 OMA-BCAST
OMA-BCAST is a common standard for mobile TV irrespective of the broadcast network used. In principle, OMA-BCAST can be used with any of the mobile TV systems, such as MBMS, ATSC Mobile DTV, DVB-H, or 3GPP/3GPP2 unicast streaming. It is based on the very simple concept of defining a common methodology to carry content, service information, and content protection.


Figure 8.14: OMA-BCAST system operation for media and encryption-independent mobile TV services.

The following are the elements of OMA-BCAST:
● Content carried using either streaming or file delivery (FLUTE, operating a carousel for reliable delivery)
● Electronic Service Guide (ESG) generated per OMA-BCAST specifications
● Content security based on 18Crypt encryption and DRM, or the Smartcard profile (content security is discussed in Chapter 21)

8.11 Electronic Service Guide in DVB-H
Analogous to the DVB-H profiles (DVB-IPDC and OMA-BCAST), the ESGs also differ based on the implementation profile selected. The way ESG information is provided (and the way customers select the content to be viewed) is handled differently in the two implementations, owing to the return-path connectivity. The ESG service model in DVB-H is based on an XML schema and is intended to be consistent across all implementations and receiver devices.

8.11.1 OMA-BCAST ESG
For OMA-BCAST-based systems, there can be a number of subscription models. These can include the following:
● Subscription-based (limited time or open-ended)
● Program-based
● Event-based (e.g., a sporting event)
● Pay-per-view
● Token-based

A single ESG can be used for multiple operators, including individual operator guides. An example of the interactivity that can be achieved when a reverse path is available and an OMA-BCAST ESG is used is the Nokia MBS 3.2 solution, whose ESG lists not only the available services but also the purchase information.

8.11.2 DVB-IPDC ESG
The CBMS ESG profiles are still operator-specific, and implementations differ based on the parameters and profiles selected by the operator. An example of an ESG generator for mobile TV headends is the JADE™ ESG generator from Thomson, which can generate the ESG for either DVB-IPDC or OMA-BCAST environments. The program and service information can be picked up by the ESG generator either from a metadata server or directly from the ASI program streams using the event information table (EIT).

Quick FAQs: DVB-H
1. What factors determine channel-switching time in DVB-H?
Channel-changing delay in DVB-H is primarily related to the time-slicing scheme and the initial buffering delay before the next channel starts. The time-slicing parameters and the periodicity of random access points determine the delay in switching to a new channel. Variable bit rates, if implemented in multiplexing, also increase the delay.
2. Do DVB-H systems provide statistical multiplexing?
Statistical multiplexing in DVB-H is a difficult proposition due to the time-slicing scheme. All the video packets for a channel are transmitted in a thin time slice at a very high rate (through buffering), and then transmission for that channel ceases until the next time slice. The timing of the next slice also needs to be signaled to the receiver so that it knows when to wake up next, which is difficult if the bit rates are not fixed. However, recent products such as the R&S AVE264 audio/video encoder and the R&S AVP264 multiplex manager provide statistical multiplexing through "adaptive time slicing."
3. Why does DVB-H not offer graceful degradation of quality like MediaFLO?
DVB-H does not use hierarchical modulation with a base layer and an enhancement layer as MediaFLO does. In MediaFLO, under adverse transmission conditions only the base layer is delivered, giving lower quality but not a loss of reception. Hierarchical modulation is also used in DVB-H, but in the context of carrying DVB-T on the same multiplex.
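The time-slicing arithmetic behind the first two answers can be sketched with a short calculation. The burst parameters below are hypothetical examples chosen for illustration, not figures from the text:

```python
# Sketch of DVB-H time-slicing arithmetic (example figures, not from the
# text). A service's packets are sent in short high-rate bursts, and the
# receiver can power down its front end between bursts.

def burst_duration_s(burst_size_bits: float, burst_bitrate_bps: float) -> float:
    return burst_size_bits / burst_bitrate_bps

def cycle_time_s(burst_size_bits: float, service_bitrate_bps: float) -> float:
    # One burst must carry a full cycle's worth of the service's average rate.
    return burst_size_bits / service_bitrate_bps

def power_saving(burst_size_bits, burst_bitrate_bps, service_bitrate_bps):
    on = burst_duration_s(burst_size_bits, burst_bitrate_bps)
    cycle = cycle_time_s(burst_size_bits, service_bitrate_bps)
    return 1.0 - on / cycle

# Example: 2 Mbit bursts delivered at 10 Mbps for a 350 kbps service.
# The receiver is "on" ~0.2 s out of a ~5.7 s cycle (~96% power saving);
# a user switching channels waits, on average, about half a cycle for the
# next burst of the new channel, which is why long cycles hurt zapping time.
print(round(power_saving(2e6, 10e6, 350e3), 3))
```

The same relationship shows why variable (statistically multiplexed) bit rates complicate signaling: the cycle time, and hence the wake-up instant, stops being constant.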


8.12 Content Security
There are three approaches to content security:
● Open Security Framework (OSF): Broadcast security is provided by traditional conditional access systems, suitably modified for the mobile environment. This leads to handsets that are proprietary to specific networks.
● Use of DRM: A common encryption is used at the transmission level, such as ISMACrypt, with either proprietary or open DRM at the content level.
● Use of the smartcard profile: An open encryption such as IPsec or ISMACrypt is used at the broadcast level, with the smartcard profile (SCP) at the handset level providing content security.

The second and third approaches open up the possibility of a uniform broadcast system with handsets deployable across multiple operator networks. The topics of interactivity and digital rights management are of considerable importance and are covered in Chapters 20 and 21 of this book.

8.12.1 Content Security in DVB-IPDC Using the Open Security Framework (OSF)
The use of DVB-IPDC is supported largely by the broadcast industry, which has used these technologies for more than two decades in satellite digital TV and DTH systems. The CBMS-based standards likewise envisage the use of derivatives of traditional encryption systems for content protection and access control. The DVB-IPDC standards provide for this under the Open Security Framework (OSF), which allows the use of any encryption technique. Examples of such encryption systems are Irdeto Access, Viaccess, Mediaguard, and Conax. These systems are based on symmetric-key coding and require a corresponding smartcard or embedded key in the handsets. The drawback of such a content security scheme is that once an operator selects one encryption system for its content (say, Irdeto Access), the handsets used in the network cannot work seamlessly in other networks (e.g., those based on Viaccess or Conax). This has been viewed as a potential disadvantage, particularly by the mobile handset manufacturing industry, which sees free interoperator roaming capability as key to the growth of the industry.


Figure 8.15: DVB-H interactive content transmission via data carousel.

This approach is somewhat akin to replacing a set-top box with the decrypter and decoder functions in the mobile TV receiver. In this type of implementation, the mobile sets can still have additional interactivity via 3G networks. Interactivity using carousel-based technology has been demonstrated in various trials. In the DVB-H pilot project in Berlin conducted by T-Systems in 2005, interactivity was provided by "MiTV," an API for interactive services. The data carousel, using FLUTE, ran at 50 kbps and provided pictures and text synchronized to the video content. The service also used an ESG client compliant with the DVB-H CBMS ESG.

8.12.2 Content Protection Using OMA-BCAST over DVB-H
OMA-BCAST does away with variations in encryption systems by using common encryption schemes. Transmission encryption is provided using 18Crypt, which is based on encryption using IPsec, ISMACrypt, or SRTP (Secure Real-time Transport Protocol) and is supported by both OMA and DVB. Content protection can be provided using either DRM 2.0 or DVB-IPDC (found in earlier implementations of OMA-BCAST). The newer implementations use the concept of SIM-based security, which requires the smartcards


to conform to the OMA smartcard profile. Following are the two ways in which content is protected in OMA-BCAST:
DRM 2.0: Although encryption is used for transmission over the broadcast system, content protection is provided by OMA DRM 2.0.
Smartcard Profile (SCP): The smartcard profile is an alternative to content protection using DRM 2.0; it provides SIM-based interactions with the encryption system and the content management system.
The OMA-BCAST smartcard profile enables interoperability between all smartcard-profile-enabled handsets and SIM cards, as well as with all smartcard-profile broadcast service management platforms. In the OMA-BCAST architecture, the broadcast network is based on open encryption systems that create interoperability among operators and mobile handsets. Handsets with smartcard profiles are now widely available; examples include the Nokia N95, N96, and N76; the GSmart T600; and many others. The network uses mobile 3G for the return path, through which interactivity can be provided. OMA-BCAST thus provides an open framework independent of the transmission technology, such as DVB-H, DMB-T, or others.

Figure 8.16: Open DVB-H solution using OMA-BCAST.


The open-air interface provides specifications at the radio and application levels for DVB-H, the IP-layer protocols, the electronic service guide, payment and purchase protection, and the A/V coding carried in the DVB-H stream. This permits different networks and applications from different operators to connect to each handset. Interactivity in open-air-interface networks can, for example, be through website hyperlinks accessed via the 3G networks.

8.13 DVB-H Commercial Services
DVB-H systems have been extensively tested in a number of pilot projects spanning America, Europe, and Asia. These pilots have demonstrated the suitability of all elements of the DVB-H standard, including the source coding, IP datacasting, and COFDM reception, as well as confirming the suitability of handsets under varying transmission conditions. The first networks to launch commercial services were in Italy (La3) and Finland (Digita), timed to coincide with the FIFA World Cup 2006. Europe's first commercial DVB-H service was launched by La3 in Italy in June 2006. Table 8.2 provides details of some commercially operational networks.

8.13.1 United States
The U.S. digital TV scene is dominated by the extensive use of the ATSC transmission system, on which over 1200 stations are now active. The DVB-H standard, which relies on the basic DVB-T transport as a physical layer, thus cannot be added onto the existing ATSC digital TV networks, and implementations in the United States are now moving ahead with the ATSC Mobile DTV standard. Modeo (formerly Crown Castle Media) pioneered the DVB-H trials in the United States and also launched a commercial service using 5 MHz of capacity in the L-band. Trials were held in Pittsburgh, followed by the launch of the commercial service. The Crown Castle network used satellite distribution to feed the L-band transmitter network across the United States. However, the services were later discontinued when Hi-Wire acquired Modeo and AT&T subsequently acquired Hi-Wire, along with the spectrum.

8.13.2 Europe
The EU has adopted the DVB-H standard for Europe, which has lent some uniformity to the implementations put into place in different member countries. Commercial DVB-H services have been launched in a number of countries. However, countries such as the United Kingdom, France, and Germany have yet to see major DVB-H networks become operational, owing to regulatory processes and spectrum availability.


Table 8.2: Commercial DVB-H Networks.
● Italy (3-Italia): launched 2006 by La3, countrywide; DVB-IPDC profile; ISMACrypt, OSF with Nagravision; handsets include the Samsung SGH P910, P920, and P930 and the LG U900 and U960.
● Italy (Telecom Italia Mobile, TIM TV): launched 2006, countrywide; DVB-IPDC profile with OSF; handsets include the LG KU 950.
● Italy (Vodafone Italy): launched 2006, countrywide, on a network shared with TIM; DVB-IPDC profile with OSF; handsets include the LG KU 950.
● Albania (DigitAlb): launched end 2006, countrywide; DVB-IPDC profile.
● Austria (MEDIA BROADCAST with "one" and "3"): launched 2008; OMA-BCAST profile, with service protection by Castlabs; handsets include the Nokia N77 and N96 and the Samsung P960.
● Switzerland (Swisscom, Bluewin TV): launched 2008 in Zurich, Berne, Basel, and Geneva; OMA-BCAST profile (Nokia MBS 3.2) with the DRM profile, SCP supported by the headend.
● India (Doordarshan): launched 2007, Delhi only; DVB-IPDC profile; free to air; handsets include the Nokia N77 and N92.
● Vietnam (VTC): launched 2006 in Hanoi and Ho Chi Minh City; DVB-IPDC profile; handsets include the Nokia N77 and N92.
● Philippines (Smart Communications and MediaScape, myTV): launched 2007 in all major cities; OMA-BCAST profile with 18Crypt.
● The Netherlands (KPN/Digitenne): launched 2008, nationwide; OMA-BCAST profile (Nokia MBS 3.2) with DRM, SCP supported by the headend; handsets include the Samsung P960, LG KB620, and Nokia N96.


3 Italia DVB-H Network
An example of an early provider of DVB-H services is 3 Italia, which commenced DVB-H services in 2006 using the DVB-IPDC profile in its initial implementation. The transmissions are exclusively DVB-H; no hierarchical implementation with DVB-T has been used. Content protection uses OSF with the Nagravision CAS and an ISMACrypt implementation. The DVB-H system uses the following parameters:
● FFT: 8K
● Modulation: QPSK
● FEC: 1/2
● Guard interval: 1/8
● MPE-FEC: 3/4
● Time slice: 2 sec
The DVB-H services are provided by a network of over 1000 transmitters (5 W to 2.5 kW), which cover 90% of the population.
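Assuming these parameters run in a standard 8 MHz UHF channel (an assumption for illustration; the channel width is not stated above), the usual DVB-T/H useful-bitrate formula gives an estimate of the multiplex capacity:

```python
# Estimate of DVB-H multiplex capacity for the parameters listed above
# (8K FFT, QPSK, FEC 1/2, guard 1/8), using the standard DVB-T
# useful-bitrate formula. The 8 MHz channel width is an assumption.

def dvbt_net_bitrate_mbps(bw_mhz, fft, bits_per_carrier, code_rate, guard):
    T = 7.0 / (8.0 * bw_mhz * 1e6)            # elementary period (s)
    tu = fft * T                               # useful symbol duration (s)
    ts = tu * (1.0 + guard)                    # total symbol duration (s)
    useful_carriers = {8192: 6048, 4096: 3024, 2048: 1512}[fft]
    bits_per_symbol = useful_carriers * bits_per_carrier * code_rate
    # 188/204 accounts for Reed-Solomon overhead at the TS level.
    return bits_per_symbol * (188.0 / 204.0) / ts / 1e6

# QPSK carries 2 bits per carrier: TS-level rate comes to ~5.53 Mbps;
# the MPE-FEC 3/4 overhead leaves roughly 4.1 Mbps for service data.
net = dvbt_net_bitrate_mbps(8, 8192, 2, 0.5, 1/8)
print(round(net, 2), round(net * 0.75, 2))
```

At typical mobile TV service rates of a few hundred kbps, this capacity corresponds to on the order of a dozen channels per multiplex, which also answers the earlier FAQ on what limits channel count.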

Figure 8.17: The 3 Italia network.

8.14 Example of a DVB-H Transmission System for Mobile TV
A number of vendors now provide complete solutions for DVB-H broadcasting. A complete solution would consist of the following:
● MPEG-4/WMV9 audio and video encoders
● DVB-H IP encapsulators
● DVB-H modulators
● DVB-H transmitter system
● Gap fillers (repeaters)
● SFN adapters
● GPS receiver system

An example of a system for encoding, IP encapsulation, and transmission of mobile TV using the DVB-H product line from Unique Broadband Systems is given next for purposes of illustration. The video is encoded using MPEG-4/H.264 encoders, which convert standard-definition TV to QCIF or QVGA (or full-resolution video in MPEG-4) and provide an IP output for further IP encapsulation. An example is the ViBE™ mobile TV encoder from Thomson, which can encode to multiple standards, e.g., MPEG-4 or H.264/AVC. Audio encoding is per HE-AAC v1.0 and v2.0, AAC-LC, AMR-NB, and MPEG-1 Layer II. It also provides integrated scrambling for OMA-BCAST and OSF. IP encapsulation is the next stage in preparing the DVB-H signals for transmission. The IP encapsulator carries out multiple functions, including combining the various services, integrating the PSI/SI data streams, providing time-slicing control, and MPE encapsulation. A typical example is the OPAL™ IP encapsulator from Thomson. The RF signals are modulated to COFDM by the modulator. In the case of DVB-H, the modulator has additional functions to perform beyond its basic one of modulating the ASI stream to COFDM. The modulators can be set to an output bandwidth of 5, 6, 7, or 8 MHz, giving flexibility for the transmission plan or country-specific implementations. An example is the DVB-H modulator DVM-5000 from Unique Broadband Systems, with the following specifications:
● 30 MHz to 1 GHz RF output
● Full hierarchical mode support
● SFN and MFN support
● Web browser remote control
● SNMP remote control
● Full DVB-H support

Finally, the modulated signal is upconverted by an exciter and transmitted by a high-power transmitter. A DVB-H transmitter system with power ratings of up to 200 W is the DVB-H-TX50/100/200 from Unique Broadband Systems, which also offers SFN adapters (e.g., the DVB-T/DVB-H DVS 4010 SFN adapter), DVB-T/DVB-H modulators (DVM 5600), and a GPS receiver system.


Figure 8.18: An implementation example of a DVB-H transmission system. (Courtesy of Unique Broadband Systems)

DVB-H transmitters and repeaters are available from a number of vendors. Examples are the Atlas DVB-H solid-state transmitter from Harris, with a power rating of 9 kW, and the Atom DVB-H repeater, rated from 5 to 400 watts. For SFN networks, SFN adapters such as the DVS4010 from UBS can be used.

Before We Close: Some FAQs
1. How can you create or edit ESG files for DVB-H?
Use an XML editor, e.g., XMLFOX Advance.
2. What determines the number of channels carried in a DVB-H multiplex?
The modulation scheme used, i.e., QPSK, 16 QAM, or 64 QAM, and the bandwidth of the spectrum slot (5, 6, or 8 MHz).
3. Is hierarchical transmission of DVB-T and DVB-H popular? Why?
No, there are no major operators using hierarchical transmission. Hierarchical modulation makes the planning of the transmission network for DVB-T (low priority) more complex and SFN planning more critical.


4. Are any SDIO cards available for DVB-H? Why are such devices not popular?
Yes. An example is the Philips SDIO TV1000/1100 for Windows/Linux, and Sharp has a dual-mode (DVB-H/DMB-T) SDIO tuner, the VA3B5EZ915. The cost of SDIO tuners is relatively high (about $150), so users prefer an integrated DVB-H handset.
5. Are DVB-H transmissions tailored for different screen sizes?
No. The transmissions are in a single resolution, e.g., QVGA. Individual phones need to adapt the display through their players.

CHAPTER 9

Mobile TV Using DVB-SH Technologies

This is the president of the United States speaking. Through the marvels of scientific advance, my voice is coming to you via a satellite circling in outer space. My message is a simple one: Through this unique means, I convey to you and all mankind America's wish for peace on Earth and goodwill toward men everywhere.
Dwight D. Eisenhower, December 18, 1958

9.1 Satellite Mobile TV with a Terrestrial Component
When President Obama took the oath of office, his message also carried clearly on millions of phones, but this time as a video; many of these were satellite phones. A recent survey of mobile receivers sold in Europe found that combo receivers with DVB-T reception and GPS capability were among the most popular gadgets for customers on the move. However, imagine driving across 45 European markets, each with its own DVB-T variant, scanty coverage outside the cities, and multiple encryption types, and the need for a common mobile TV technology with continent-wide access on the move becomes evident.

Figure 9.1: DVB-T receivers find common use for TV and navigation.
© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00009-6



There has always been a strong business case for a broadcast mobile TV service that is available outside the cities in addition to city-based coverage. The success of satellite DMB technologies has demonstrated that satellites can in fact be used to great advantage to cover an entire landmass and deliver a mobile TV service that can be received directly by handheld or vehicle-mounted receivers with a line of sight to the satellite. Such a line of sight is always available in rural areas and outdoors; for indoor coverage, terrestrial repeaters can be used. In addition, people traveling outdoors need GPS and navigation information, and this constitutes one of the fastest-growing segments of the market. A satellite-based mobile TV service that can deliver navigation, maps, and location information directly addresses the fastest-growing segments of the mobile multimedia market. These systems are, however, not limited to outdoor use, as terrestrial repeaters can extend the coverage to areas that are not in direct line of sight of the satellite, i.e., indoor areas. The satellite-based systems based on DVB standards are known as DVB-SH systems, the name indicating a handheld reception capability added to the DVB-S systems. The standards for satellite-based transmissions with a terrestrial component are derived from the DVB-H standards, with some modifications for the additional error resilience made necessary by the low level of the received satellite signals. In DVB-SH, the signals are delivered to the mobiles in the S-band, using the MSS band of the IMT-2000 spectrum, making these systems independent of the crowded VHF/UHF bands. Whereas DVB-H uses COFDM modulation on the terrestrial link, DVB-SH has two modes of operation, as explained in the following section.

9.2 The DVB-SH Standard
The DVB-SH standard was approved by the DVB in April 2007 and has also been adopted by ETSI. The relevant standards are as follows:
● ETSI TS 102 585: System Specifications for Satellite Services to Handheld Devices (SH) below 3 GHz
● ETSI TS 102 584: Guidelines for Implementation for Satellite Services to Handheld Devices (SH) below 3 GHz
● EN 302 583: Framing Structure, Channel Coding, and Modulation for Satellite Services to Handheld Devices (SH) below 3 GHz
The DVB-SH specifications have been framed for potentially establishing two types of satellite-based systems.
DVB-SH-A System Using OFDM-Based S-Band Satellite Transmission
The first type of system is based on the use of a high-powered S-band satellite, which can transmit power directly to mobiles using the COFDM physical layer as in DVB-H, but with


additional error resilience. In this system, termed DVB-SH-A, the terrestrial repeaters, which receive the signal from the satellite, operate in a synchronous mode, creating an SFN. An S-band satellite is the central point of the DVB-SH-A system. As the satellite is required to deliver power directly to mobiles, it needs to be a high-powered satellite specially designed for mobile multimedia applications, with a very high EIRP of about 74 dBW in the coverage area. It also needs to support the COFDM modulation that is received by the DVB-SH receivers on the ground. A DVB-SH broadcast headend uplinks signals to the satellite, which beams them back in the S-band using COFDM or a TDM modulation scheme. On the ground, gap fillers and terrestrial repeaters retransmit the satellite signal to extend coverage to cities and indoor areas. There may also be terrestrial COFDM transmitters in dense urban areas, such as underground locations and other locations out of line of sight of the S-band satellite. We will discuss the specifications of such a satellite in the next section. Figure 9.2 shows the essential elements of a DVB-SH system.

Figure 9.2: A DVB-SH-A system.


DVB-SH-B System Using DVB-S2-Based Satellite Transmission

The DVB also considered an alternate implementation that is not dependent on a high-powered S-band satellite. This type of system uses DVB-S2 transmission on the receive link (satellite to ground), which may be in the Ku-band. The terrestrial transmitters then retransmit the signals in the S-band (i.e., the MSS band) for reception by the mobiles, using the COFDM physical layer. This type of satellite transmission is called a TDM transmission in DVB-SH terminology. Figure 9.3 depicts the mode of operation of such a system.

Figure 9.3: A DVB-SH-B system with only terrestrial transmissions using COFDM.

In a DVB-SH-B system, it is possible to have receivers that receive directly via satellite dishes and DVB-S2 reception; this type of reception is possible in cars and other mobile receivers, and DVB has defined different classes of receivers for this purpose. In the absence of a high-powered S-band satellite, this system can also be used to "seed" transmissions before a satellite can be launched.

9.2.1 DVB-SH-A Ground System
In the rest of the chapter, we will focus on the DVB-SH systems based on S-band transmissions, which can be received directly by handhelds. Figure 9.4 shows the block diagram of the DVB-SH encapsulator and modulator; the shaded areas in the figure highlight the differences between DVB-SH and DVB-H. In the MPE, an additional "extended FEC" is used.


An additional turbo coding is also carried out in the modulator to provide the resilience required for satellite transmissions. This turbo coding, applied over 12K blocks, is based on the 3GPP2 turbo code. The modulator also introduces a 1.7 MHz channelization and a 1K FFT mode for operating the system in the DAB/DMB channelization schemes, which use 1.7 MHz spectrum slots. However, most systems being implemented (e.g., in Europe) use 5 MHz slots based on the MSS IMT-2000 channelization. The ground system provides the DVB-SH signals to a satellite uplink as well as to ground transmitters via a distribution network.

Figure 9.4: IP encapsulator and OFDM modulator in DVB-SH-A systems.

The DVB-SH OFDM modulator also provides a very flexible time interleaver (channel interleaver), which can spread the bits over many frames (time spans from 0.1 second to many seconds) to protect against signal losses due to motion and the resulting loss of line of sight to the satellite. The receiver, however, needs adequate buffering to support long interleaving intervals, and DVB-SH has defined two classes of receivers for this purpose: class 1 receivers can handle only short interruptions on the satellite link, while class 2 receivers are designed to handle long interruptions (up to 10 seconds) by building sufficient memory into the receiver chip.
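The buffering implication of class 2 operation can be estimated with simple arithmetic: the de-interleaver must hold roughly the data spanned by the interleaver. The service bit rate below is an illustrative assumption, not a figure from the text:

```python
# Rough sizing of the de-interleaver buffer a class 2 DVB-SH receiver
# needs: it must hold the data spanned by the time interleaver in order
# to ride out a satellite outage. Figures are illustrative assumptions.

def interleaver_buffer_mbit(service_bitrate_kbps: float, span_s: float) -> float:
    """Buffer size (Mbit) to hold `span_s` seconds of a service."""
    return service_bitrate_kbps * 1e3 * span_s / 1e6

# A 300 kbps service interleaved over 10 s needs ~3 Mbit of buffer,
# which is why long-interleaving support must be designed into the chip.
print(interleaver_buffer_mbit(300, 10))  # 3.0
```

The same calculation scaled to a full multiplex shows why class 1 devices, with much smaller memories, are restricted to short interleaving spans.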


The signals from the IPE are given to two modulators: one for the satellite component (SC) and the second for the complementary ground component (CGC). The satellite component is the key to the start of service using a DVB-SH system, inasmuch as it creates a single nationwide cell for delivery of mobile services. The terrestrial transmitters of the CGC can be added progressively, creating additional cells that preferably operate in an SFN configuration. It is also possible to have a ground network with repeaters that receive signals from the satellite and retransmit them in an SFN mode.

9.2.2 The DVB-SH Physical Layer and Channel Capacity
The DVB-SH specifications permit complete flexibility in system design through selection of parameters such as the Fast Fourier Transform (FFT) size, modulation scheme, and guard intervals to achieve the desired tradeoff between channel capacity, coverage range, and QoS:
● Spectrum Slot Size: The spectrum slot size can be 1.7, 5, 6, 7, or 8 MHz for ease of use in country-specific allocations. The 1.7 MHz option has been added for use of the DAB/DMB spectrum; 5 MHz is the spectrum allocation in Europe in the MSS band and is the most common allocation.
● FFT Size: FFT sizes can be selected from 8K, 4K, 2K, and 1K. A larger FFT size gives higher tolerance to the multipath propagation common in urban environments, but it also means narrower carrier spacing; given the Doppler shifts, which increase with frequency and speed, the 2K size is ideal for use in the S-band.
● Guard Intervals: Guard intervals are used in the time domain between two OFDM symbols. If an OFDM symbol shifts in time due to reflected signals, the symbols do not overlap as long as the shift is less than the guard interval. However, longer guard intervals also reduce system capacity, and in the frequency domain a higher guard interval results in higher subcarrier spacing (see Table 9.1). Instead of being unoccupied, the guard interval carries a cyclic continuation of the symbol, which further reduces interference from adjacent symbols.
● Turbo Code Rate: The turbo code rate, applied in the modulator, can be selected from a range of values, i.e., 1/2, 1/3, 2/3, 1/4, 1/5, 2/5, etc., as specified by EN 302 583.
● Modulation Scheme: The modulation scheme for individual subcarriers can be QPSK or 16 QAM for the DVB-SH system. Hierarchical modulation (QPSK on QPSK) is also possible.
● Framing: At the physical layer, the transmission is organized into OFDM frames, each comprising 68 OFDM symbols; four such frames make a superframe. For data insertion into the OFDM frames, a capacity unit (CU) of 2016 bits is defined, and an SH-frame is defined comprising 816 CUs irrespective of the modulation scheme, FFT size, or guard interval.

Mobile TV Using DVB-SH Technologies


Table 9.1: Physical Layer Parameters and Data Rates of a DVB-SH System.

Parameter             Values
Channel Bandwidths    1.7, 5, 6, 7, 8 MHz
Guard Intervals       1⁄4, 1⁄8, 1⁄16, 1⁄32
Modulation            QPSK and 16 QAM

Parameter Values for 5 MHz Bandwidth and 1⁄8 Guard Interval

FFT Mode                          8 K       4 K      2 K      1 K
OFDM Carriers                     8192      4096     2048     1024
Number of Modulated Carriers      6817      3409     1705     853
Number of Useful Data Carriers    6048      3024     1512     756
OFDM Symbol Duration (μs)         1433.6    716.8    358.4    179.2
Guard Interval (μs)               179.2     89.6     44.8     22.4
Total Symbol Duration (μs)        1612.8    806.4    403.2    201.6
Carrier Spacing (kHz)             0.698     1.395    2.790    5.580
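The 5 MHz column of Table 9.1 can be reproduced from first principles, assuming the DVB-T elementary clock (64/7 MHz for an 8 MHz channel) is scaled in proportion to the channel bandwidth; this is a sketch, not a normative calculation:

```python
# Reproduces the 5 MHz, 1/8 guard interval column of Table 9.1, assuming
# the DVB-T elementary clock (64/7 MHz for 8 MHz channels) scales with
# channel bandwidth.

def ofdm_params(fft_size: int, bandwidth_mhz: float, guard_fraction: float) -> dict:
    fs_hz = (64 / 7) * (bandwidth_mhz / 8) * 1e6   # sampling rate
    t_useful = fft_size / fs_hz                     # useful symbol time (s)
    t_guard = t_useful * guard_fraction             # guard interval (s)
    return {
        "symbol_us": t_useful * 1e6,
        "guard_us": t_guard * 1e6,
        "total_us": (t_useful + t_guard) * 1e6,
        "carrier_spacing_khz": 1e-3 / t_useful,
    }

p_8k = ofdm_params(8192, 5, 1 / 8)   # symbol 1433.6 us, guard 179.2 us
p_2k = ofdm_params(2048, 5, 1 / 8)   # carrier spacing ~2.79 kHz
```

Note how the carrier spacing quadruples from 8 K to 2 K, which is the Doppler-tolerance argument made above for choosing 2 K in the S-band.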

Table 9.2: Framing and Typical Data Rates in an S-Band DVB-SH-A System in the 5 MHz Spectrum Slot.

FFT Size                                         2 K               4 K               8 K
Modulation                                   QPSK    16 QAM    QPSK    16 QAM    QPSK    16 QAM
Data Rate per OFDM Symbol (bits)             3024    6048      6048    12096     12096   24192
Data per OFDM Frame (68 Symbols) (bits)      205632  411264    411264  822528    822528  1645056
Capacity Units (2016 bits each) per Frame    102     204       204     408       408     816
OFDM Frames Needed for One SH-Frame (816 CUs)  8     4         4       2         2       1
Raw Physical Layer Data Rate (Mbps)          7.5     15        7.5     15        7.5     15
Data Rate at MPEG-TS Level with Turbo Code
Rate of 1⁄3 and Guard Interval of 1⁄8 (Mbps) 2.5     5         2.5     5         2.5     5

The mapping between OFDM frames and SH-frames depends, however, on the modulation scheme and FFT size, as these determine the number of bits carried by each symbol. Table 9.2 provides the data bits per symbol and per OFDM frame. It may be seen from this table that one OFDM frame matches one SH-frame only in the 8 K mode with 16 QAM modulation; the OFDM frame then carries 816 CUs. The physical layer data rates and the payload data rates available at the MPEG-TS level are also given in Table 9.2.

● Transmission Parameter Signaling (TPS): Like DVB-H, a DVB-SH system needs to signal transmitter parameters such as code rates, the interleaving scheme used, modulation, DVB-SH mode, and so on. The TPS information in DVB-SH comprises a block of 68 bits, numbered s0 to s67. These bits are transmitted on 68 consecutive OFDM symbols, with each symbol carrying one bit; one OFDM frame of 68 symbols thus carries the complete TPS signaling. Of the 68 bits, only 37 are used for information signaling, the rest serving synchronization and error redundancy.
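The SH-frame bookkeeping described above can be sketched numerically. The helper name below is illustrative rather than a spec-defined term, and the data-carrier counts come from Table 9.1:

```python
# An SH-frame is always 816 capacity units (CUs) of 2016 bits. How many
# 68-symbol OFDM frames that takes depends on the FFT size (number of
# useful data carriers) and the constellation (bits per carrier).

CU_BITS = 2016
CUS_PER_SH_FRAME = 816
SYMBOLS_PER_OFDM_FRAME = 68

def frames_per_sh_frame(data_carriers: int, bits_per_carrier: int) -> int:
    bits_per_symbol = data_carriers * bits_per_carrier
    cus_per_ofdm_frame = bits_per_symbol * SYMBOLS_PER_OFDM_FRAME // CU_BITS
    return CUS_PER_SH_FRAME // cus_per_ofdm_frame

# 2 K QPSK: 1512 carriers x 2 bits -> 102 CUs/frame -> 8 OFDM frames
frames_2k_qpsk = frames_per_sh_frame(1512, 2)
# 8 K 16 QAM: 6048 carriers x 4 bits -> 816 CUs/frame -> 1 OFDM frame
frames_8k_16qam = frames_per_sh_frame(6048, 4)
```

This reproduces the "OFDM Frames Needed for One SH-Frame" row of Table 9.2 for every FFT size and constellation.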

Quick Facts
DVB-SH Standards Status: Published as DVB standards (Blue Book) and ETSI standards
Commercial Status: Launched in the United States (ICO G1); satellite for Europe launched (W2A); European system not yet operational
Frequency Bands: Specified for use below 3 GHz; likely usage in the MSS and DAB spectrum
Physical Layer: OFDM, FFT sizes 1 K, 2 K, 4 K, 8 K
Guard Intervals: 1⁄4, 1⁄8, 1⁄16, 1⁄32
Operational Modes: SH-A, OFDM on satellite and terrestrial links; SH-B, TDM on satellite and OFDM on terrestrial links
OFDM Constellations: QPSK, 16 QAM, hierarchical QPSK
Input Adaptation: 188-byte MPEG-TS packets encapsulated in EFRAMEs of 12282 bits
FEC: Turbo codes (based on 3GPP2) applied to 12282-bit input blocks with code rates of 1⁄5, 2⁄9, 1⁄4, 2⁄7, 1⁄3, 2⁄5, 1⁄2, 2⁄3
Interleaver: DVB-SH convolutional block interleaver

9.2.3 Channel Capacity of a DVB-SH System

A DVB-SH system, apart from the modifications in the IPE and the modulator for additional FEC and turbo coding, is essentially the same as a DVB-H system. A typical system may use H.264 AVC for video and MPEG4-AAC for audio, giving an average bit rate of about 300 Kbps per channel at QVGA (320×240) resolution. At an average bit rate of 300 Kbps per channel, a DVB-SH-A system would typically support about 8–10 channels in the net effective bandwidth made available by the system (2.5 Mbps for QPSK and 5 Mbps for 16 QAM). Table 9.3 shows how the parameters of a system might be set in an actual physical implementation.
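A back-of-the-envelope version of this channel count, treating the 300 Kbps per-channel figure as the working assumption it is:

```python
# Rough service count: net MPEG-TS payload divided by the per-channel
# audio/video rate. The 300 Kbps per-channel figure is the chapter's
# working assumption, not a fixed limit.

def channel_count(payload_mbps: float, per_channel_kbps: float = 300) -> int:
    return int(payload_mbps * 1000 // per_channel_kbps)

qpsk_services = channel_count(2.5)   # 8 channels before signaling/ESG overhead
qam16_services = channel_count(5.0)  # 16 channels before overhead
```

The 8–10 figure quoted in the text sits between these two cases once ESG and signaling overheads are taken into account.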


Table 9.3: An Example Representation of Parameters of a DVB-SH System.

Parameter              DVB-SH Implementation        Parameter                  DVB-SH Implementation
Video Resolution       320×240 (QVGA)               Audio Encoding             MPEG4-AAC
Video Codec            H.264-AVC ISMA profile 2     Audio Sampling Frequency   24 KHz
Frame Rate             25 fps                       Audio Bit Rate             32 Kbps
Video Bit Rate         256 Kbps                     Mobile TV Channels         10
ESG-SH PID, MPE-FEC    123, MPE-FEC 10%             RTP/UDP Port               6050
Network ID             12345                        Cell ID                    123
Turbo Code Rate        1⁄3                          Guard Interval             1⁄8

TV Services (For Examples Only)   Multicast Address/PID    MPE-FEC   Bit Rate
MTV-1                             225.1.1.1/PID 456        5%        300 Kbps
MTV-2                             225.1.1.2/PID 457        5%        300 Kbps
……                                …..                      …         300 Kbps
MTV-10                            225.1.1.10/PID 465       5%        300 Kbps

Quick Facts
Differences Between DVB-H and DVB-SH
● DVB-SH does not support 64 QAM modulation, which is an option in DVB-H.
● DVB-SH provides additional FEC via turbo coding and offers more alternative coding rates.
● DVB-SH introduces a new FFT size of 1 K to support channel bandwidths of 1.7 MHz.
● DVB-SH provides time interleaving over multiple frames, and correspondingly requires a larger buffer in receivers.
● DVB-SH coding and modulation provide an improvement in C/N requirement of 5.5 dB compared to DVB-H. (DVB-SH receivers require lower C/N.)
● DVB-SH supports antenna diversity in terminals.

9.2.4 Commercial Components for DVB-SH

IP encapsulators (IPEs), OFDM modulators, RF transmitters, and test instruments for DVB-SH are available from many vendors. A few of these are described in the following sections.

IP Encapsulator from UDcast: IPE-10

The IPE-10 is an example of an IP encapsulator that can be used in either a DVB-H or a DVB-SH environment. It features two Ethernet (10/100 Base-T) ports carrying H.264 signals from encoders (via an IP switch).


The IPE provides all the requisite functions of time slicing and encryption (OMA-BCAST, DVB OSF/18Crypt, and DRM/smartcard support) per ETSI EN 302 583 and ETSI EN 302 585. For DVB-SH, it adds the SH Initialization Packet (SHIP) functionality and provides two DVB-ASI outputs for input to the OFDM modulators.

Figure 9.5: A depiction of a DVB-SH system implementation.

Another example of an IP encapsulator is the DVBSH 6000 IPE from Unique Broadband Systems. In addition to other functions, it supports a PSI/SI generator and an SFN adapter. The SFN adapter can be set to work with various FFT sizes, guard intervals, bandwidths, and code rates.

DVB-SH OFDM Modulators from Teamcast: MSH 1000 and MSH 2000

OFDM modulators are available from Teamcast: the MSH 1000 is designed to give an IF output, while the MSH 2000 includes an upconverter to provide a signal at RF (S-band).


Quick Facts
Specifications of MSH 1000/2000
Inputs: Two MPEG transport stream inputs in ASI format from the IPE, with redundancy switching. Support for RS code (188/204), bit rate adaptation, and PCR restamping.
Output: IF from 30 MHz to 45 MHz, 0 dBm, 50 Ω; RF 2170–2200 MHz, −10 dBm (MSS S-band) for model MSH-2070; RF 30–860 MHz (VHF and UHF) for model MSH-2010.
Modulation: Support of DVB-SH per EN 302 583. Bandwidths 1.7, 5, 6, 7, and 8 MHz. FFT sizes 1 K, 2 K, 4 K, and 8 K.
Synchronization: Internal clock and 10 MHz external reference. On-board GPS (optional).
Network Configuration: MFN and SFN network operation modes.
Management and Control: Serial protocol or user-developed GUI.

9.2.5 Implementation Guidelines for DVB-SH

The implementation guidelines for DVB-SH are given in DVB Blue Book A117r2-2. These guidelines give the specifications for network IDs, cell IDs, transport stream IDs, and the handover procedures between networks of different types such as DVB-H, DVB-SH (SFN), and DVB-SH (non-SFN).

9.3 Characteristics of Satellites for Mobile Broadcasting

9.3.1 Link Analysis of a Satellite Handheld Receiver

The operation of a handheld mobile receiver from a satellite presents many challenges, ranging from movement of the receiver, with consequent sharp losses in the received signal, to high losses due to the distance from the satellite. The additional turbo coding and the extended FEC in the MPE are among the measures adopted to deal with the satellite link. The selection of the S-band for satellite transmissions, instead of the higher C- and Ku-bands, is in fact meant to limit the free space losses; the S-band is also characterized by very low atmospheric losses. Using an FFT size of 2 K, a turbo code rate of 1⁄3, and a guard interval of 1⁄8, the C/N required by a handheld receiver in the 5 MHz bandwidth is 4.5 dB for QPSK and 9.5 dB for 16 QAM. These parameters can be used to evaluate the power transmission requirements of the satellite, represented by its EIRP.


Table 9.4: EIRP of S-Band Satellite for QPSK and 16 QAM Modes of Operation.

DVB-SH Parameter                        QPSK     16 QAM
C/N Required at DVB-SH Receiver (dB)    4.5      9.5
Link Margin (dB)                        3        3
Free Space Loss (dB)                    191      191
Atmospheric Loss in S-Band (dB)         0        0
Polarization Loss (dB)                  3        3
Total Losses (dB)                       194      194
G/T of Mobile Receiver (dB/K)           −29.5    −29.5
Boltzmann's Constant k (dBW/K/Hz)       −228.6   −228.6
EIRP of Satellite (dBW)                 69.4     74.4

The power received by a receiver on the ground is given by:

C = EIRP − (All Losses) + Gain of Receiving Antenna

where C is the carrier power received at the receiver and EIRP is the effective isotropic radiated power transmitted by the satellite. The losses include the free space loss (FSL, satellite to mobile), polarization loss (3 dB, due to the mobile receiver using a linearly polarized antenna for circularly polarized transmissions), and atmospheric loss. For S-band frequencies in the MSS band (2170 MHz), the free space loss is 191 dB at the maximum slant range of the satellite. The noise power depends on the bandwidth and is expressed as:

N = kTB

where k is Boltzmann's constant, T the absolute temperature, and B the bandwidth. Combining the two, with G/T the figure of merit of the receiver:

C/N (dB) = EIRP − (All Losses) + G/T − 10 log B − 10 log k

Since 10 log k = −228.6 dBW/K/Hz, this can be rearranged as:

EIRP = C/N + All Losses − G/T + 10 log B − 228.6

Table 9.4 provides the satellite EIRP values expected for providing the specified C/N. The calculations are simplistic but indicate a required satellite EIRP of about 74 dBW. Such high EIRP requires a large 12 m reflector at the satellite and can be achieved only over relatively small areas (spots) formed by the reflector. Most S-band satellites for mobile services are being built with 12 m antennas from Northrop Grumman or Harris Corp.
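The link budget above can be checked numerically. The helper below is a sketch using the Table 9.4 values; the sign conventions (G/T of −29.5 dB/K, Boltzmann's constant at −228.6 dBW/K/Hz) are implied by the arithmetic rather than printed explicitly in the table:

```python
import math

# Link-budget sketch reproducing the EIRP values of Table 9.4.
# All arguments are in dB except the bandwidth; 10*log10(k) = -228.6 dBW/K/Hz.

def satellite_eirp_dbw(cn_db, margin_db, losses_db, g_over_t_db, bandwidth_hz):
    """EIRP = C/N + margin + losses - G/T + 10 log B - 228.6 (all in dB)."""
    b_dbhz = 10 * math.log10(bandwidth_hz)
    return cn_db + margin_db + losses_db - g_over_t_db + b_dbhz - 228.6

# QPSK and 16 QAM in a 5 MHz slot, with the 3 dB link margin of Table 9.4:
eirp_qpsk = satellite_eirp_dbw(4.5, 3, 194, -29.5, 5e6)    # ~69.4 dBW
eirp_16qam = satellite_eirp_dbw(9.5, 3, 194, -29.5, 5e6)   # ~74.4 dBW
```

The 5 dB gap between the two results is exactly the difference in required C/N between QPSK and 16 QAM.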


Figure 9.6: Typical characteristics of an S-band satellite for coverage of Europe.

An example of such a satellite is W2A,1 which was launched in April 2009 and is designed to provide DVB-SH services for Europe. The satellite has a rated payload power of 11 kW.

9.4 Ground Transmitters for DVB-SH

The DVB-SH IPE generates a very important packet called the SH-Initialization Packet (SHIP), which is used by all the transmitters to synchronize their transmissions. (This is similar to the MIP used in DVB-T/H systems.) All ground transmitters in a DVB-SH system operate in frame synchronism with the satellite by aligning their frames with the help of the SHIP packet. This lets all the cells operate as SFNs, with all ground transmitters acting as slaves to the satellite-transmitted signals. The ground transmitters must therefore be able to compensate for delays between the regenerated satellite signals and those received via the ground distribution network. Commercially available DVB-SH IPEs such as Unique Broadband Systems'

1. The S-band payload on the W2A satellite could not be brought into service and is likely to be replaced on another future satellite.


Figure 9.7: Satellite and ground transmission in a DVB-SH system.

DVBSH 6000 has an SFN adapter that performs this function. Figure 9.7 shows the SH-cells operating in an SFN network.

9.4.1 Ground Transmitter Components

The DVB-SH ground network is designed to complement the satellite network as well as operate additional multiplexes that are local and unavailable on the satellite. Broadly, it consists of DVB-SH repeaters and gap-fillers.

DVB-SH Repeaters

A DVB-SH repeater consists of a DVB-SH exciter and a power amplifier of 50–200 W. The DVB-SH exciter integrates the S-band receiver, modulator, and upconverter, as well as a GPS receiver. An example is Unique Broadband Systems' line of S-band repeaters. Higher-power repeaters (up to 1 kW) are also available to support multiple multiplexes.

DVB-SH Gap-Fillers

Gap-fillers are relatively simpler components, as they only receive the RF signal using a sharp cutoff filter and retransmit it in the same RF band without any downconversion or demodulation. For this reason, gap-fillers can work for multiple technologies, including UMTS and DVB-SH, which fall in the same band. Most DVB-SH gap-fillers are available with 30 MHz bandwidth for use in Europe. Gap-fillers can also be mounted in UMTS base stations with a diplexer.

9.5 Receiver Characteristics

Two classes of receivers are envisaged in the DVB-SH recommendations, based on their resilience to interruptions of the received signal. Class 1 receivers can handle short interruptions using physical layer mechanisms; for longer interruptions, layer 2 protocols are required to resend the lost data. This is achieved by providing a short time interleaver at the physical layer and a longer time interleaver at the link layer. The physical layer time interleaving is about 200 ms for 16 QAM and 300 ms for QPSK. Class 2 receivers can handle relatively long interruptions at the physical layer itself by using longer time interleaving and the larger buffer memories available in the receiver; interruptions of up to 10 seconds can be handled.

DVB-SH receivers are available from a number of vendors such as Sisda, Teamcast, Quantum, Sagem, and others. DiBCOM has also launched multimode receivers for DVB-SH and DVB-H for the European market, primarily oriented toward the Solaris Mobile DVB-SH offering using W2A. These receivers target different usage environments such as handheld mobiles, automobile receivers, and portable navigation devices.

9.6 The ICO DVB-SH System (mim)

ICO has designed and launched a satellite (G1) intended to provide services termed "mobile interactive media" (mim™). The satellite services operate with terrestrial repeaters and are initially targeted at cars and other vehicles. ICO mim services also provide full interactive navigation capability, with satellite maps and navigation software from TeleAtlas. The FCC has allotted ICO frequency slots of 2010–2020 MHz in the 2 GHz MSS band for the uplink and 2180–2190 MHz for the downlink (20 MHz in total). The spectrum can be used for the mobile satellite service (MSS) and the ancillary terrestrial component (ATC), with FCC approval for the ATC component having been granted in January 2009. The ICO mim services were the first test of hybrid satellite–terrestrial delivery. The initial two terrestrial networks integrated were in Las Vegas, Nevada, and Raleigh-Durham, North Carolina; the DVB-SH chipsets for the trials were provided by DiBCOM. The FCC licenses and the operational ICO G1 satellite enable ICO to launch hybrid S-band services over the entire United States.


Figure 9.8: The ICO G1 system based on DVB-SH is operational and providing mim services.

ICO has an agreement with Qualcomm for chipsets for hybrid S-band operation. The chipsets will provide handoffs from satellite to cellular networks and vice versa, will support CDMA2000/EV-DO, and will be capable of operation in the L- and S-bands. With the incorporation of S-band capabilities in mass-market chipsets such as those for CDMA phones, ICO is readying itself for the mass market.2

9.7 DVB-SH System for Europe

The establishment of a DVB-SH system that would transcend national borders in Europe required a concerted plan and guidelines by the EU in terms of system characteristics and frequency allocation. Accordingly, the European Commission conducted a selection process for pan-European operators providing MSS services. After completion of this process, the European Commission in May 2009 announced the selection of two operators, Inmarsat Ventures Limited and Solaris Mobile Limited, for providing pan-European services (626/2008/EC). Each operator was assigned 15 MHz of the 30 MHz MSS band.

2. On May 15, 2009, DBSD North America, Inc. (formerly ICO North America, Inc.) and its subsidiaries filed for Chapter 11 protection. Future operation of the company may be determined by its restructuring.


9.7.1 The W2A Satellite

As a part of the plan for providing DVB-SH services, Eutelsat and SES Astra jointly invested in an S-band payload with coverage of six countries in Europe. The satellite carrying this payload, W2A, had already been launched earlier in the year. The S-band transmissions from the satellite use 15 MHz of the 30 MHz MSS band, comprising three sub-bands of 5 MHz each. The spot beams have been assigned so that beams over adjacent countries use different frequencies, and each country is free to use the remaining two sub-bands for terrestrial coverage. Solaris Mobile is commercializing the S-band payload of the W2A satellite. In May 2009, however, Solaris Mobile reported a problem in the S-band payload of W2A.

9.7.2 The Inmarsat Europasat

Inmarsat, the second operator assigned 15 MHz of spectrum over Europe, signed an authorization to proceed (ATP) for the manufacture of a satellite with an S-band payload with Thales Alenia Space in August 2008. The payload is designed for both mobile broadcast and two-way telecommunications services. The satellite launch is scheduled for early 2011.

Quick FAQs
DVB-SH
1. Can DVB-H handsets be used for reception of DVB-SH transmissions? No, the DVB-H handsets of today are designed to operate in the VHF/UHF bands and expect FEC and coding in the IPE per DVB-H specifications. Future DVB-SH receivers will have built-in capability for DVB-H reception. An example is the DIB29098-SH chipset from DiBCOM, which supports antenna diversity and DVB-T/H and SH reception.
2. What is the future of DVB-H services if DVB-SH services are launched across Europe? The impact is likely to be country-specific. Not all countries will have DVB-SH coverage. The number of channels on DVB-SH will be limited to 9–12, and DVB-H will continue to provide the bulk of the channels. However, the trend in receivers is likely to be support of multiple technologies.
3. Will ICO G1 receivers work in Europe for DVB-SH reception? No, the ICO G1 service, even though it is based on DVB-SH, uses different frequency bands and is organized as a two-way interactive service. With greater penetration, dual-mode receivers are possible in the future.
4. Are DVB-SH standards recognized in the United States? What is the FCC position on DVB-SH? The FCC has already approved the use of the 20 MHz spectrum for ICO mim services, including the use of the ATC component based on DVB-SH.
5. Can cellular base stations be used as repeaters for DVB-SH? Yes, cellular base stations with DVB-SH repeaters will be one of the important types of repeaters. These can be created by insertion of a simple diplexer and a DVB-SH repeater.


6. Can DVB-SH be used in the UHF band? No, the UHF band is not used for satellite transmissions. However, it is possible to use DVB-SH in the L-band.
7. How do DVB-SH handsets differ from DVB-H in complexity and power consumption? DVB-SH handsets bring considerably greater complexity, antenna diversity technology, resilience to errors, and a lower C/N requirement. The power consumption of such receivers is also higher.
8. Will I be able to get plug-ins (such as USB adapters) for DVB-SH? Yes, the common chipsets for reception of DVB-T/H and DVB-SH are already targeted at a wide range of receivers, from car receivers to mobile phones and USB adapters.

9.8 Future Systems Using DVB-SH Technology

New systems for satellite-to-mobile delivery, with roaming between cellular (3G) and satellite networks, are in the works. TerreStar Corp. and SkyTerra Communications are launching three satellites with 22 m dishes for direct service to mobiles; the first such satellite was launched on July 1, 2009. With high-powered satellites and small spots, the satellite phones can be almost the same size as 3G phones and will be dual-mode (satellite and cellular). A prototype TerreStar handset with Windows Mobile and a QWERTY keyboard is expected to cost in the range of $700. These satellites are designed to cover the United States, Canada, and South America and to use the L-band for transmissions. The satellites are being positioned at 101 degrees and 107.3 degrees.

Figure 9.9: A depiction of the SkyTerra® satellite network for direct mobile usage.


SkyTerra is not, however, planning an ATC to complement the satellite coverage. Instead, it will depend on roaming to cellular networks, probably AT&T, for 3G services. DVB-SH systems have a promising future as one of the technologies that can be uniform across a country: a service available on the move, with multimedia capabilities and roaming between cellular and satellite networks. This is not to say that commercialization of such technologies will be easy, as the new chipsets need to go into phones that are popular with users before the services gain widespread acceptance.

Before We Close: Some FAQs
1. Does DVB-SH use OFDM on the satellite link? Yes, DVB-SH-A uses OFDM as the physical interface; the S-band link is meant for direct reception by mobile handsets. There is also a TDM version (DVB-SH-B), which is designed for links to terrestrial repeaters. The TDM version uses DVB-S2, usually in the Ku-band.
2. Does DVB-SH support a reverse interactive path via satellite? A reverse path via satellite is not part of the DVB-SH standard. However, 3G services using satellites (MSS) in the S-band are possible. These are similar to terrestrial 3G services but provide the return path via the MSS band on the satellite (S-UMTS Phase 2). Systems such as ICO mim are designed for a reverse path via satellite.
3. What is the CruiseCast™ mobile TV service? CruiseCast is a mobile TV service launched by AT&T in June 2009. It uses a roof-mounted phased array antenna that tracks the satellite; the antenna is provided by Raysat and was priced at around $1300 at the time of launch. The service delivered 22 satellite TV channels and 20 satellite radio channels at launch and uses video buffering to provide uninterrupted viewing as the vehicle drives under bridges or other obstacles. CruiseCast does not use DVB-SH.
4. Why is the S-band selected for use in DVB-SH? The S-band enables small omnidirectional antennas to be used in mobile devices, which makes it convenient. Handheld mobile satellite communications use the L-band (as in Inmarsat) or the S-band, and spectrum has been set aside in the IMT-2000 band for such services.
5. What is S2M? S2M is an S-band satellite-based mobile TV service for the Middle East and North Africa (MENA) region. The services will be based on CMMB.


CHAPTER 10

DMB and China Multimedia Mobile Broadcasting (CMMB)

The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts.
Bertrand Arthur William Russell

10.1 Introduction to DMB Services

A new era began in October 2004, when the world's first broadcast mobile TV services went live in Japan with the launch of satellite-based S-DMB services. T-DMB services were soon to follow, launched in December 2005 in Korea. The DMB services were the culmination of many years of work in designing the protocols, air interfaces, and chipsets that would enable the broadcasting of multimedia to mobile devices. Prior to this, mobile TV was available only as unicast delivery through 3G networks, with their attendant constraints. The T-DMB services also preceded the DVB-H services in terms of launch in commercial networks.

Digital mobile multimedia broadcast services, as the name suggests, include the broadcast of multimedia content, including video, audio, data, and messages, to mobile devices. Unlike 3G mobile services, these services are provided by broadcasting the content and hence have the capability to serve an unlimited number of users. DMB services had their origin in Korea and are provided using both terrestrial and satellite modes of transmission. The initiative for the development of DMB services came from the Ministry of Information and Communications (MIC) of Korea, which assigned the Electronics and Telecommunications Research Institute (ETRI) of Korea to develop a system for the broadcast of mobile TV. The standards for S-DMB and T-DMB, which were formulated subsequently, received ETSI approval, paving the way for launches outside of Korea. The S-DMB services were provided by TU Media, which was founded by Korea Telecom; the T-DMB services in Korea were licensed to multiple broadcasters. By the end of 2009, the T-DMB service in Korea had become the world's largest terrestrial broadcast mobile TV service, with over 20 million users. DMB services are based on an enhancement of the DAB (digital audio broadcasting) standard, which has been in use worldwide for radio broadcasts.
The DMB standards use the physical layer, air interfaces, and multiplex structure of DAB to carry MPEG-2 transport streams, which serve as conduits for audio and video data coded in MPEG-4 or other standards.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00010-2


The DAB standards were designed for delivery of CD-quality audio anywhere on the move. For audio the technology used is MPEG-1 Layer 2 MUSICAM and the audio services are provided at a data rate of 384 kbps. The robust design of DAB and availability of spectrum were the primary reasons for an enhancement of the DAB standards to also carry video. The modified standards were formalized under DMB. The new standards have also been standardized by the ETSI under ETSI TS 102 427 and TS 102 428. However, it should be recognized that many of the specifications under the DMB standard remain proprietary, with intellectual property rights (IPRs) held by Korean companies. Mobile TV services using the DMB technology have been launched in Germany and France, and are under implementation in many countries. The DAB, DMB, and DAB-IP services are fundamentally similar in nature, as they share the same spectrum, protocols, and infrastructure.

10.2 A Brief Overview of DAB Services

The DAB services had their origin in 1987 with the formalization of the Eureka 147 Project. The standard for DAB services was formulated as a European standard in 1993 and as an ITU-R standard in 1994. DAB envisages the use of the VHF, UHF, L-, or S-bands. DAB services are organized as "ensembles." Each ensemble is a multiplexed stream of audio and data and occupies approximately 1.55 MHz after orthogonal frequency division multiplexing (OFDM) modulation. Each ensemble can carry about 1.5 Mbps of data, depending on the parameters selected for OFDM modulation, and a typical 6 MHz broadcast "slot" can carry three independent ensembles. A bandwidth of 1.5 Mbps is adequate for five CD-quality audio channels running at 256 kbps each; the codec used is MPEG-1 Layer 2 MUSICAM. Over time, DAB has become a well-established service, and the technologies of OFDM transmission and SFNs have proved quite robust. The service can be received by a large range of devices, both car-mounted and handheld. With the availability of more efficient audio codecs such as AAC, the DAB standards have also adopted AAC audio encoding; the new standards incorporating AAC encoding are referred to as DAB+. MUSICAM audio is likely to be discontinued in future systems.
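The ensemble arithmetic above can be verified quickly; the payload figure below is the approximate value quoted in the text, not an exact DAB capacity:

```python
# Arithmetic behind the DAB ensemble capacity figures quoted above.

ENSEMBLE_KBPS = 1500            # approximate usable ensemble payload (text figure)
MUSICAM_STEREO_KBPS = 256       # one CD-quality MUSICAM stereo service

five_services_kbps = 5 * MUSICAM_STEREO_KBPS    # 1280 kbps
fits = five_services_kbps <= ENSEMBLE_KBPS      # five services fit in one ensemble
ensembles_per_6mhz = 3          # three ~1.55 MHz ensembles in a 6 MHz slot
```

The roughly 220 kbps left over per ensemble is what carries program-associated data and, in DMB, the extra error protection overhead.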

10.3 How Is the DAB Structure Modified for DMB Services?

The DAB frame consists of a multiplex of audio services carried at a constant bit rate (also called the stream mode). The frame is generated by an ensemble multiplexer, which assembles the various audio services, assigning a fixed bit rate to each, and provides the multiplex information in the frame header, which also carries the synchronization information. The frame can also carry data from one or more sources, in which case a variable rate of data carriage is offered through packet switching.


Figure 10.1: DAB frame structure.

In the DAB scheme, the bandwidths allotted to stereo channels are quite wide, i.e., 256 kbps for MUSICAM. This was seen as an opportunity for replacing audio with multimedia channels (audio and video) using encoders of higher efficiency, such as MPEG-4 or H.264 for video and AAC or AAC-HE for audio. This was done by modifying the DAB standards into the DMB standards. In DMB, in addition to the carriage of multimedia in the available bandwidth, it was felt desirable to add an additional layer of coding and convolutional interleaving to give greater forward error correction capability, which is necessary in mobile environments. The ensemble multiplexer of DAB does not carry program information such as the PAT (Program Association Table), which is needed to identify all the streams of video, audio, and data (such as subtitling) associated with a particular program. This was resolved by maintaining the MPEG-2 transport frame structure as a "pipe" to channel programs into the ensemble multiplexer. The broadcasting world is very familiar with the MPEG-2 transport structure, and this approach requires minimal changes on the broadcaster's side. Consequently, the DMB standards have evolved from DAB by letting the ensemble multiplexer (which provides a fixed bit rate, or stream mode) carry MPEG-2 transport streams, which in turn contain multiple programs coded in MPEG-4 or other protocols.


Figure 10.2: Evolution of DMB services based on DAB frame structure.

The use of the conventional MPEG-2 transport stream (TS) also meant that broadcast-level conditional access could be applied with only minor modifications. Once the frame structure was finalized, the only other addition was the extra RS coding and convolutional interleaving applied to the MPEG-2 transport streams, completing the structure of DMB as it operates today. DMB uses RS(204,188) coding and Forney interleaving for additional error protection; this error resilience was built into the multimedia streams to overcome the transmission impairments encountered in the mobile environment. The overall protocol structure of DMB is at considerable variance with that of DVB-H, which is based on IP datacasting of streams of encoded audio, video, and data. The DMB standards, on the other hand, use the DVB-ASI format streams generated by the encoders, which are multiplexed together in an ensemble multiplex. The protocol stacks on which DVB-H services are built are essentially built around the capability of the IP datacasting layer and include RTP streaming, FLUTE file transfer, HTML/XML, and other technologies; the IP layer also gives DVB-H the capability of transmitting data carousels using FLUTE. The DMB standards do not use an IP transport layer, as is the case in DVB-H, but rather rely on the MPEG-2 TS structure carried within the DAB ensemble multiplexer. DMB also does not use any time-slicing scheme for saving power; instead it uses the MPEG SL layer for synchronization and BIFS for additional program-associated data. Figure 10.3 shows the DMB system architecture, including the enhancements made for DMB over the DAB Eureka 147 system.

DMB and China Multimedia Mobile Broadcasting (CMMB) 303

Figure 10.3: The DMB system.

Table 10.1: Transmission Modes in DAB.

DAB Parameter                 | Mode I | Mode II           | Mode III          | Mode IV
Frame Duration (ms)           | 96     | 24                | 24                | 48
Number of Carriers            | 1536   | 384               | 192               | 768
Recommended Frequency Range   | VHF    | L-band (1500 MHz) | S-band (3000 MHz) | L-band (1500 MHz)
Nominal Range for SFN (km)    | 96     | 24                | 12                | 48

The DAB system is characterized by four modes of transmission based on the frame duration and the number of carriers (Table 10.1). It is evident that the VHF band (in mode I) and the L-band (in mode IV) provide the best combination of SFN range and mobility.
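The SFN ranges in Table 10.1 follow from propagation physics: the echo from a distant co-channel transmitter must arrive within the receiver's guard interval. A minimal sketch (pure speed-of-light arithmetic, with no DAB-specific parameters assumed) of the delay each nominal range implies:

```python
# Radio propagates at roughly 0.3 km per microsecond, so the nominal SFN
# range of each DAB mode implies a minimum guard interval that the mode
# must provide. The ranges below are the Table 10.1 figures.
def sfn_delay_us(distance_km: float) -> float:
    """One-way propagation delay between SFN transmitters, in microseconds."""
    return distance_km / 0.3

for mode, range_km in [("I", 96), ("II", 24), ("III", 12), ("IV", 48)]:
    print(f"Mode {mode}: {range_km} km -> guard interval >= "
          f"{sfn_delay_us(range_km):.0f} us")
```

Mode I's 96 km range thus demands roughly a 320 µs guard, which is why it is paired with the long-symbol VHF configuration.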


Figure 10.4: T-DMB ensembles in the 6 MHz band.

The ensembles in DMB continue to serve the same function as in DAB, with the addition that each of the services in an ensemble is error-protected. The overall error protection works out to be quite robust with FEC overheads ranging from 200% to 300%.

10.4 Satellite and Terrestrial DMB Services

Mobile multimedia broadcast services under the DMB umbrella are available for satellite-based delivery (S-DMB) and terrestrial digital multimedia broadcast (T-DMB). Satellite multimedia broadcast services, in Korea and Japan, use the high-power satellite MBSAT operating in the S-band (2630–2655 MHz). The spectrum used by S-DMB is the same as that allocated by the ITU for digital audio broadcasting and hence is available in most countries. Satellite transmissions in the S-band directly to mobiles are possible through the use of the specially designed high-power satellite MBSAT, which has footprints over the major cities of Korea and Japan. The satellite services need to use gap-fillers for coverage of indoor areas and places where the satellite signal strength is not adequate. Despite the high power of the satellite signals, direct reception by mobiles requires more robust techniques for error protection and resilience against transmission conditions. S-DMB uses CDMA-like modulation (System E, CDMA, in Korea) as opposed to the multicarrier OFDM used for terrestrial transmissions. The 25 MHz available on the satellite is then sufficient to provide 11 video channels, 30 audio channels, and up to 5 data channels for delivery over the entire country. The video is carried at 15 fps as opposed to 30 fps in T-DMB. The terrestrial digital multimedia services, on the other hand, are based on the use of the VHF spectrum, which, in the case of Korea, was reserved for such services. T-DMB services can in fact operate in the VHF, UHF, or other bands such as the L- or S-bands, depending on

availability. A 6 MHz analog TV channel slot can carry seven or eight video channels (CIF), 12 audio channels, and up to 8 data channels in a typical operating environment (Table 10.2).

Table 10.2: S-DMB and T-DMB Characteristics.

Parameter                  | S-DMB                                        | T-DMB
Transmission               | Satellite with gap-fillers                   | Terrestrial transmitters
Coverage                   | Countrywide                                  | One city with SFN network
Frequency Band             | S-band (2630–2655 MHz)                       | VHF band (Korea), L-band (Europe)
Modulation Standard        | System E (CDMA), Korea                       | System A, OFDM
Channel Capacity (Typical) | Video: 15 (15 fps); Audio: 30; Data: up to 5 | Video: 6–9 (30 fps) in 6 MHz (3 ensembles); Stereo audio: 12–15 (AAC); Data: up to 8

10.5 DMB Services in Korea

10.5.1 Terrestrial DMB Services

Terrestrial DMB services were planned in Korea by ETRI, which was mandated by the MIC of Korea to provide TV transmission for mobile devices. The standard selected was a modification of the Eureka 147 standard for digital audio broadcasting. The plans for the launch of the T-DMB service, including the standards and spectrum to be used, were finalized in 2003. The basic requirements set out at the time of planning of DMB services were to provide a video service with CIF (352 × 288) resolution at 30 fps and CD-quality stereo audio with 48 kHz sampling. For this purpose, two VHF channels were identified, i.e., channel 8 and channel 12 of the VHF band. Each channel was to be divided into three slots (i.e., three digital channels per analog VHF bandwidth slot) so as to enable up to six T-DMB broadcasters to provide such services. Owing to the subdivision of the 6 MHz band into three slots of 1.54 MHz each and the requirements of the guard band, the gross data rate per digital channel works out to 1.7 Mbps and the usable data rate is around 1.2 Mbps. Accordingly, each provider needs to divide 1.2 Mbps into several audio and video channels. By applying MPEG-4 Part 10 Advanced Video Coding (AVC) (H.264), two to three television channels, or one television channel plus several audio and data channels, can be accommodated in one ensemble multiplex.
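The 1.2 Mbps budget can be sanity-checked in a few lines; the per-service bit rates below are illustrative assumptions, not figures from the T-DMB standard:

```python
# One T-DMB provider has ~1.2 Mbps of usable payload in its ensemble slot.
USABLE_KBPS = 1200

def fits(services: dict) -> bool:
    """True if the proposed service mix fits within the usable payload."""
    return sum(services.values()) <= USABLE_KBPS

# Hypothetical service mixes (rates in kbps):
one_tv_mix = {"video CIF/H.264": 768, "audio BSAC": 128,
              "audio 2": 128, "data/BIFS": 96}               # total 1120 kbps
two_tv_mix = {"video 1": 550, "video 2": 550, "audio": 128}  # total 1228 kbps

print(fits(one_tv_mix))   # one TV channel plus audio and data fits
print(fits(two_tv_mix))   # this mix does not; video rates must come down
```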


Figure 10.5: T-DMB in Korea.

The T-DMB services are broadcast by standard high-power VHF transmitters with 1–2 kW of emission power. This obviates the need for gap-fillers for most locations except tunnels, etc.

Figure 10.6: Terrestrial DMB phones. (Courtesy of LG)

The T-DMB services are broadcast using terrestrial transmitters, but also rely on gap-fillers for transmission inside subways, malls, and areas not served well by the terrestrial transmitters. The T-DMB services began in December 2005 and are being provided free of charge at present in Korea. There were initially six permitted broadcasters—KBS, MBC, SBS, YTN, Korea DMB Co., and U1 Media. Of these, KBS, MBC, and SBS are the national television networks. The parameters of services being provided by two of the broadcasters are as follows:

● KBS (Korean Broadcasting System): video (CIF 352 × 288) at 30 fps and BSAC audio at 48 kHz/128 kbps stereo.
● SBS (Seoul Broadcasting System): video (QVGA 320 × 240) at 15 fps and BSAC audio at 48 kHz/128 kbps stereo.

The channels broadcast are largely derived from the nationally broadcast channels and are available free to air. For example, KBS Korea broadcasts two TV channels based on national channels KBS TV1 and KBS TV2, a 24-hour music channel, and a data channel on its T-DMB transmissions. SBS provides its mobile TV offering under SBS-U (SBS-Ubiquitous). This has one TV channel (with all prime shows featured), three audio channels (music, local news, and traffic), and three data channels.

Figure 10.7: T-DMB operators.

Table 10.3: Highlights of T-DMB Standards.

Video Coding       | H.264 (MPEG-4 AVC Part 10) Baseline Profile @ Level 1.3
Audio Coding       | MPEG-4 Part 3 BSAC (Bit-Sliced Arithmetic Coding)
Multiplexing       | M4 on M2 (MPEG-2 TS carrying MPEG-4 SL)
Channel Coding     | Reed–Solomon with convolutional interleaving
Transmission Layer | DAB (Eureka 147, stream mode)
Auxiliary Data     | MPEG-4 BIFS Core 2D Profile

Initially, 13 transmitters covered the metropolitan areas, along with 6 additional regions. Currently, however, all of South Korea is covered by T-DMB transmissions.

10.5.2 T-DMB Standards

Highlights of the T-DMB standards followed in Korea are given in Table 10.3. The T-DMB standards have been approved by ETSI, and the T-DMB specification is reflected in ITU-R Rec. BT.1833 (2007). The technology of DAB, which forms the physical transmission layer, has already been tested widely and is in use in a number of countries. The T-DMB standards make use of efficient compression algorithms under H.264 and thus permit the carriage of even VCD-quality video at 352 × 288 resolution at a full frame rate of 30 fps. Where higher resolution than QCIF or QVGA is required, T-DMB video is coded at 640 × 480 (VGA-quality) resolution at 30 fps using H.264 (MPEG-4 AVC) codecs. The profile used is the baseline profile at level 1.3. The coding of audio (CD-quality stereo) uses MPEG-4 ER-BSAC. In addition, auxiliary data (e.g., text and graphic information) can be transmitted using the MPEG-4 BIFS specifications. The specifications also cover the carriage of legacy DAB services such as CD-quality audio (DAB MUSICAM) and slideshow/interactive services using the BWS and EPG protocols, while also providing for upgrades to this technology. Whereas standard DAB MUSICAM audio is carried at 384 kbps, the optional higher-compression codec makes it possible to carry it at 96 kbps. Slideshows can be carried using the MPEG-4 BIFS format. Although the T-DMB services do not have the time-slicing feature of DVB-H, the fact that they deal with lower-frequency transmissions with a lower bandwidth of 1.54 MHz per carrier helps keep the tuner power low. The launch of T-DMB services in Korea was preceded by considerable work in the development of chipsets, handsets, and technologies. LG and Samsung have been active partners in the launch of the T-DMB services, which are free-to-air.

T-DMB services in Korea are also characterized by a high level of interactivity provided through services such as traffic and traveler information, television mobile commerce, and audio–video-synchronized data.

Figure 10.8: DMB interactive services.

The chipsets developed for the service support return channels via the CDMA networks widely used in Korea or via GPRS, EDGE, Wi-Fi, or WiBro networks. The technology of MPEG-4 BIFS and the use of middleware such as Java and BREW have been helpful in presenting applications with animations and graphics, enhancing their user appeal.

10.5.3 Satellite DMB Services

Satellite DMB services in Korea had their origin in the planning and launch of a specialized high-powered S-band spot-beam satellite (MBSAT) for video, audio, and data services. The satellite was designed specifically to provide coverage of Korea and Japan while avoiding interference to other countries through the use of a 12 m offset paraboloid reflector. A beam in the shape of the territories covered was achieved using a multiple-element feed array. The large reflector along with the high-power electronics delivers a high effective isotropic radiated power (EIRP) of 67 dBw, which enables handheld mobiles to receive the signals directly. Areas inside buildings, in subway tunnels, and the like are covered using gap-fillers, which also operate in the S-band. For comparison, it is interesting to note that Ku-band direct-to-home systems using the FSS band use an EIRP of around 52 dBw in conjunction with 60 cm receiver dishes. BSS-band satellites such as EchoStar have an EIRP of 57 dBw. The EIRP of 67 dBw is 10 dB higher than the

310

Chapter 10

highest-powered Ku-band systems, i.e., a power level that is 10 times higher. This satellite is somewhat unique in this regard, and hence S-DMB-type services elsewhere in the world would depend on the availability of similarly high-powered, specially designed satellites. The S-band geostationary satellite is jointly owned by MBCo Japan and SK Telecom of Korea and was manufactured by SS/Loral, based on the FS-1300 bus.
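The decibel arithmetic behind these comparisons is worth making explicit: a difference in dB maps to a linear power ratio of 10^(dB/10).

```python
# Convert an EIRP difference in dB to a linear power ratio.
def db_to_ratio(db: float) -> float:
    return 10 ** (db / 10)

print(db_to_ratio(67 - 57))         # MBSAT vs. a 57 dBw BSS satellite: 10.0x
print(round(db_to_ratio(67 - 52)))  # vs. a 52 dBw Ku-band FSS system: ~32x
```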

Figure 10.9: MBSAT for S-DMB services.

10.5.4 Transmission System

The technical system for satellite DMB services is designated System E (ITU-R BO.1130-4) and is based on CDMA modulation. However, this air interface is not identical to the CDMA used in 3G phones. The transmission system has been designed for a Ku-band uplink and an S-band downlink. The mobile transmission uses a code division multiplexing scheme with interleaving, RS coding, and forward error correction. The satellite signals, however strong, cannot reach deep inside buildings, tunnels, and other covered spaces, and a range of gap-fillers has been developed to retransmit the signals in the S-band. The gap-fillers receive their signals from the Ku-band transmission of the satellite.


Figure 10.10: MBSAT mobile broadcasting system.

The satellite S-band transmissions (direct to the mobiles) are in the frequency band 2.630 to 2.655 GHz, with a bandwidth of 25 MHz. The high level of error protection, however, allows a transmission capacity of only 7.68 Mbps. This is sufficient to handle 15 video services and a mix of audio and data services. For the coverage of indoor areas, the S-band repeaters receive signals from the satellite in the Ku-band at 12.214 to 12.239 GHz. The dual coverage of satellite and terrestrial repeaters ensures that the signals can be received in metropolitan areas, which are characterized by tall buildings, tunnels, and obstacles that prevent a direct line of sight to the satellite. The launch of Korean DMB services, first in S-DMB format and then in T-DMB, placed Korea among the countries keen to adopt innovative technologies.
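Those two figures imply a very low spectral efficiency, which quantifies how much raw capacity is traded away for ruggedness:

```python
# Usable capacity vs. occupied spectrum for the S-DMB downlink.
capacity_bps = 7.68e6    # usable rate after error protection
bandwidth_hz = 25e6      # S-band allocation
efficiency = capacity_bps / bandwidth_hz
print(f"{efficiency:.4f} bps/Hz")  # ~0.31 bps/Hz: most symbols carry FEC
```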

10.6 DMB Services Ground Segment

The ground segment of a DMB station aggregates video-, audio-, and data-based programming from a number of sources (or different broadcasters). Video is encoded using MPEG-4 or H.264, and audio is coded as AAC or BSAC. The following are the basic parameters:

● video—MPEG-4/AVC (H.264)
● audio—MPEG-1/2 Layer I/II or BSAC/AAC


The encoder outputs (ASI) are then placed in an MPEG-2 TS framework and have all parameters assigned to them, such as PAT, PMT, and ESG. The service is also encrypted at this stage by a CA system (such as Irdeto Mobile, used in Korean S-DMB). The MPEG-2 stream is then placed in a DAB layer after the RS coding and convolutional interleaving, which is carried out in a unit called D-VAUDX. The signal stream is then OFDM-modulated and transmitted. In the case of S-DMB, for reception by repeaters, CDMA modulation is used instead of OFDM transmission.

Figure 10.11: Mobile TV ground segment in T-DMB.

10.7 S-DMB System Specifications

The S-DMB system follows these specifications:

1. Compression layer
   • Video: H.264/MPEG-4 Part 10 AVC baseline profile at Level 1.3
   • Audio: MPEG-2 AAC
   • Auxiliary data: MPEG-4 BIFS Core 2D profile
2. Multiplexing layer: M4 on M2
   • MPEG-4 SL
   • MPEG-2 TS (PES)
3. Channel coding layer
   • Reed–Solomon coding (204,188)
   • Convolutional interleaving
   • BER performance: less than 10⁻⁸
4. Transmission layer
   • DAB (Eureka 147)

10.8 DMB Trials and Service Launches The DMB services have witnessed trials in over 11 countries, and all indications are that the services will see strong growth despite considerable focus on DVB-H by broadcasters and some handset manufacturers.

Figure 10.12: DMB trials and commercial launches.

After the launch of services in Korea, the next major milestone was the trial launch of services in Germany, Italy, and Finland using T-DMB technologies (prior to the FIFA World Cup 2006). The German trial involved the broadcaster MFD (Mobiles Fernsehen Deutschland) in cooperation with T-Mobile, and the services were based on a T-DMB offering. Mobile TV services based on T-DMB were available in six cities in Germany by the end of 2007. However, MFD discontinued these services in 2008 and planned a relaunch based on DVB-H through the joint venture Mobile 3.0; Mobile 3.0 subsequently ceased its DVB-H services as well in autumn 2008. China had granted four licenses for T-DMB services in Band III (Beijing, Guangdong, Dalian, Zhengzhou) and one experimental license in Shanghai in the L-band. The Chinese


DMB standard is their own version of T-DMB (the SARFT-approved FY/T 214-2006), which features COFDM. The standard is called DMB-T/H and was developed by Tsinghua University in Beijing and Jiaotong University in Shanghai. The standard has both a single-carrier and a multicarrier option. The service is supported in both VHF and UHF bands with 8 MHz channel spacing. In Beijing, five transmission sites were operational in 2008, with the number expected to rise to 10 by the end of 2009. Four ensembles were being transmitted in 8 MHz, carrying 16 audio and 6 multimedia programs. T-DMB services in France were launched through the cooperation of the mobile operator Bouygues Telecom and the TV broadcasters TF1 and VDL. VDL has been providing DAB services in France since 1998. Bouygues Telecom has been providing i-mode services in France since 2002 and operates an EDGE network. The handsets for the service are provided by Samsung.

10.9 China Multimedia Mobile Broadcasting (CMMB)

Just as it appeared that the launch of mobile TV in China would be delayed due to multiple homegrown standards vying for center stage, SARFT set the ball rolling in 2007 by mandating CMMB as the primary standard for mobile multimedia broadcasting. An ambitious nationwide program for terrestrial broadcasters based on CMMB followed, with the Summer 2008 Olympics getting a glimpse of the new technology. China Satellite Mobile Broadcasting Corp. (CSMB), a state agency under SARFT, became the agency for nationwide rollout of the CMMB transmissions, which are in principle based on a dual-mode satellite–terrestrial transmission.1 However, few were prepared for the massive growth that was unleashed following the award of 3G licenses in January 2009 to the three companies that remained after the merger of mobile operators, i.e., China Mobile, China Telecom, and China Unicom. This—coupled with the policy of the MIIT to grant network access to 3G (TD-SCDMA) phones that have CMMB reception capabilities—made for exponential growth in CMMB handsets.

10.9.1 CMMB Technology

CMMB has many similarities with DVB-H technology. It uses a transport stream very similar to MPEG-2, called the CMMB-TS. The modulation of the carriers is OFDM, with 4K carriers in 8 MHz of bandwidth. There is also a 2 MHz bandwidth mode, with 1K carriers, enabling the use of DAB spectrum with 1.7 MHz carriers. As CMMB is based on OFDM modulation, it is possible to build SFNs using SFN adapters. CMMB uses H.264 (base profile) video codecs with QVGA encoding at 30 fps, and HE-AAC (AACv2) audio encoding. Figure 10.13 shows a typical CMMB transmission setup. 1

The current transmissions are all terrestrial, with the satellite launch scheduled in 2010.


Figure 10.13: CMMB encoding and transmission system.

The transmission system is very similar to DVB-H, except that a CMMB multiplexer is used, which generates a CMMB transport stream (instead of an MPEG-2 transport stream). Encoders for CMMB (such as Envivio Mobile Series 4Caster M2) provide multistandard encoding, suitable for DVB-H, DMB, or CMMB systems. The OFDM modulator needs to be able to support SFNs and MFNs per network design.

Quick Facts
CMMB System Characteristics

● Transport Stream: CMMB-TS
● Bandwidths: 8 MHz and 2 MHz
● Modulation: CMMB (GY/T 220.1-2006) OFDM
● Carriers: 4K (8 MHz bandwidth), 1K (2 MHz bandwidth)
● Modulator Byte Interleave Modes: Mode-1, Mode-2, Mode-3
● RS Coding: (240,240), (240,224), (240,192), (240,176)
● LDPC Bit Rate: 1/2, 3/4
● Guard Interval: 1/4, 1/8
● Constellation: BPSK, QPSK, 16 QAM
● Max Time Slots: 40
● Transmitter Networks: SFN or MFN
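Combining the OFDM characteristics of Table 10.5 (3077 data subcarriers, 460.8 µs total symbol duration) with one choice of constellation and code rates from the box above gives a rough payload estimate. Sync and control overheads are ignored here, so real figures are somewhat lower:

```python
# Back-of-envelope CMMB payload estimate for the 8 MHz mode.
DATA_SUBCARRIERS = 3077
SYMBOL_S = 460.8e-6  # total OFDM symbol duration, cyclic prefix included

def payload_bps(bits_per_subcarrier, ldpc_rate, rs_n=240, rs_k=224):
    raw = DATA_SUBCARRIERS * bits_per_subcarrier / SYMBOL_S
    return raw * ldpc_rate * (rs_k / rs_n)

print(f"QPSK, LDPC 1/2:  {payload_bps(2, 0.5) / 1e6:.2f} Mbps")   # ~6.2 Mbps
print(f"16QAM, LDPC 3/4: {payload_bps(4, 0.75) / 1e6:.2f} Mbps")  # ~18.7 Mbps
```

The second figure exceeds the quoted 16 Mbps maximum precisely because the estimate omits the synchronization symbol and control-channel overhead.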


Table 10.4 outlines the basic features of CMMB in comparison to other technologies.

Table 10.4: Features of CMMB in Comparison with Other Mobile TV Technologies.

Parameter              | CMMB                             | DVB-H                    | T-DMB                                | S-DMB                                | ISDB-T/1-Seg             | FLO                      | ATSC M/H
Bandwidth              | 8 MHz / 2 MHz                    | 6–8 MHz                  | 2 MHz                                | 25 MHz                               | 430 kHz                  | 6 MHz                    | 6 MHz
Video Coding           | H.264 (BP), QVGA, 30 fps         | H.264 (BP), QVGA, 25 fps | H.264 (BP), QVGA, 30 fps             | H.264 (BP), QVGA, 30 fps             | H.264 (BP), QVGA, 15 fps | H.264 (BP), QVGA, 30 fps | H.264 (BP), QVGA, 30 fps
Audio Coding           | HE-AAC (AACv2)                   | HE-AAC (AACv2)           | BSAC, HE-AAC (AACv2), MPEG-1 Layer 2 | BSAC, HE-AAC (AACv2), MPEG-1 Layer 2 | HE-AAC (AACv2)           | HE-AAC (AACv2)           | HE-AAC (AACv2)
Multiplex              | CMMB-TS                          | MPEG-2 TS                | MPEG-2 TS                            | MPEG-2 TS                            | MPEG-2 TS                | MPEG-2 TS                | MPEG-2 TS
Modulation             | 4K OFDM (8 MHz), 1K OFDM (2 MHz) | 2K/4K/8K COFDM           | 1K OFDM                              | CDMA                                 | OFDM (1-seg)             | 4K OFDM                  | 8-VSB
Channel Coding         | RS + LDPC                        | RS + convolutional       | RS + convolutional                   | RS + convolutional                   | RS + convolutional       | RS + turbo               | RS + convolutional (trellis)
Max Transmission Rate  | 16 Mbps (8 MHz), 3 Mbps (2 MHz)  | 12 Mbps                  | 1.5 Mbps                             | 7 Mbps                               | 312 kbps                 | 11 Mbps                  | 19.39 Mbps
Channel Switching Time | ~1 sec                           | ~3–5 secs                | ~1 sec                               | ~1 sec                               | ~1 sec                   | ~2 secs                  | ~3–4 secs
Interleaving Depth     | ~1 sec                           | 0.25 secs                | 1.5 secs                             | 3.55 secs                            | 0.25 secs                | 0.75 secs                | 0.25 secs

10.9.2 CMMB Multiplexer

The CMMB multiplexer provides encapsulation of audio, video, data, and control information per the CMMB specifications, Parts 1 and 2. It generates TS packets (ASI format) that are delivered to the modulator. The multiplexer can be used at a central site for "global" multiplexing of signals to be delivered to a wide area, or at local sites to add local channels. The CMMB multiplexer also generates tables similar to those of an MPEG-2 multiplex—e.g., the NIT, the CMMB Multiplexer Control Table (CMCT), the CMMB Service Control Table (CSCT), etc. The CMMB frame has a well-defined PHY and logical channel structure. A CMMB frame is 1 second in duration and has 40 time slots at the PHY level. At the logical level, it has a control channel that can be used to allocate resources among the various services via the available time slots. CMMB thus has a very strong resource allocation strategy. The 40 slots at the PHY level are mapped to a total of 53 OFDM symbols plus a symbol for fast synchronization.
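This slot structure is also the basis of receiver power saving: the tuner wakes only for the slots carrying its service. A sketch with illustrative slot counts (the per-service allocations are assumptions, not CMMB figures):

```python
# A CMMB frame is 1 second long and carries 40 time slots at the PHY level.
SLOTS_PER_FRAME = 40

def tuner_duty_cycle(slots_assigned: int) -> float:
    """Fraction of each 1 s frame during which the tuner must be active."""
    return slots_assigned / SLOTS_PER_FRAME

print(tuner_duty_cycle(2))   # a 2-slot service keeps the tuner on 5% of the time
print(tuner_duty_cycle(10))  # a 10-slot bundle: 25%
```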

Figure 10.14: CMMB multiplex structure.

An example of a CMMB multiplexer that can provide these functions is Unique Broadband Systems CMMB6000. Remultiplexers are also available for channel management at the ASI level. The Cello-Mux CMMB remultiplexer from Innofidei is an example.

Table 10.5: Characteristics of OFDM in CMMB.

Total Bandwidth        | 8 MHz
Useful Bandwidth       | 7.512 MHz
Data Subcarriers       | 3077
Subcarrier Separation  | 2.44 kHz
Symbol Duration        | 460.8 µs
Cyclic Prefix          | 53.6 µs
Useful Symbol Duration | 409.6 µs
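These figures are mutually consistent, as a quick check shows: the subcarrier separation is the reciprocal of the useful symbol duration, and the useful bandwidth is approximately the data subcarrier count times that separation.

```python
# Cross-check of the Table 10.5 OFDM figures.
useful_symbol_s = 409.6e-6
data_subcarriers = 3077

subcarrier_sep_hz = 1 / useful_symbol_s
print(f"{subcarrier_sep_hz / 1e3:.2f} kHz")                     # ~2.44 kHz
print(f"{data_subcarriers * subcarrier_sep_hz / 1e6:.2f} MHz")  # ~7.51 MHz
```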

CMMB multiplexers have provision for encryption (including simulcrypt), ECM and EMM injection, and emergency broadcast inputs, in addition to remote management and control table management.

10.9.3 CMMB Modulators

Modulators in CMMB generate the OFDM carriers per the standards specification; Table 10.5 lists the key characteristics. CMMB modulators are available from a number of vendors. An example is the range of CMMB modulators from TeamCast (MMB1340 and CMMB2000), which provide IF and RF output, respectively. The RF output can be in the UHF range (470–860 MHz) or the S-band (2635–2660 MHz), based on the model selected. The modulators can extract the TOD message for SFN synchronization, and they provide FEC (LDPC) at 1/2 or 3/4 code rates in addition to the outer RS coding.

Quick Facts
CMMB Standards

● Transport Stream: CMMB-TS
● GY/T 220.1-2006 Mobile Multimedia Broadcasting Part 1: Frame structure, channel coding, and modulation for broadcasting channels
● GY/T 220.2-2006 Mobile Multimedia Broadcasting Part 2: Multiplexing
● GY/T 220.3-2007 Mobile Multimedia Broadcasting Part 3: Electronic Service Guide
● GY/T 220.4-2007 Mobile Multimedia Broadcasting Part 4: Emergency Broadcasting
● GY/T 220.5-2008 Mobile Multimedia Broadcasting Part 5: Data Broadcasting
● GY/T 220.6-2008 Mobile Multimedia Broadcasting Part 6: Conditional Access
● GY/T 220.7-2008 Mobile Multimedia Broadcasting Part 7: Technical Specifications of Receiving and Decoding Terminals
● GY/T 220.8-2008 Mobile Multimedia Broadcasting Part 8: Technical Requirements and Measurement Methods for the Multiplexer




10.9.4 Transmitter Networks in CMMB

CMMB standards have been designed to facilitate an SFN or MFN network of transmitters. For this purpose, the modulators need to mark a particular OFDM symbol that is used by all repeaters to align their output. The CMMB2000 modulators (TeamCast), for example, insert a null symbol that is used by all transmitters to start their frames. As the transmitters are GPS-synchronized, the rest of the frames contain symbols that are time-synchronous. Transmission system design in CMMB is similar to that for DVB-H networks, with coverage in different terrains being computed based on urban or suburban Hata models.

10.9.5 SFN Operation in CMMB

In general, SFN operation in OFDM systems is enabled by the cyclic prefix, which allows the demodulators to tolerate the delay spread between transmitters. In the case of CMMB, the cyclic prefix is 53.6 µs, which gives a maximum distance between SFN transmitters of about 16 km. Many of the CMMB installations in China, such as Beijing and Shanghai, use SFN networks with six or more transmitters.
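The 16 km figure follows directly from the prefix duration and the speed of light:

```python
# Maximum SFN transmitter separation allowed by the CMMB cyclic prefix.
C_KM_PER_S = 3.0e5          # speed of light, km/s
cyclic_prefix_s = 53.6e-6   # from Table 10.5
max_sfn_km = C_KM_PER_S * cyclic_prefix_s
print(f"{max_sfn_km:.1f} km")  # ~16.1 km
```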

10.9.6 STiMi Satellite

The CMMB services architecture involves a satellite-delivered broadcast in the S-band, with UHF terrestrial repeaters and gap-fillers. The satellite signals are expected to expand the reach nationwide, with today's terrestrial transmitters covering urban areas and prefecture-level cities. Figure 10.15 shows the STiMi architecture for CMMB. The satellite launch for the S-band broadcast has been delayed, with likely availability in 2010.

10.10 The DTMB Standard

Even though CMMB is the standard used for large-scale deployment of mobile TV in China, the DTMB standard (GB 20600-2006) is the national Chinese standard for terrestrial broadcasting and can support all screen sizes from mobile TV to HDTV. The most common use of DTMB is the replacement of analog transmissions with digital terrestrial broadcasting, primarily for large screens. DTMB has two transmission modes: a single-carrier mode and a multicarrier mode. In multicarrier mode, it uses time domain synchronous OFDM (TDS-OFDM); in single-carrier


Figure 10.15: STiMi satellite–terrestrial architecture for CMMB services.

mode it uses VSB modulation, which varies from 4 QAM to 32 QAM. DTMB is thus a hybrid system, following the ADTB-T (Advanced Digital Television Broadcast–Terrestrial) standard for single-carrier mode and the DMB-T/H (Digital Multimedia Broadcast–Terrestrial/Handheld) standard for multicarrier mode. The transmission uses an RS outer code and a low-density parity check (LDPC) inner code. Based on the modulation scheme adopted, a 6 MHz spectrum slot can support bit rates from 4.8 to 21.9 Mbps. The two transmission schemes (VSB and TDS-OFDM) coexist in the DTMB system. Table 10.6 provides the parameters of the DTMB system. Modulators such as the TeamCast MCX-2000 support both DTMB and CMMB modulation in the same device, and provide for either single-carrier or TDS-OFDM modulation for DTMB, in SFN or MFN networks. A number of demodulator chips are available for STB, IDTV, or mobile receivers with DTMB reception using either the single-carrier mode or the TDS-OFDM mode. An example of a multimode chip is the Legend Silicon LGS-8G52.

Table 10.6: Characteristics of a DTMB System.

Parameter              | Single-Carrier Mode                          | Multicarrier Mode
Transmission           | Vestigial sideband (VSB)                     | TDS-OFDM
Modulation             | 4 QAM, 16 QAM, 32 QAM                        | 4 QAM, 16 QAM, 64 QAM
Outer Code             | RS(204,188)                                  | RS(204,188)
Inner Code             | LDPC (7493,3048), (7493,4572), (7493,6096)   | LDPC (7493,3048), (7493,4572), (7493,6096)
Bandwidth              | 6 MHz or 8 MHz                               | 6 MHz or 8 MHz
Guard Time (OFDM)      | 1/4, 1/7, 1/9                                | 1/4, 1/7, 1/9
Data Rate (6 MHz Slot) | 4.8 to 21.9 Mbps                             | 4.8 to 21.9 Mbps

Before We Close: Some FAQs

1. What range of coverage do SFNs in T-DMB provide?
   As T-DMB operates in the VHF band, the coverage of an SFN, depending on antenna height, can extend to about 100 km with, say, an ERP of 50 kW. However, due to the low frequency, larger receive antennas are required on handsets.
2. T-DMB does not use time slicing like DVB-H. What is the mechanism for power saving?
   The bit rate and bandwidth of a T-DMB ensemble are very limited (about 1.2 Mbps), sufficient to carry only one or two video channels, so time slicing is not required. The low tuner bandwidth leads to power savings similar to those of DVB-H.
3. There are six T-DMB operators in Korea. Is it possible for a mobile phone to tune to any of the six services?
   Yes.
4. What is the mechanism of power saving in CMMB?
   CMMB provides for time slots in the multiplex structure. The tuner is active only during the designated time slots.
5. Does CMMB provide statistical multiplexing of channels?
   No, but the CMMB framing provides allocation of time slots based on the bit rate requirements of individual services, which makes it very efficient in resource allocation.
6. Does S-DMB use an OFDM air interface like DVB-SH?
   No, S-DMB uses a code division multiplex (CDM) carrier of 25 MHz bandwidth. In fact, it is the only such system that does not use OFDM. One reason for this is that it operates in the 2.63–2.655 GHz band and is not limited to the 2.170–2.2 GHz band for multiple channels. Also, its terrestrial gap-fillers are in the S-band and thus do not require the use of 6 MHz or 8 MHz UHF slots.
7. Are there dual-mode handsets available that can work for T-DMB and S-DMB?
   Yes, dual-mode handsets are available, such as the Samsung SGH-P900.

This page intentionally left blank

CHAPTER 11

Mobile TV Using MediaFLO™ Technology

Thus, the task is, not so much to see what no one has yet seen, but to think what nobody has yet thought, about that which everybody sees.
Erwin Schrödinger

11.1 Introduction to MediaFLO

In March 2007, customers in over 20 markets in the United States were in for an entirely new TV viewing experience. The event was the launch of the VCAST service by Verizon, which in its initial launch carried only eight live TV channels. However, the viewing experience on these channels was entirely different from that of 3G-based networks. The service was available on the Samsung SCH-u620 and cost $15 per month. The video displayed on mobile screens was near-perfect, with much higher resolution than 3G and a frame rate of up to 30 fps. The service was based on MediaFLO technology, a name that few had heard at the time. Later, in May 2008, AT&T also launched its mobile TV service (called AT&T Mobile TV) based on the same technology. The services were delivered in UHF channel 55 in the 700 MHz band, which is owned by Qualcomm Inc. in a number of markets. MediaFLO technology has many positives. First, the technology was designed from the ground up for mobile TV rather than as an add-on to the MPEG-2 transport stream for terrestrial TV. It features QVGA (320 × 240) encoding of video using H.264 and very efficient statistical multiplexing of the TV channels. This enables a 6 MHz spectrum slot to carry over 20 streaming video channels. MediaFLO also has features for power saving in mobile phones, an EPG with a fast channel change of approximately 2 seconds, and the capability to carry data such as tickers or weather information. The transmission of mobile TV signals is based on a "forward link only" (FLO) air interface, which is very robust and is a TIA and ETSI standard (TIA-1099-A and ETSI TS 102 589). The interface features layered coding and modulation, which enables receivers to achieve the best quality possible under transmission conditions and to gracefully reduce bit and frame rates as transmission conditions become adverse.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00011-4

11.2 How Does MediaFLO Work?

MediaFLO technology is an end-to-end system envisaged and designed by one company, Qualcomm, and that is what makes the entire system very efficient. A MediaFLO system



takes over the content from broadcasters (both local and national) and is then responsible for its decoding, re-encoding, and delivery via the FLO air interface to receiver devices, where it is decoded using FLO-enabled tuners and baseband chipsets. A top-level architecture of the system is given in Figure 11.1. Live content is received from content providers via satellite or fiber. It is then decoded, and the video and audio are processed in the MediaFLO Transcode and Multiplex System (MTMS). In stark contrast to the multiple hierarchies of a "mobile channel multiplex" followed by a "transport stream multiplex" used in MPEG-2-based systems, the multiplexing in MediaFLO technology is stream-based. These streams are provided by the FLO air interface and are explained later in the chapter. Each audio and video service after encoding thus gets assigned to a stream, and the bandwidth used is based on the variable data rates generated by the encoders, making the system very efficient at statistical multiplexing. Moreover, the streams are combined in the FLO air interface by time division multiplexing, which provides power-saving options to a receiver. A MediaFLO system also has a module for non-real-time content, VoD, and the EPG, called the MediaFLO Distribution System (MDS). The billing and package subscription information is

Figure 11.1: MediaFLO system architecture.

Mobile TV Using MediaFLO™ Technology

provided by the MediaFLO Provisioning System (MPS), while the billing itself is handled by the MDS. The distribution of multiplexed MediaFLO streams is done by the MediaFLO transmit system, which has features for national distribution of content as well as for adding local content. Finally, the FLO air interface takes care of physical delivery using Radio Access Nodes (RANs). The transmitters operate as a single-frequency network (SFN), giving receivers the benefit of multipath-diverse reception.

Quick Facts: MediaFLO

Services: Live TV (QVGA up to 30 fps), clipcasting, IP data, radio, and interactive services.

Number of video channels: 20–24 (6 MHz) or 28–32 (8 MHz) at 30 fps, using H.264 with average bit rates of 200 Kbps per channel.

Air interface: FLO air interface with OFDM transmission.

FLO features: Base layer and enhancement layer modulations; graceful fallback in quality with C/N; fast channel-switching time of about 2 seconds; low power consumption in mobiles (an 850 mAh battery can power about 4 hours of video playback).

Transmission mode: Broadcast to unlimited users; authentication, conditional access, subscription management, and interactivity via a cellular return path.

Standards: The FLO air interface and the MediaFLO system are governed by TIA standards: TIA-1099-A, TIA-1102-A, TIA-1103-A, TIA-1104, TIA-1120, TIA-1130, TIA-1132, TIA-1146, and TIA-1178. Also approved by ITU-R (BT.1833). Standards development work is coordinated by the FLO Forum.

Markets deployed: The United States is currently the major market; FLO has been deployed there on channel 55. Since the DTV transition, MediaFLO technology has been rolled out nationwide.

Main technology provider: Qualcomm.

Interactive services available: Chat, interactive ads, voting.

Datacast radio: The screen has an HE AAC v2 player, ESG, and scrolling ads. Users can access metadata such as artist, lyrics, and title information.

11.3 MediaFLO Technology Overview

MediaFLO technology is best understood by considering the transmission system as composed of two components. The first is the FLO air interface (TIA-1099), which operates using an OFDM physical layer with turbo error correction codes for enhanced resilience. The overall bit rate generated by the OFDM physical layer is presented to the higher layers as "streams" into which the video and audio data can be transmitted. The second part comprises the higher layers of the OSI protocol stack, such as the transport layer and the media adaptation layers, through which applications operate (live TV, non-real-time content, and file-based applications).

Figure 11.2: FLO Air Interface.

11.3.1 The FLO Interface

The physical layer

The physical layer in FLO technology is based on OFDM with an FFT size of 1, 2, 4, or 8 K carriers, and the FLO interface can operate over spectrum slots of 5, 6, 7, or 8 MHz. The use of 4 K OFDM implies that each OFDM symbol is represented by 4096 carriers spaced out in frequency, giving a very high tolerance to channel delay spread. Of these 4096 subcarriers, 96 are guard subcarriers and are not used. This leaves 4000 subcarriers that are "active." Of these, 500 are pilot subcarriers and 3500 subcarriers are
used for actual transport of data. The subcarriers can be modulated using QPSK or 16 QAM under the different "transmission modes" of MediaFLO, which are described later. FLO defines a physical layer superframe of 1 sec that carries 200 OFDM symbols per MHz of allocated bandwidth (1200 OFDM symbols in a 6 MHz slot). For a channel bandwidth of 6 MHz, the 4 K FFT size gives a useful symbol duration of 738.02 μs and a guard interval (cyclic prefix) of 92.25 μs. The subcarrier spacing is 1.355 kHz.
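As a cross-check, these timing figures follow directly from the subcarrier spacing. The short Python sketch below assumes a 1/8 cyclic prefix and an occupied width of 4096 × 1.355 kHz ≈ 5.55 MHz, values taken from the figures quoted above rather than from the FLO specification itself:

```python
# Derive MediaFLO 4K-mode symbol timing (6 MHz slot) from the subcarrier spacing.
fft_size = 4096
occupied_bw_hz = 5.55e6                  # assumed occupied bandwidth within the 6 MHz slot
spacing_hz = occupied_bw_hz / fft_size   # ~1.355 kHz subcarrier spacing

t_useful_us = 1e6 / spacing_hz           # useful symbol duration, ~738.02 us
t_guard_us = t_useful_us / 8             # 1/8 cyclic prefix, ~92.25 us
t_symbol_us = t_useful_us + t_guard_us   # ~830.3 us total per transmitted symbol

print(round(spacing_hz), round(t_useful_us, 2), round(t_guard_us, 2))
```

Note that 1200 such symbols fit within the 1-second superframe, with the small remainder accounted for by pilot and overhead symbols.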

Figure 11.3: OFDM symbol and guard times in MediaFLO.

Base layer and enhancement layer modulation

The subcarriers in a FLO transmission can be modulated using the QPSK and 16 QAM modulation schemes. A unique feature of the FLO air interface, however, is its base layer and enhancement layer modulation. When this is employed, the MediaFLO application layers provide stream data for the base and enhancement layers separately, and the two streams are modulated using a "hierarchical," or layered, modulation. As an example, the base layer may carry data for a TV channel at up to 15 frames per second, and the enhancement layer would enhance it to 30 fps (25 fps in the case of PAL systems in Europe). Hierarchical modulation allows receivers experiencing lower signal quality to operate at lower bit rates (i.e., 15 fps), while receivers in stronger signal areas operate at full resolution.

Figure 11.4: Hierarchical modulation in MediaFLO delivers the base layer and the enhancement layer separately.

The layered modulation enhances the range and at the same time provides for graceful fallback in quality. This would not be the case with a single modulation such as 16 QAM, where receivers near the cell edge would simply lose service.

11.4 System Capacities and Content Types

Given that the physical layer of MediaFLO technology operates using OFDM with 4 K carriers, what bit rate is available for transmission over the physical interface? And how are these gross bit rates passed on to the MAC and higher layers?

11.4.1 Physical Layer

The bit rates generated at the physical layer are straightforward to work out. There are 4000 active subcarriers, of which 3500 carry data and 500 are pilot subcarriers. If QPSK modulation is used, each subcarrier carries 2 bits, so an OFDM symbol has a raw capacity of 3500 × 2 = 7000 bits. With 1200 OFDM symbols transmitted per second in the 6 MHz RF slot (one superframe has 1200 symbols and a duration of 1 sec), the raw physical data rate is 1200 × 7000 = 8.4 Mbps.

In practice, the transmitted data carries a turbo code. The turbo coding in the FLO interface is adapted from CDMA2000 and EV-DO, and the specified code rates are 1/3, 1/2, and 2/3 (with a 1/5 rate used for the OIS, per Table 11.1). If a code rate of 1/2 is used, the raw physical layer capacity translates into a usable bit rate of 4.2 Mbps with QPSK. The various transmission modes possible with different code rates and modulation types are given in Table 11.1.

The physical layer comprises a superframe with a duration of 1 second. The superframe begins with TDM pilot symbols, which help in synchronization. The TDM pilots are followed by the OIS and four frames, which carry the multicast logical channels (MLCs), as described below.

Overhead information symbol (OIS) and channel switching

With the physical data rates established, the next item to understand is how these bit rates are used to transmit multicast logical channels. If MediaFLO had used a mobile channel multiplexer, the physical layer bit rates would have mapped straight to the multiplexer output. However, FLO is an interface in which each multicast channel (video, audio, subtitles, or data) accesses the lower layers through individual streams, which are carried independently. The multicast channels are generated at the MAC layer and comprise a maximum of three application streams per MLC. Information about the multicast logical channels (MLCs) is carried in a group of symbols called the OIS. This makes it very easy for a receiver to decode just this one symbol group to find out which symbols and subcarriers carry the video channel of interest. The receiver then decodes only these few symbols out of the 1200 in a superframe, thus saving power.
In the case of a channel change, the receiver just needs to revert to the OIS to get the new parameters and start decoding from the next frame. With each superframe lasting 1 second, the channel change time is typically about 2 seconds, i.e., enough time to decode the OIS information and lock on to the new symbols carrying the channel.

Figure 11.5: OIS facilitates the receiver’s locating of the multicast logical channel (MLC) of interest.

The MAC layer also embeds OIS information in each multicast logical channel (embedded OIS), indicating the location of the same MLC in the next superframe. The receiver can thus decode the data in the next superframe without decoding the OIS at the beginning of each superframe.

11.4.2 Control Layer

An important component of the information carried on the FLO interface is control information. It is sent over the MAC and physical layers using special-format multicast channels rather than the stream layer. More specifically, it is composed of control protocol packets, each of which carries a header and a protocol message. The control packets sent over the control channel include:

● RF channel description (frequency and bandwidth)
● Flow description message, which maps the upper layer to the stream, MAC multicast channel, and RF channel
● Multicast parameters, including the transmit mode (0 to 11) and the RS code rate

11.4.3 MAC Layer Interface: Slots and Streams

The physical layer presents a minimum unit of 1/8 of an OFDM symbol to the MAC layer for allocation to multicast logical channels. This is done by dividing the 4000 subcarriers into groups of 500 each, called an interlace. The subcarriers of an interlace are not adjacent, so frequency diversity is maintained. As shown in Table 11.1, an OFDM symbol carries 7000 or 14000 bits, and hence the granularity of bandwidth that can be allocated is very fine. As the MAC layer deals with the physical layer in the time domain, it is more convenient to use interlaces in OFDM symbols for allocation of resources to MLCs. For this purpose, the unit used is a "slot" (in the time domain), which is the minimum unit that can be allocated to an MLC. One slot is mapped to one interlace in the frequency domain and consists of 500 constellation symbols (each representing one subcarrier). It should be noted that the MAC layer also applies an outer RS code to the data in the four frames before sending it to the physical layer, to achieve better time diversity, as is common in RF systems.

Streams

The basic logical unit for transmission in FLO is a stream, which is presented by the FLO interface to the transport layer. At the MAC layer, up to three such streams are assigned to an MLC. The transport layer is responsible for generating packets for each video, audio, or data channel. It receives this information from the application layer, which supports the actual applications, e.g., a mobile TV channel. The transport layer handles each component (i.e., a video channel, one or more audio channels, or a data channel) separately, and each is assigned to a stream, which is delivered by the FLO interface statistically multiplexed via MLCs. The MAC layer has flexibility in assigning resources to the MLCs by assigning the required number of slots (which translate into OFDM symbols) to each logical channel.
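To put numbers on this granularity: one slot is one interlace of 500 subcarriers within one OFDM symbol, so its usable capacity depends only on the modulation and turbo code rate. A rough sketch follows; it ignores pilot interlaces and the outer RS code, so the figures are illustrative only:

```python
def slot_usable_bits(bits_per_subcarrier, turbo_rate):
    """Approximate usable bits in one slot (one 500-subcarrier interlace)."""
    return 500 * bits_per_subcarrier * turbo_rate

# At QPSK rate 1/2, each slot granted to an MLC is worth about 500 usable bits;
# at 16 QAM rate 2/3, about 1333 bits.
print(slot_usable_bits(2, 1 / 2), round(slot_usable_bits(4, 2 / 3)))
```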

Table 11.1: Transmit Modes in the MediaFLO Physical Layer and Usable Bit Rates (6 and 8 MHz RF Allocations).

Mode  Modulation                        Turbo   Bits per      Raw Rate (Mbps,     Usable Rate    Usable Rate
                                        Code    OFDM Symbol   1200 symbols/sec,   (6 MHz slot,   (8 MHz slot,
                                        Rate                  6 MHz slot)         Mbps)          Mbps)
0     QPSK                              1/3     7000          8.4                 2.8            3.73
1     QPSK                              1/2     7000          8.4                 4.2            5.60
2     16 QAM                            1/3     14000         16.8                5.6            7.47
3     16 QAM                            1/2     14000         16.8                8.4            11.20
4     16 QAM                            2/3     14000         16.8                11.2           14.93
5*    QPSK                              1/5     7000          8.4                 1.68           2.24
6     Layered QPSK, energy ratio 4      1/3     14000         16.8                5.6            7.47
7     Layered QPSK, energy ratio 4      1/2     14000         16.8                8.4            11.20
8     Layered QPSK, energy ratio 4      2/3     14000         16.8                11.2           14.93
9     Layered QPSK, energy ratio 6.25   1/3     14000         16.8                5.6            7.47
10    Layered QPSK, energy ratio 6.25   1/2     14000         16.8                8.4            11.20
11    Layered QPSK, energy ratio 6.25   2/3     14000         16.8                11.2           14.93

* Mode 5 is used exclusively for the overhead information symbol (OIS).

It is evident that the FLO air interface provides the operator with a number of options in terms of the modulation type selected and the consequent physical bit rates. These are given by transmit modes numbered 0 to 11; of these, transmit mode 5 is reserved for the overhead information symbol (OIS), described earlier in this chapter. The energy ratio is the energy of the base layer compared with that of the enhancement layer; only two values are defined, 4 and 6.25.
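The usable-rate columns of Table 11.1 follow mechanically from the bits per OFDM symbol, the turbo code rate, and the 200 symbols per second per MHz noted earlier. The sketch below regenerates them; the mode list is transcribed from the table:

```python
from fractions import Fraction as F

# (mode, modulation, bits per OFDM symbol, turbo code rate) -- from Table 11.1
MODES = [
    (0, "QPSK", 7000, F(1, 3)), (1, "QPSK", 7000, F(1, 2)),
    (2, "16 QAM", 14000, F(1, 3)), (3, "16 QAM", 14000, F(1, 2)),
    (4, "16 QAM", 14000, F(2, 3)), (5, "QPSK (OIS)", 7000, F(1, 5)),
    (6, "Layered QPSK 4", 14000, F(1, 3)), (7, "Layered QPSK 4", 14000, F(1, 2)),
    (8, "Layered QPSK 4", 14000, F(2, 3)), (9, "Layered QPSK 6.25", 14000, F(1, 3)),
    (10, "Layered QPSK 6.25", 14000, F(1, 2)), (11, "Layered QPSK 6.25", 14000, F(2, 3)),
]

def usable_mbps(bits_per_symbol, code_rate, bw_mhz):
    """Usable rate: bits/symbol x symbols/sec x turbo code rate."""
    symbols_per_sec = 200 * bw_mhz   # 1200 (6 MHz) or 1600 (8 MHz)
    return float(bits_per_symbol * symbols_per_sec * code_rate) / 1e6

for mode, mod, bits, rate in MODES:
    print(f"mode {mode:2d}  {mod:18s}  "
          f"{usable_mbps(bits, rate, 6):5.2f} Mbps (6 MHz)  "
          f"{usable_mbps(bits, rate, 8):5.2f} Mbps (8 MHz)")
```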

Figure 11.6: OFDM symbols assigned to multicast logical channels.

The MAC layer has the flexibility to allocate the required number of slots for each MLC, and this allocation can be changed on a per-superframe basis. This leads to a very flexible statistical multiplexing without using an external mobile TV multiplexer. Each multicast channel can be transmitted with a different transmit mode to accommodate the bandwidth and coverage requirements.
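To illustrate the idea (this is a toy, not Qualcomm's actual scheduler): each MLC asks for the slots its encoder output needs this superframe, and the MAC grants them against the total slot budget, so bandwidth follows the variable encoder rates:

```python
def allocate_slots(demand_bits, bits_per_slot, total_slots):
    """Toy per-superframe slot allocator for MLCs (illustrative only)."""
    grants, remaining = {}, total_slots
    for mlc, bits in sorted(demand_bits.items(), key=lambda kv: kv[1]):
        need = -(-bits // bits_per_slot)   # ceiling division
        grants[mlc] = min(need, remaining)
        remaining -= grants[mlc]
    return grants

# Encoder output varies superframe to superframe, so the grants do too.
print(allocate_slots({"news": 180_000, "sports": 320_000, "music": 90_000},
                     bits_per_slot=500, total_slots=1500))
```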

Quick Facts: MediaFLO Technical Specifications (updated as of September 2009)

● TIA-1099-A: FLO Air Interface Specification for Terrestrial Mobile Multimedia Multicast
● TIA-1102-A: FLO Minimum Performance Specification for Devices
● TIA-1103-A: FLO Minimum Performance Specification for Transmitters
● TIA-1104: FLO Test Application Protocols
● TIA-1120: FLO Air Interface Specification Transport Protocols
● TIA-1130: FLO Media Adaptation Layer Specification
● TIA-1132: Minimum Performance Specification for Terrestrial Mobile Multimedia Multicast FLO Repeaters
● TIA-1146: FLO Open Conditional Access (OpenCA) Specification
● TIA-1178: FLO System Information Specification

11.5 MediaFLO Transmission

A MediaFLO transmission system in itself is very simple and includes the following components:

● MediaFLO modulators
● RF TV transmitters
● Gap-fillers
● Monitoring and control

In practice, there will be a fiber or satellite network to distribute the FLO baseband signals to a number of transmitters in different markets. All components must be able to synchronize to an external clock reference in order to operate as an SFN network.

11.5.1 MediaFLO Modulator

An example of a FLO modulator is the MFL-2000 from TeamCast. It features two MPEG-2 transport stream inputs in ASI format and directly generates an output in the VHF/UHF band at the selected frequency. These MPEG-2 streams are FLO distribution streams generated by the FLO air interface; the modulator provides only the physical layer functions, including OFDM modulation. The two inputs can be used to manage redundancy. A version is also available that gives an IF output (MFL-1000) for use in other bands such as L- or S-band.

11.5.2 Gap-Fillers

A number of gap-fillers are available, based on the power requirement. These are basically RF repeaters that can work in an SFN network. An example is the GFX-0300 (Gap Filler Engine for TV Networks) from TeamCast. This is a versatile gap-filler and can be used with any of the TV technologies (ATSC, MediaFLO, and DVB-T/H) as well as the Chinese standards DTMB and CMMB. There is flexibility in regard to the RF band (VHF/UHF) and the spectrum slot (6 MHz or 8 MHz).

11.5.3 TV Transmitters

TV transmitters, together with the TV transmit antenna, are required to deliver the necessary effective radiated power (ERP). The ERP is regulated by the national frequency licensing authorities. In the United States, the FCC has formulated rules for use of the "digital dividend spectrum," which spans the band from 698 MHz to 746 MHz (channels 52 to 59). The spectrum can be used on a "flexible rules basis" under Part 27 of the Commission's rules (Title 47, Telecommunications, Part 27) as per the band plan (emission limits are given under 27.53). The rules permit an ERP of 50 kW with an antenna height of up to 300 m. With a 10 dBi antenna, a transmitter with a power rating of 5 kW can radiate at the maximum permissible ERP, i.e., 50 kW.
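The ERP figure works out as below (a sketch that treats the quoted 10 dBi as the net antenna gain and ignores feeder losses):

```python
import math

tx_power_w = 5_000                                  # transmitter power rating
antenna_gain_db = 10                                # net antenna gain assumed

erp_w = tx_power_w * 10 ** (antenna_gain_db / 10)   # 5 kW x 10 = 50 kW
erp_dbw = 10 * math.log10(erp_w)                    # ~47 dBW (i.e., ~77 dBm)
print(erp_w, round(erp_dbw, 1))
```

The 47 dBW value is the same transmitted-power figure used in the link budget of Table 11.3.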

MediaFLO transmit equipment is available from a number of vendors, including Harris, Thomson, and Rohde & Schwarz, with power ratings ranging from 100 W to 10 kW. All transmission systems are designed to deliver the requisite quality to receiving devices. This translates into a minimum C/N for the signals, based on the system characteristics. Table 11.2 shows the reference C/N for various transmit modes (dual-antenna reception, Pedestrian B channel at 3 km/h). The relationship between the number of mobile TV channels that can be carried by a MediaFLO system and the C/N is given in Figure 11.8. As the C/N falls, the higher layers of the hierarchical modulation can no longer be decoded by the receiver, and both channel capacity and quality fall.

Table 11.2: Required C/N for Transmit Modes in FLO.

Transmit Mode   Modulation           Inner Code   Outer Code   Reference C/N (dB)
1               QPSK                 1/2          RS(16,12)    6.8
2               16 QAM               1/3          RS(16,12)    8.7
2               16 QAM               1/3          RS(16,14)    9.9
3               16 QAM               1/2          RS(16,12)    12.3
7 (Basic)       Layered QPSK 4:1     1/2          RS(16,12)    9.8
7 (Enhanced)    Layered QPSK 4:1     1/2          RS(16,12)    14.1

Figure 11.7: A conceptual diagram of a MediaFLO transmission network.

A good system design will ensure that the reference C/N from Table 11.2 is delivered at covered locations, through suitable dimensioning of the transmitted power and the use of SFN transmitters and gap-fillers where necessary.

Figure 11.8: Mobile channels carried on MediaFLO system vs. C/N.

11.5.4 Area Covered by a FLO Transmitter

There is no single definition of the area covered by a TV transmitter, as coverage is highly dependent on the terrain. For DTV transmission, the coverage edge is usually taken as the point where the field strength falls to 50 dBμV/m. ITU recommendations ITU-R P.1546 and P.1812 present models for calculating the field strength of TV signals at various distances. In the case of mobile reception, however, the antenna gain of the mobile is lower and the receiver operates at a height of about 1 m above ground. In this situation urban environment losses become prominent, and the Okumura-Hata model is more commonly used to predict coverage. The model gives predicted loss and coverage in urban, suburban, and rural environments.

Using the COST 231 Hata loss formula, the C/N received at various distances in a suburban environment is given in Table 11.3. The table shows that with the propagation model chosen, a transmitter with an ERP of 50 kW provides indoor mobile coverage to about 35 km. (A higher indoor loss of, say, 10 dB would limit the range to about 20 km.) For outdoor reception (i.e., without the 6 dB loss), the coverage extends to 50 km. This gives a coverage area of about 2800 km². Allowing for a tuner noise figure (NF) of about 5 dB, the actual indoor coverage may conservatively be estimated at 25 km, for a coverage area of about 1900 km². The field strength for indoor coverage in this example is 59.3 dBμV/m at 25 km.

Table 11.3: C/N and Electric Field Strength at Tuner Input for Various Distances.

Parameter                          Value      Unit
Noise floor                        -132       dBW/MHz
Noise in 6 MHz slot                -124.22    dBW
Power transmitted (ERP, 50 kW)     47.0       dBW
Power transmitted                  77.0       dBm
Receive antenna gain               -5         dB
Body or indoor area loss           6          dB
Height of transmitting antenna     200        m

Distance (km)                        1       5      10      15      20      25      30      35      40      50
Path loss, COST 231 Hata
suburban model (dB)              104.08  124.93  133.91  139.16  142.89  145.78  148.14  150.13  151.86  154.76
Total loss (dB)                   115.1   135.9   144.9   150.2   153.9   156.8   159.1   161.1   162.9   165.8
Power received (dBW)              -68.1   -88.9   -97.9  -103.2  -106.9  -109.8  -112.1  -114.1  -115.9  -118.8
C/N at tuner input, indoor (dB)    56.1    35.3    26.3    21.0    17.3    14.4    12.1    10.1     8.3     5.5
C/N at tuner input, outdoor (dB)   62.1    41.3    32.3    27.0    23.3    20.4    18.1    16.1    14.3    11.5
Field strength E at tuner input,
6 MHz slot at 700 MHz (dBμV/m)    101.0    80.2    71.2    65.9    62.2    59.3    57.0    55.0    53.2    50.3
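The loss column of Table 11.3 can be reproduced to within rounding. The table labels the model COST 231 Hata, but at 700 MHz the classic Okumura-Hata urban formula with the suburban correction yields the same values; a Python sketch using the link-budget constants from the table (hb = 200 m, hm = 1 m):

```python
import math

def hata_suburban_loss_db(f_mhz, hb_m, hm_m, d_km):
    """Okumura-Hata urban path loss with the suburban correction term."""
    lf = math.log10(f_mhz)
    a_hm = (1.1 * lf - 0.7) * hm_m - (1.56 * lf - 0.8)   # small-city mobile correction
    urban = (69.55 + 26.16 * lf - 13.82 * math.log10(hb_m) - a_hm
             + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))
    return urban - 2 * math.log10(f_mhz / 28) ** 2 - 5.4  # suburban correction

ERP_DBW, NOISE_DBW = 47.0, -124.22      # 50 kW ERP; noise in a 6 MHz slot
RX_GAIN_DB, INDOOR_LOSS_DB = -5.0, 6.0  # mobile antenna gain; body/indoor loss

for d_km in (1, 5, 10, 25, 50):
    loss = hata_suburban_loss_db(700, 200, 1, d_km)
    cn_indoor = ERP_DBW - loss + RX_GAIN_DB - INDOOR_LOSS_DB - NOISE_DBW
    print(f"{d_km:3d} km: path loss {loss:6.2f} dB, indoor C/N {cn_indoor:5.1f} dB")
```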

The use of layered modulation gives MediaFLO an advantage of approximately 4.3 dB over nonlayered modes. This implies that in basic mode the coverage provided can be significantly larger (or, alternatively, a larger number of mobile TV channels can be supported). Figure 11.9 depicts the coverage area of a transmitter.

Figure 11.9: Coverage zone of a MediaFLO transmitter in a suburban environment.
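A dB margin converts to range via the path-loss slope. With the roughly 29.8 dB-per-decade distance slope of the Hata model at a 200 m antenna height, a 4.3 dB margin stretches the radius by about 40%, as the sketch below shows (illustrative arithmetic, not a field result):

```python
import math

margin_db = 4.3
slope_db_per_decade = 44.9 - 6.55 * math.log10(200)   # Hata distance slope, ~29.8

range_factor = 10 ** (margin_db / slope_db_per_decade)
area_factor = range_factor ** 2
print(round(range_factor, 2), round(area_factor, 2))  # ~1.39x range, ~1.94x area
```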

If regular TV bands are used for FLO transmissions, where the permitted ERP can be higher (e.g., 1 MW) with tower heights of 350 m, the coverage distance can extend to about 60 km (roughly 10,000 km² of coverage).1

11.6 MediaFLO Transmitter Networks From the discussion in the previous section, it is evident that TV market areas in most cases will need to be covered with more than one transmitter as well as gap-fillers, based on terrain. In a network where two or more frequencies are used (i.e., a multifrequency network), the signals from adjacent transmitters do not add constructively. Adjacent transmitter signals are rejected by filters in the receiving devices. MediaFLO technology permits the use of SFNs for the deployment of transmitters over a wide area. As the SFN network transmitters are synchronized by aligning the beginning of their superframe to the same time mark of the GPS signal, the signals from adjacent transmitters add together to improve the C/N, and overcome shadowing by buildings by creating diverse paths. 1

The area of coverage depends on many factors, such as the height of the transmitter. A hilltop transmitter, for example, can cover an entire city with a few gap-fillers.

Figure 11.10: In a multifrequency network (MFN), signals from adjacent transmitters do not add constructively.

11.7 Terminals and Handheld Units

The baseband processors implementing the FLO air interface, and the reference software, are provided by Qualcomm and other vendors. An example of an early MediaFLO handset architecture is shown in Figure 11.12. In this example, the FLO air interface is provided by the MBD1000 chip, which works in conjunction with an RF chip such as the RBR1000. The MBD1000 is designed to integrate with Qualcomm's CDMA/GSM processor MSM6550. The protocol stack comprises a FLO interface operating under the Brew operating system. Other chip vendors are developing new chips that implement the mobile TV function. New implementations use a single system on chip (SoC), which integrates the functions of the RF tuner, OFDM demodulator, and MAC-layer FLO interface. Examples of such SoCs are the NewPort Media NMI700FLO™ mobile digital TV receiver and the Qualcomm MBP2600 Universal Broadcast Modem™. Other implementations are available from Siano, Telechips, and others.

Figure 11.11: Signals from adjacent transmitters add in an SFN to deliver better C/N.

Figure 11.12: An architectural representation of a MediaFLO handset.

A number of mobile phones are now available from Verizon Wireless and AT&T that provide FLO TV services on the CDMA2000 and GSM-based networks, which these carriers support. Receivers for car viewing are also available.

11.8 MediaFLO Electronic Service Guide

MediaFLO services (and control information) are carried in multicast logical channels (MLCs), and the OIS carries the details of each channel. The MediaFLO electronic service guide is provided via an application called the MediaFLO Media Presentation Guide (MPG), which uses the Service Information (SI) stream broadcast as part of FLO transmissions. Most ESG providers, such as EXPWAY or NDS, offer a customizable GUI that can be tailored to present the information per operator preferences. The MPG has the following features:

● Managing local and wide-area services in multiplexes
● Multiple levels of subscription
● Background transmission of video with purchase of viewing rights
● Demographic targeting of video content

FAQs: MediaFLO

1. How does MediaFLO technology support fast channel change?

The fast channel change in MediaFLO is due to its signaling structure. The information on multicast channels is carried in a group of symbols called the OIS; receivers need to decode only these to obtain information on all channels in the streams.

2. How does MediaFLO manage multiple multiplexes in a single RF slot for local and national transmissions?

MediaFLO technology uses the concept of MLCs for carrying the application flows. The application flows may be mapped to more than one MLC, which may represent different multiplexes, e.g., for local and wide-area content. The physical layer, however, involves the same RF slot.

3. Can I save TV programs transmitted by the operator on my handset?

No, this feature is presently not supported. It is possible to download and save clips using the 3G networks; however, this is a different service.

11.9 MediaFLO Commercial Networks

11.9.1 Mobile TV: AT&T and Verizon Wireless

In the United States, the service provider for MediaFLO services is FLO TV Inc. The FLO service is provided by a satellite-based network, which distributes the wide-area FLO channels to a network of transmitters in different markets; this network covers all major markets following the completion of the digital transition. At present there are two retail service providers, Verizon Wireless and AT&T. The base offering is 11 channels, with each operator adding two channels specific to its own network. The major channels available include CBS Mobile, CNBC, MSNBC, Comedy Central, NBC 2Go, ESPN Mobile TV, Fox Mobile, Fox News, MTV, and Nickelodeon. Both companies use the same network, which is operated by FLO TV Inc. on channel 55. The closure of analog transmitters now allows the company to expand its use of channel 55 (716–722 MHz), which it acquired in auctions. Service has now expanded to all major markets in the United States.

Figure 11.13: FLO TV.

The services offered by AT&T and Verizon comprise 11–13 channels and are offered at rates starting at $10 per month. Until the beginning of 2009 the services were limited to 56 markets. The completion of the digital transition on June 12, 2009, freed up the spectrum, opening the way to roll out additional markets. By June 2009 the services had
expanded to 84 markets, getting closer to nationwide coverage. The additional markets included Boston, Charlotte, Cleveland, Houston, Miami, Milwaukee, Sacramento, and San Francisco. A number of commercial trials of MediaFLO technology have also been held, including in Hong Kong (PCCW), Malaysia (Maxis and Astro), the UK (BSkyB), and Japan (MediaFLO Japan Planning Inc. and Media Scope).

11.9.2 Auto Entertainment Systems

With additional U.S. markets now available for MediaFLO, FLO TV Inc. is moving ahead with a large-scale TV rollout targeted at cars and other vehicles, in cooperation with Audiovox. This is likely to position car entertainment systems in a new premium category, with live TV as well as audio and data services. With FLO TV having a strong air interface and receivers available with diversity reception, this is a major market initiative.

11.9.3 FLO Personal Television

FLO TV Inc. is also launching direct-to-consumer services branded as "FLO Personal Television," available on a wide range of devices from cellphones to standalone receivers. The standalone receiver features a 3.5-inch (diagonal) touchscreen, has built-in stereo speakers, can be placed on any flat surface for convenient viewing, and offers about five hours of viewing on a single battery charge.

11.10 Example of a MediaFLO System for Mobile TV: Verizon Wireless Verizon was the first operator to launch MediaFLO services under the name VCAST mobile TV, when it started offering the service in March 2007. As an operator of CDMA2000 and EV-DO network in the United States, it also demonstrated many “proofs of concept.” These included the transmission of live TV channels and operation of subscription-based services on a mobile broadcast network using the CDMA network to manage the subscription of services and authentication of handsets. Services such as VCAST are supported with a companion website (http://verizonwireless. com/mobiletv). The website gives the list of supported phones, subscription details, and program guide. Verizon uses conditional access provided by Qualcomm for protection of content delivered over the MediaFLO network. The keys required for access are provided via the CDMA network. The service is available on a range of mobile phones, including the LG Voyager, HTC Touch, and HTC Touch Diamond.

Figure 11.14: FLO TV Program Guide.

Before We Close: Some FAQs

1. How does MediaFLO technology carry more channels1 than ATSC Mobile DTV (up to 23 vs. 8) in a spectrum slot?

MediaFLO technology uses QVGA (320 × 240) encoding against 416 × 240 in ATSC Mobile DTV, with the result that ATSC Mobile DTV's encoded bit rates are higher. Also, in ATSC Mobile DTV about 5 Mbps must be dedicated to the ATSC main services. The multiplexing structure in the FLO air interface is also more efficient.

2. In terms of RF transmission, how is the FLO air interface different from ATSC Mobile DTV?

MediaFLO is based on OFDM transmission (4 K carriers), whereas ATSC Mobile DTV is carried as part of an ATSC transmission, which is 8-VSB.

MediaFLO™ Field Test Report 80-T1021-1 Rev. E, May 15, 2008

3. Is the use of MediaFLO technology dependent on the operator being a CDMA operator?

No. The FLO air interface is a broadcast technology and can work with any return path, 3G-GSM or CDMA. In the United States, Verizon is a CDMA operator, and AT&T uses 3G-GSM.

4. Do MediaFLO handsets need to be based on the Brew operating system?

MediaFLO handsets need not be based on Brew. The LG Invision (offered by AT&T) and LG VX9400 (offered by Verizon) are based on a proprietary OS.

5. What are the picture formats supported in MediaFLO?

MediaFLO supports QCIF, CIF, QVGA, and QQVGA. Other formats can also be supported.

6. Does MediaFLO support clipcasting capability?

Yes, MediaFLO supports clipcast media. Clips can be delivered and stored automatically on users' mobile phones, where they can be viewed at any time.

CHAPTER 12

Mobile TV Using WiMAX

If I have not seen as far as others, it is because giants were standing on my shoulders.
Harold Abelson

When Denis Sverdlov, CEO of Yota, a WiMAX company in Russia, was invited to join the board of directors of the WiMAX Forum by Ron Resnick, president and chairman of the Forum, it was not only because his company had garnered over 100,000 mobile WiMAX users within two months of launch. Its innovative media streaming services, such as Yota Music and Yota TV, delivered on HTC MAX 4G handsets, in the Forum's words "may be of keen interest to all WiMAX Forum members," and for good reason. Mobile TV using 3G or wireless networks is essentially an IP-based delivery of video to mobile devices.1 Pioneers in the industry have always strived to bring in new

Figure 12.1: WiMAX can be used to target delivery of services to mobile devices that are today served by 3G networks. 1

This chapter includes figures and material from the author’s book “Mobile Broadcasting with WiMAX” ISBN: 9780240810409, Focal Press, April 21, 2008. The use of this material is acknowledged.

technologies that can deliver streaming TV reliably over wireless networks. What was needed was a technology that would support the sustained bit rate required for mobile TV (e.g., 256 Kbps) in a mobile environment and be more open, scalable, and cost-effective than 3G. Early adopters tried fixed WiMAX (IEEE 802.16-2004) to provide a wireless extension that could deliver an IPTV-like service beyond the reach of cable. However, it lacked many features needed for success in a mobile environment, such as roaming, handoffs, power saving, and adaptation to the received signal quality.

Mobile WiMAX (IEEE 802.16e-2005) has turned out to be a technology that fits the bill very effectively. It provides what is possible using 3G networks, such as streaming TV or multicast TV, and goes much further, facilitating navigation, gaming, and Web 2.0 services. Like 3G, it provides roaming and handoffs and has power-saving features. It ensures a guaranteed bit rate for each service and can adapt the modulation to the receiver's conditions. It can also provide a multicast service that can reach millions of users, and it is fully integrated into the IP world without legacy 3G architectures. The bit rates supported can be much higher on average, and users have great flexibility in using all Internet applications.

12.1 A Brief Overview of WiMAX Technology

There are primarily two standards for WiMAX:

● IEEE 802.16-2004: commonly known as fixed WiMAX
● IEEE 802.16e-2005: mobile WiMAX

IEEE 802.16-2004 provides for fixed and nomadic access; the 802.16e standard also handles mobility and handoffs and is designed for use at speeds of up to 120 km/h. WiMAX uses OFDM with a large number of subcarriers, which makes the transmissions resistant to fades and multipath effects. The IEEE 802.16e standard brings in features such as support of mobility and handover, advanced antenna systems, beam-forming, multiple-input multiple-output (MIMO) antenna systems, spatial multiplexing, and encryption and authentication. Most new developments in base station technologies, CPEs (customer premises equipment), and service architecture are now taking place in the area of mobile WiMAX. We will confine our discussion largely to mobile WiMAX in the rest of the chapter.

12.1.1 Mobile WiMAX Physical Layer

WiMAX systems are used in a point-to-multipoint configuration, with one base station transmitting in the downlink direction to a number of subscriber stations (SS). Transmissions by the different SSs take place in time slots allotted to them and are received at the base station. Transmission in mobile WiMAX uses a number of subcarriers (denoted by the FFT size), in both the downlink and the uplink directions. The PHY layer

Mobile TV Using WiMAX


of mobile WiMAX is based on the use of scalable OFDMA, which means that the FFT size varies with bandwidth: in mobile WiMAX, it can range from 128 (for 1.25 MHz bandwidth) to 2048 (for 20 MHz bandwidth). In order to maintain interoperability between base stations and mobile devices, the WiMAX Forum has fixed the range of parameters that can be used in WiMAX systems; these sets of values are called certification profiles. The parameters selected for the initial mobile WiMAX certification profiles keep the subcarrier spacing at 10.94 kHz and the frame duration at 5 ms. This implies that the number of subcarriers, and hence the FFT size, varies with bandwidth.

Figure 12.2: Mobile WiMAX OFDM parameters.

As shown in Figure 12.2, the channel bandwidth determines the number of OFDM subcarriers used in the scheme of scalable OFDMA. In all cases, the subcarrier spacing in the frequency domain remains fixed at 10.94 kHz. This implies a fixed OFDM symbol duration of 102.9 μs and a count of 48 symbols per 5 ms frame. With a guard interval of 1/8 (11.4 μs), the useful symbol time is 91.4 μs.
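These timing relationships follow directly from the fixed subcarrier spacing and can be checked with a short calculation. The sketch below assumes the 28/25 oversampling factor that applies to the 5 and 10 MHz profile bandwidths:

```python
# Scalable OFDMA timing for the mobile WiMAX certification profiles.
# Assumes the 28/25 sampling factor used for the 5/10/20 MHz bandwidths.

def ofdma_params(bw_hz, fft_size, sampling_factor=28/25, guard=1/8):
    fs = sampling_factor * bw_hz               # sampling frequency, Hz
    delta_f = fs / fft_size                    # subcarrier spacing, Hz
    t_useful = 1 / delta_f                     # useful symbol time, s
    t_symbol = t_useful * (1 + guard)          # with 1/8 guard interval
    symbols_per_frame = int(0.005 / t_symbol)  # whole symbols in a 5 ms frame
    return delta_f, t_useful * 1e6, t_symbol * 1e6, symbols_per_frame

# 5 MHz with a 512 FFT and 10 MHz with a 1024 FFT give identical timing:
for bw, fft in [(5e6, 512), (10e6, 1024)]:
    df, tu, ts, n = ofdma_params(bw, fft)
    print(f"{bw/1e6:.0f} MHz: spacing {df/1e3:.2f} kHz, "
          f"useful {tu:.1f} us, symbol {ts:.1f} us, {n} symbols/frame")
```

Both bandwidths print a 10.94 kHz spacing, a 91.4 μs useful symbol, a 102.9 μs total symbol, and 48 symbols per frame, matching the figures above.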



12.1.2 Frame Structure in Mobile WiMAX

The mobile WiMAX frame structure has been designed to be truly flexible in regard to the bit rates that can be made available to each terminal in both the uplink and downlink directions. Although mobile WiMAX can operate in both FDD and TDD modes, only the TDD mode has been selected for the initial implementation profiles. The frame structure of mobile WiMAX consists of TDD frames, each comprising a downlink subframe and an uplink subframe. The downlink subframe begins with a preamble (used for synchronization). The preamble is followed by the frame control header (FCH), which provides information on the length of the MAP messages, the coding scheme, and the subchannel information. This is followed by the downlink map and the uplink map. The maps carry information on the subframe structure that will be used and on the time slots, as well as the subchannels, allotted to the terminals. A subchannel is made available to all mobile terminals to perform ranging. The terminals can use this channel to perform closed-loop power adjustment, and new mobile terminals can also use it to request subchannel allocation. The uplink subframe in mobile WiMAX thus consists of bursts originating from individual mobile devices. The downlink subframe is transmitted entirely by the base station but contains subchannels assigned to individual mobile stations. The assignment of subchannels and time slots to individual subscribers is very flexible and can vary on a frame-by-frame basis. A typical representation of time slot and subcarrier allocation is given in Figure 12.3.

Figure 12.3: Downlink and uplink frames in mobile WiMAX.



It is evident that the MAPs, which authorize the mobile terminals to transmit and receive in certain time slots using assigned subchannels, are quite critical. They are therefore transmitted with the highest reliability, for example using BPSK with rate-½ coding.

Figure 12.4: Mobile devices in WiMAX receive and transmit on a small subset of carriers based on their bandwidth requirement.

As shown in Figure 12.4, the downlink and uplink subframes consist of bursts that are pre-identified in a particular frame for all mobile devices that have registered for access to the network. Thus a particular mobile terminal assigned burst 2 may receive only the data pertaining to downlink burst 2 and transmit only on the subchannel (and subcarriers) assigned for uplink burst 2. This allows the mobile terminal to transmit at a much lower power than would be the case if it had to transmit on all 512 or more subcarriers.

12.1.3 Subchannels

One of the salient features of WiMAX is subchannelization, which allows each mobile station to use only a part of the bandwidth associated with the transmission of a symbol.



As discussed earlier, OFDM systems are characterized by the transmission of one OFDM symbol on all the available subcarriers simultaneously. For example, one OFDM symbol is transmitted over all of the available 360 data subcarriers. One OFDM symbol using 64 QAM modulation (6 bits per subcarrier) can thus carry 360 × 6 = 2160 bits in mobile WiMAX. If there were no way to further subdivide this capacity, a mobile station or base station would need to transmit a large block of data each time it transmitted at all. In order to make the process more granular, a scheme of subchannelization is followed. Sixteen subchannels are defined in both the uplink and downlink directions, which means that one subchannel is equivalent to 1/16 of a symbol's capacity and uses 512/16 = 32 subcarriers in a system with an FFT size of 512. The assigned carriers are dispersed across the frequency spectrum to maintain fade resilience. This feature of WiMAX makes it possible for CPEs to transmit even small blocks of data, maintaining low latency without wasting system capacity in the form of partially used symbols. Put another way, in the time duration of an OFDM symbol (Ts) that has 16 subchannels, 16 subscriber stations can transmit simultaneously if one subchannel each is allocated to them.

Figure 12.5: A conceptual depiction of subchannelization in WiMAX. Each OFDM symbol in the uplink frame can have up to 16 subchannels; each subchannel is a group of 12 data subcarriers.



Subchannels enable efficient utilization of bandwidth by subdividing an OFDM symbol. Transmitting on only one subchannel also gives a power advantage of 16 times (the station transmits simultaneously on only 1/16 of the available subcarriers), which is equivalent to a link budget enhancement of 12 dB.
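The subcarrier count per subchannel and the 12 dB figure can be verified with a two-line calculation (a sketch using the figures quoted above):

```python
import math

fft_size = 512          # 5 MHz profile
subchannels = 16        # uplink subchannels, as in the text
carriers_per_subchannel = fft_size // subchannels  # 512/16 = 32

# Transmitting on 1/16 of the subcarriers needs 1/16 of the power
# for the same energy per subcarrier; in decibels:
link_budget_gain_db = 10 * math.log10(subchannels)

print(carriers_per_subchannel, round(link_budget_gain_db, 1))  # 32 12.0
```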

12.1.4 Adaptive Modulation

Mobile WiMAX provides for the use of adaptive modulation and coding schemes. Support of QPSK, 16 QAM, and 64 QAM is mandatory for base stations. Mobile stations need to support QPSK and 16 QAM and may optionally support 64 QAM. The adaptive modulation scheme is aptly named, as the modulation can change from frame to frame. Thus a mobile operating at 64 QAM may suddenly find its error rate above a threshold and switch to 16 QAM or, finally, to QPSK. The higher-density modulation schemes support a higher data rate but a lower tolerance to intersymbol interference and noise. The support of different modulation schemes is a powerful feature for maximizing bit rates in actual usage environments. Typically, the modulation reverts to a lower level with distance from the transmitter, owing to the lower carrier-to-noise ratio (C/N).
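Frame-by-frame adaptation can be pictured as a threshold lookup: the scheduler picks the densest scheme whose C/N requirement the receiver currently meets. A sketch; the C/N thresholds here are illustrative, not values from the standard:

```python
# Illustrative adaptive modulation and coding (AMC) selection.
# Threshold values are hypothetical, chosen only to show the mechanism.

MODULATION_THRESHOLDS = [   # (scheme, bits per subcarrier, min C/N in dB)
    ("64 QAM 3/4", 6, 20.0),
    ("16 QAM 3/4", 4, 14.0),
    ("QPSK 1/2",   2, 6.0),
]

def select_modulation(cnr_db):
    """Return the densest scheme the current C/N supports."""
    for scheme, bits, threshold in MODULATION_THRESHOLDS:
        if cnr_db >= threshold:
            return scheme, bits
    return None, 0   # below even the QPSK threshold: out of range

print(select_modulation(22.5))  # near the base station -> 64 QAM
print(select_modulation(9.0))   # toward the cell edge  -> QPSK
```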

Figure 12.6: Adaptive modulation in WiMAX results in terminals working at the highest possible level of modulation based on location.



12.1.5 Data Rates in the Mobile WiMAX Environment

The calculation of total data rates is quite straightforward in mobile WiMAX, as it has a fixed subcarrier spacing and symbol time. For example, for a bandwidth of 5 MHz there are 512 subcarriers, of which 360 are data subcarriers. If 64 QAM modulation is used (6 bits per subcarrier), the number of bits that can be coded per OFDM symbol is 360 × 6 = 2160 bits. In mobile WiMAX the frames are 5 ms long (200 frames per second), and each frame has 48 symbols, of which 44 are usable for data. Hence the symbol transmission rate is 200 × 44 = 8800 usable symbols per second. With each symbol carrying 2160 bits, this gives a data rate of 8800 × 2160 ≈ 19 Mbps. If an FEC rate of 5/6 is applied, the data rate is 19 × 5/6 ≈ 15.8 Mbps. It should be noted that these rates are based on the use of 44 OFDM symbols per frame for traffic and are purely indicative, as they do not take into account downlink/uplink subframe gaps or traffic ratios. Table 12.1 provides the expected bit rates for the most common implementations of these parameters at bandwidths of 5 MHz and 10 MHz.
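The calculation above can be written out as a short script (the values are the ones used in the text):

```python
# Indicative downlink data rate for the 5 MHz / 512-FFT profile.
data_subcarriers = 360       # data subcarriers per OFDM symbol
bits_per_subcarrier = 6      # 64 QAM
frames_per_second = 200      # 5 ms frames
data_symbols_per_frame = 44  # of 48 OFDM symbols per frame

bits_per_ofdm_symbol = data_subcarriers * bits_per_subcarrier       # 2160
symbols_per_second = frames_per_second * data_symbols_per_frame     # 8800
raw_rate = bits_per_ofdm_symbol * symbols_per_second                # ~19 Mbps
coded_rate = raw_rate * 5 / 6                                       # 5/6 FEC

print(raw_rate / 1e6, round(coded_rate / 1e6, 1))  # 19.008 15.8
```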

Table 12.1: Mobile WiMAX Data Rates (Courtesy of WiMAX Forum).

                                  5 MHz Channel           10 MHz Channel
System Parameter               Downlink     Uplink     Downlink     Uplink
System Bandwidth                      5 MHz                   10 MHz
FFT Size                               512                     1024
Null Subcarriers                   92          104        184          184
Pilot Subcarriers                  60          136        120          280
Data Subcarriers                  360          272        720          560
Subchannels                        15           17         30           35
Symbol Period                        102.9 μs                102.9 μs
Frame Duration                         5 ms                    5 ms
OFDM Symbols per Frame                  48                      48
Data OFDM Symbols per Frame             44                      44

Modulation and Code Rate       Downlink,    Uplink,    Downlink,    Uplink,
                                 Mbps         Mbps       Mbps         Mbps
QPSK ½                            3.17         2.28       6.34         4.7
16 QAM ¾                          9.5          6.85      19           14.11
64 QAM ¾                         14.26        10.28      28.51        21.17

12.1.6 Mobile WiMAX Classes of Service

Mobile WiMAX supports the following five classes of service:

● Unsolicited Grant Service (UGS): Designed to support fixed-bit-rate circuit emulation services (T1/E1).

● Real-Time Polling Service (rtPS): Designed to support services, such as video, that periodically generate variable-size data packets. An example is a video streaming service requiring the transmission of a field (one half of a frame) every 20 ms. In this case, the subscriber station needs to make a request for each packet to be transmitted.

● Non-Real-Time Polling Service (nrtPS): Designed to support applications, such as FTP, where the delays generated by the system are not critical. In nrtPS, requests for bandwidth need to be made, and bandwidth is allocated after the requirements of higher-priority services have been met.

● Best Effort Service (BE): Designed to support services without any minimum level of guarantees, such as web browsing.

● Extended Real-Time Variable-Rate Service (ERT-VR): A combination of UGS and rtPS. Unsolicited periodic grants of bandwidth are provided, but with flexibility in terms of dynamic data rates. The service is designed to support applications such as VoIP with silence suppression. ERT-VR is a new feature supported in mobile WiMAX only.
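The five classes can be summarized in code. The sketch below shows how a scheduler might tabulate them; the field names and structure are illustrative, not taken from the IEEE 802.16e specification:

```python
# Illustrative tabulation of the five 802.16e scheduling service classes.
from dataclasses import dataclass

@dataclass
class ServiceClass:
    name: str
    unsolicited_grants: bool  # bandwidth granted without per-packet requests
    polled: bool              # station is polled for bandwidth requests
    example: str

SERVICE_CLASSES = {
    "UGS":    ServiceClass("Unsolicited Grant Service", True, False,
                           "T1/E1 circuit emulation"),
    "rtPS":   ServiceClass("Real-Time Polling Service", False, True,
                           "video streaming"),
    "nrtPS":  ServiceClass("Non-Real-Time Polling Service", False, True,
                           "FTP"),
    "BE":     ServiceClass("Best Effort", False, False,
                           "web browsing"),
    "ERT-VR": ServiceClass("Extended Real-Time Variable Rate", True, False,
                           "VoIP with silence suppression"),
}

print(sorted(SERVICE_CLASSES))
```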

12.1.7 Spectrum Bands Used in WiMAX

Mobile WiMAX is based on the use of scalable OFDMA and can operate in different frequency bands (e.g., 2.3, 2.5, 3.3, or 5.8 GHz) and with system bandwidths from 1.25 MHz to 20 MHz. The number of subcarriers increases with the available bandwidth; hence there is no degradation owing to multipath interference, even as the bandwidth gets scaled up

Figure 12.7: Release 1 certification profiles in mobile WiMAX.



to 20 MHz. However, the WiMAX Forum has selected certain frequency bands and profiles in which certification of equipment for interoperability is carried out. For mobile WiMAX, the WiMAX Forum has indicated that the Release 1 certification profiles will encompass the frequency bands of 2.3, 2.5, 3.3, and 3.5 GHz and bandwidths of 5, 7, 8.75, and 10 MHz. The Forum has also specified parameters for each of these profiles, including the FFT size, the number of data carriers, and the guard interval.

12.1.8 Coverage Area Using Mobile WiMAX

Path loss

The most important factors that determine the coverage area include the operating frequency, the path loss, and the power allowed to be transmitted (for regulatory reasons). Path losses in WiMAX transmission arise from three basic factors:

● The free-space path loss (FSL): The FSL is given by the following equation:

FSL = 10 log10 (4πDF/C)²

where D = distance from the transmitter, F = frequency, and C = the speed of light. The path loss increases with the square of the frequency; hence the path loss at 2 GHz (the IMT-2000 frequency band) is about 12 dB higher than in the UHF band at 0.5 GHz.
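The equation and the 12 dB comparison can be checked numerically (a sketch; the distances and frequencies are the ones discussed above):

```python
import math

C = 3e8  # speed of light, m/s

def fsl_db(distance_m, freq_hz):
    """Free-space path loss: 10 log10((4*pi*D*F/C)^2) = 20 log10(4*pi*D*F/C)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Loss difference between 2 GHz and 0.5 GHz at the same distance:
delta = fsl_db(1000, 2e9) - fsl_db(1000, 0.5e9)
print(round(delta, 1))  # 12.0 dB, as noted above
```

The difference is 20 log10(2/0.5) = 20 log10(4) ≈ 12 dB, independent of the distance chosen.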

Figure 12.8: Free space loss at various operating frequencies.



The increase in FSL with the square of the frequency requires smaller cell sizes at higher frequencies in order to maintain link margins. Also, the higher frequencies, such as 5.8 GHz and above, are best suited to line-of-sight environments: in NLOS conditions, a link at 5.8 GHz would support customer premises equipment only at distances of less than a kilometer.

● Loss due to non-line-of-sight (NLOS) operation: As WiMAX systems in urban environments operate in a non-line-of-sight manner, there is a loss in received signal that depends on the reflected signal strengths. In general, lower frequencies, such as 800–2000 MHz, perform better under NLOS conditions than the higher bands. The signal strength in most NLOS conditions varies sharply with the location of the receiver, owing to the reception of waves reflected from many objects. Ground propagation models, rather than free-space loss, are required for path loss analysis; hence an additional margin needs to be allowed for the expected loss. Multiple-antenna techniques with spatial diversity are used in WiMAX to improve margins in NLOS conditions.

● Loss due to in-building penetration: In-building losses can range from 2 dB (for a room with windows) to 6 dB (for a brick wall). The losses can go up to the 10–12 dB range for indoor areas with metal enclosures.

In order to understand the mobile WiMAX transmission environment, consider an example of a 10 MHz bandwidth (1024 FFT size) transmission from a WiMAX base station with 10 watts power output (40 dBm), using dual 15 dBi antennas, each fed with 40 dBm (58 dBm EIRP). For a given receiver (a PCMCIA WiMAX card), the maximum allowable path loss is given in Table 12.2. Note that the maximum path loss allowed using 64 QAM is 123.4 dB, which is 20 dB lower than that using QPSK modulation.

Table 12.2: Link Analysis in a Mobile WiMAX Environment (Base Station to Mobile).

Parameter                        QPSK        16 QAM      64 QAM
Receiver Sensitivity             -100 dBm    -90 dBm     -80 dBm
Total Margin to be Maintained    19.56 dB    19.56 dB    19.56 dB
Receive Antenna Gain             2 dBi       2 dBi       2 dBi
RX Diversity Gain                3 dB        3 dB        3 dB
Signal Required at Receiver      -85.4 dBm   -75.4 dBm   -65.4 dBm
Transmitted Power (EIRP)         58 dBm      58 dBm      58 dBm
Maximum Path Loss                143.4 dB    133.4 dB    123.4 dB
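The maximum path loss rows of Table 12.2 follow from a simple link budget: the EIRP minus the signal required at the receiver. A sketch reproducing the table from its inputs:

```python
# Link budget for Table 12.2 (inputs taken from the table).
def max_path_loss(sensitivity_dbm, margin_db, rx_ant_gain_dbi,
                  rx_diversity_db, tx_eirp_dbm):
    # Signal required at the receiver: sensitivity plus margin,
    # less the gains available on the receive side.
    required_dbm = (sensitivity_dbm + margin_db
                    - rx_ant_gain_dbi - rx_diversity_db)
    return tx_eirp_dbm - required_dbm

for mod, sens in [("QPSK", -100), ("16 QAM", -90), ("64 QAM", -80)]:
    mpl = max_path_loss(sens, 19.56, 2, 3, 58)
    print(mod, round(mpl, 1))  # 143.4 / 133.4 / 123.4 dB
```

Each 10 dB step in receiver sensitivity translates directly into a 10 dB step in allowable path loss, since all other terms are identical across the three columns.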



Figure 12.9: Link analysis in a mobile WiMAX network.

12.1.9 Advanced Antenna Systems and MIMO in WiMAX

One of the strong features of mobile WiMAX is its support of smart antenna technologies and multiple antennas, which helps increase the throughput attainable in a given transmission environment. Advanced antenna systems are able to form a beam in the direction of the receiver and thus provide a high C/N. Multiple antennas can also be used (e.g., two transmit and two receive, called 2×2 MIMO). Based on prevailing transmission conditions, these can provide data rates close to double those possible with a single antenna.

12.1.10 Frequency Planning in WiMAX Systems

WiMAX is also characterized by a very high degree of resilience to cochannel interference, i.e., the capability of decoding signals in the presence of signals at the same frequency from adjacent cells or base stations. Mobile WiMAX has been designed with the capability of being used with a frequency reuse factor of one, i.e., all the adjacent cells in a cluster can use the same frequency. In order to reduce interference at cell edges and to increase the capacity of WiMAX networks, sectorization is commonly used.

12.1.11 Summary: Features of Mobile WiMAX

Mobile WiMAX provides robust transmission in NLOS and mobile environments at speeds of up to 125 km/h. Its key features can be summarized as follows:

Figure 12.10: Multiple-input multiple-output antennas (2×2 MIMO).

Figure 12.11: Frequency allocation of adjacent cells in WiMAX with frequency reuse of 1.

● Mobile WiMAX is very flexible in allocating resources for both uplink and downlink traffic. It provides QoS-guaranteed classes where quality is maintained by controlling access to the wireless medium. Scheduling is available on a per-frame basis for both uplink and downlink, using MAP messages at the beginning of every frame; hence the system is ideally suited to rapidly changing transmission conditions.

● Mobile WiMAX provides very robust channel coding for forward error correction. Convolutional Coding (CC) and Convolutional Turbo Coding (CTC) provide high resilience against errors.

● The transmission can be tailored to the channel conditions by using the schemes of Partial Usage of Subchannels (PUSC) and Full Usage of Subchannels (FUSC). The subcarriers can be allotted to a subchannel either adjacent in frequency or by random allocation.

● The use of smart antenna techniques (space-time coding, beam-forming) can provide diversity reception or an increase in peak data rates. The use of 4×4 MIMO is built into the mobile WiMAX standards.

● Mobile WiMAX assigns only a few subcarriers to each mobile station for transmission, which significantly reduces peak power requirements.

● Handover in mobile WiMAX can be very flexible, with each mobile station being in contact with an active set of base stations. In both "fast base station switching" and "macro-diversity handover," the handover happens without loss of data or latency. At the same time, mobile stations can go into sleep mode and need not be active for every ranging message.

12.2 Why Is Mobile WiMAX Suited for Mobile TV?

Mobile TV services require a sustained bit rate (e.g., 256 Kbps) to be delivered over wireless media in a mobile and highly variable environment. Owing to the lack of guaranteed throughput and latency, wireless or 3G video has traditionally been characterized by jerky video, low-rate or dropped frames, and frequent buffering. WiMAX now presents a new window of opportunity to deliver video to fixed, nomadic, or mobile devices by virtue of its QoS and "service flow"-based connections. WiMAX can be used either as a wireless extension to an IPTV network or natively, using streaming protocols, to deliver mobile TV. For delivery of mobile TV, it has the following advantages:

● High achievable bit rates (3–10 Mbps or more, based on configuration)
● Metro-wide or rural connectivity
● High resilience to multipath propagation through OFDM
● Guaranteed QoS and a service class for video
● Universal availability of interoperable client devices
● Mobility up to 125 km/h with IEEE 802.16e
● Minimum or no maintenance on access links
● Compatibility with IP-based protocols and IPv6
● Capability of WiMAX to connect Wi-Fi hotspots
● Two-way interactive communications
● Compatibility and roaming with 3G mobile networks

Figure 12.12: WiMAX systems for carriage of unicast or multicast video.

WiMAX networks have been designed to provide broadband wireless access with QoS parameters that ensure assured service flows for each connection. WiMAX also has a service class, the Real-Time Polling Service (rtPS), which is designed to cater to variable-rate data packets, such as those generated in MPEG, with the low latencies needed to meet video transmission requirements. WiMAX networks—by providing a very efficient multicarrier OFDM-based physical layer that overcomes intersymbol interference, modulation adapted to transmission conditions, frequency-selective assignment of subcarriers to overcome interference, HARQ, beam-forming, and efficient error correction mechanisms—present a combination of technologies that delivers sustained high bit rates. This, combined with the ability of the MAC layer to establish service classes and ensure service flows within each class, completes the environment needed for the delivery of high-bit-rate, low-latency services such as audio and video. WiMAX thus overcomes the vagaries of the wireless medium associated with non-line-of-sight transmission: sharp changes in signal levels and interference.



Finally, in addition to guaranteed service flows, mobile WiMAX brings in the multicast and broadcast services (MBS) wherein video can be efficiently multicast over a wireless medium, thus using only a fraction of the capacity available in a cell for delivering video services.

12.3 WiMAX-Based Mobile TV Basics

The architecture of a network with a WiMAX delivery option is given in Figure 12.13.

Figure 12.13: WiMAX-based TV delivery.

TV over WiMAX is a natural application of the WiMAX network and is in principle similar to IPTV. As in IPTV, a limited set of channels is encoded and streamed live from a streaming server. The content would also be protected via a digital rights management (DRM) system, and the transmission may additionally be protected by an encryption system. Some of the unique features of WiMAX TV are as follows:

● Targeted national, regional, and local advertising
● Support of full-motion video (e.g., 50 fps/60 fps) with full-screen resolution (VGA and higher, including HD)
● Better presentation of live content, EPG, and on-demand features with mobile clients or via desktops and STBs
● Better integration with home networks today served by ADSL-based IPTV
● Higher quality of channels due to better channel speeds and QoS support
● Quick introduction of new services and applications due to the open-standards environment
● Higher interactivity with customers, including uploading of their pictures or videos and provision of user-generated content
● Availability of a larger range of channels as well as on-demand content, with search, indexing, and a mix of multicast and unicast deliveries
● Enabling of Wi-Fi hotspots with TV services

12.3.1 How to Set Up a WiMAX TV Service

A TV service running over a WiMAX network includes two components:

A Set of Channels Being Multicast: Multicast routing is set up (IGMPv3) in all the WiMAX cell areas, and the same channels are then available in multicast mode, irrespective of the cell in which a user happens to be. Users who are authorized to receive the content can view the list of channels being multicast and join any channel they wish. In a mobile WiMAX network, a user can move to another cell area and continue on the multicast channel available in that area.

Unicast Content: A second set of channels may be delivered to specific users as unicast content. In this case, the WiMAX network will continue to direct the video traffic to the cell in which the user is present, through its mobility features, without the streaming application being aware of the user's location. In addition, the user application (such as RealPlayer) may select the stream size that best fits the transmission conditions.

A typical configuration for a TV-over-WiMAX operator would consist of a setup like the one shown in Figure 12.14. The setup consists of essentially three parts. The first is a digital headend, where the satellite channels are received and encoded by MPEG-4, Windows Media, Real, or QuickTime encoders and combined into an IP stream. At this stage, the TV channels, which may aggregate to a total of 6 to 8 Mbps, are ready to be delivered to customers via any medium. The second part is the WiMAX TV solution: the compressed audio and video data is streamed using live streamers and encrypted or subjected to DRM so that only authorized customers can view the content. The third part is the WiMAX network itself, where the media streams are delivered for transmission. The media streams may also include RSS streams.
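The IGMPv3 join described above is what a client device performs when the user "joins a channel." A minimal sketch in Python; the group address, port, and function names are illustrative only:

```python
# Sketch of joining/leaving a multicast TV channel from a client device.
# The group address and port below are hypothetical examples.
import socket
import struct

def join_channel(group: str, port: int) -> socket.socket:
    """Open a UDP socket and issue an IGMP join for a multicast channel."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # The kernel sends an IGMP membership report for this group:
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def leave_channel(sock: socket.socket, group: str) -> None:
    """Issue an IGMP leave (the 'channel change' away from this group)."""
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    sock.close()

# Usage (hypothetical channel address):
#   sock = join_channel("239.1.1.1", 5004)
#   ... read RTP/UDP packets of the channel from sock ...
#   leave_channel(sock, "239.1.1.1")
```

Because every viewer in the cell receives the same multicast stream, channel changes reduce to these join/leave operations, which is why channel surfing over multicast is fast.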
The WiMAX TV may be received using WiMAX customer premises equipment (CPE) in the case of fixed WiMAX (IEEE 802.16-2004), or it may be delivered directly to mobile devices that have built-in WiMAX

362

Chapter 12

Figure 12.14: WiMAX TV solution.

chipsets for reception (IEEE 802.16e-2005). The devices must contain a mobile client for DRM and decryption capabilities.

NDS WiMAX TV solution

Figure 12.15 shows the typical configuration of a WiMAX TV solution that involves NDS (NDS WiMAX TV).

UDcast WiMAX TV solution: MXtv

In March 2008, NextWave unveiled its WiMAX TV platform, called MXtv. The platform is based on the use of the multicast and broadcast feature of mobile WiMAX (MBS); it can handle different types of multimedia content, including video in QVGA or CIF resolution, and can support 45 multicast channels in 10 MHz of WiMAX bandwidth.

Helix WiMAX TV solution

An example of a platform implementation for TV over WiMAX is given in Figure 12.16. This implementation uses the Helix Producer, the Helix Media Delivery System, and the Helix Mobile Gateway. The implementation has many commonalities with the streaming of TV



Figure 12.15: NDS WiMAX TV solution.

on mobile networks, where the file formats and protocols need to be in line with 3GPP recommendations. However, WiMAX networks require additional functionality relating to multicasting and QoS features. The online TV implementation has the following components:

● Helix Producer: A multiplatform product supporting multiple codecs and capable of handling live and on-demand media in different formats. It generates streams fully compliant with Real or 3GPP2 formats, with support for H.264. Multiple stream outputs with various resolutions and bit rates are available.

● Helix Media Delivery System: The overall framework for providing a suite of streaming and on-demand services to fixed and mobile networks. The server includes functions for network management using SNMP; content management; authentication, authorization, and accounting (AAA); and billing and customer care.

● Helix Mobile Gateway: Designed to interface with a range of mobile and wireless networks to provide the network interface for streaming applications. It supports native RTP/RTSP streaming (RTSP over TCP [RFC 2326], SDP for the RTSP DESCRIBE method [RFC 2327]), as well as MP3 and 3GP files. The gateway can handle the protocols of various networks for 3GPP (3G-GSM) and 3GPP2 (CDMA2000 and 1xEV-DO).
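The RTSP interaction such a gateway supports begins with a DESCRIBE request (RFC 2326), which returns the SDP description of the stream. A minimal sketch of how a client would form the request; the URL is hypothetical:

```python
# Sketch of an RTSP DESCRIBE request as sent by a streaming client.
# The server URL below is an illustrative placeholder.
def rtsp_describe(url: str, cseq: int = 1) -> str:
    return (f"DESCRIBE {url} RTSP/1.0\r\n"
            f"CSeq: {cseq}\r\n"
            "Accept: application/sdp\r\n"
            "\r\n")

req = rtsp_describe("rtsp://example.net/mobiletv/channel1")
print(req.splitlines()[0])  # DESCRIBE rtsp://example.net/mobiletv/channel1 RTSP/1.0
```

The server's response carries an SDP body (RFC 2327) describing the audio and video tracks, after which the client issues SETUP and PLAY requests to start the RTP flows.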



Figure 12.16: Helix implementation of online TV.

12.4 WiMAX Devices and Handsets

The availability of receiving devices for WiMAX has depended on three major factors:

● Release of certification profiles by the WiMAX Forum
● Availability of chipsets based on the profiles released
● WiMAX certification after interoperability testing in "plugfests" and WiMAX-authorized labs

A wide range of mobile devices is today available for mobile WiMAX. These range from USB modems to PC cards and mobile handsets with WiMAX built in.

12.4.1 LG WiMAX KC1 Smartphone

LG has released a smartphone that is compatible with the Korean implementation of the IEEE 802.16e mobile WiMAX network. The phone (WiBro KC1) is a multifunction device



that supports a DMB receiver for mobile TV in addition to the WiMAX functionalities. The phone is based on Windows Mobile 5 and has a 2 MP camera and a 2.4-inch VGA touchscreen.

Figure 12.17: LG WiBro KC1 smartphone.

12.4.2 Samsung SPH-M8100 WiMAX Handset

Samsung has launched a series of handsets for the Korean WiBro market and has also demonstrated products for mobile WiMAX based on IEEE 802.16e. The SPH-M8100 is a PDA smartphone aimed at high connectivity and multimedia services. It comes with 1xEV-DO connectivity in addition to WiBro, which gives it the capability to operate in a WiMAX environment (where available) and to switch to EV-DO for 3G high-speed connectivity. It has a built-in T-DMB receiver for mobile TV and music on the go, as well as Bluetooth with A2DP for hands-free voice and music applications. Other features include a 2 MP camera and a VGA camera for video calling, an MMC card slot, and a TV-out port.

12.4.3 HTC MAX 4G

HTC has launched a handset (HTC T2890 MAX 4G) that is a mobile WiMAX/GSM dual-mode device. It is primarily designed for the Yota network in Russia. It features a 3.8-inch glass screen with WVGA resolution (800×480), 8 GB of memory, and built-in GPS.



Figure 12.18: Samsung SPH-M8100 WiBro phone. (Courtesy of 3g.co.uk and Samsung)

The handset uses Windows Mobile 6.1 Professional and is based on the Qualcomm ESM7206A™ 528 MHz processor.

12.4.4 Samsung SPH-9000 and SPH-9200 Ultra Mobile PCs (UMPC)

Samsung has unveiled a mobile WiMAX-compatible UMPC based on Intel's Rosedale WiMAX chip. The PC has 256 MB of RAM and a 30 GB hard disk drive and runs the Windows XP operating system. A successor device is the SPH-9200, which features 512 MB of RAM, a 30 GB HDD, and an 800×480 touchscreen. It also has a 1.3 MP camera. In terms of connectivity, it supports Wi-Fi, HSDPA, and mobile WiMAX (WiBro).

12.5 Examples of Mobile TV Services Based on WiMAX

12.5.1 Scartel Yota WiMAX TV Services

Scartel provides mobile WiMAX services in Moscow under the Yota brand. The Yota mobile WiMAX network operates in the 2.5–2.7 GHz range. Using mobile WiMAX, data rates of up to 10 Mbps can be provided to handsets and other receiving devices such as UMPCs. One of the services available on the network is Yota TV. The Yota mobile TV service currently carries



Figure 12.19: Samsung mobile WiMAX UMPC SPH-9200. (Courtesy of Samsung)

17 channels of TV and music, which are available for viewing free of charge in the initial period (Vesti 24, RBC TV, Music Box Ru, Bridge TV, My, A-One, 7-TV, Universal-Fashion TV, Mezzo, Luxe TV, Mother & Child, Bibigon, World Fashion Channel, DW-TV, France24, First Gaming, MGM, ShansonTV, Mir, PRO Dengi, NCTV, and RTVi). The Yota network also provides an on-demand video service (Yota Video), which has a large catalog of movies and other online content. A TV-out connector permits viewing TV and video on a large screen. Channel surfing is fast, as the mobile TV channels are multicast. A music service (Yota Music) is also available, offering a selection from hundreds of thousands of titles free of charge. The services can be received using the HTC MAX 4G handset. Mobile TV services from Yota were nominated for the "Best Mobile TV Award" for 2009 by the GSM Association. In addition, the network supports a host of other services, including a social networking service (yap-yap™) and VoIP. All calls within the Yota network are automatically routed as VoIP calls over the mobile WiMAX network. The service also offers a collection of electronic books, which can be downloaded for reading.

12.5.2 Korea Telecom WiBro Multicasting Channel Service

Korea is an appropriate example of multicasting channel services, as entertainment is the major application on Korea's broadband networks (more than 60% of usage).



Figure 12.20: Yota® Mobile TV service using Mobile WiMAX and HTC MAX 4G handset.

WiBro services have been available in South Korea since April 2006; KT and SKT are the WiBro operators. WiBro services are IEEE 802.16e-compliant and use a bandwidth of 8.75 MHz in the 2.3 GHz band. KT provides multicasting channel services based on the solution provided by Thin Multimedia Inc., operating on a Samsung platform. The KT WiBro services comprise a live video multicast service. Real-time transcoding and broadcast of IPTV content is handled by thinIPTV™, custom software for H.264 and WMV9 coding and transcoding of content. All IPTV multimedia formats and codecs are supported in the application. One of the convergence platforms used by KT WiBro is Intromobile's convergence solution, IntroPAD™. It is designed to conceptualize and provide innovative applications using Mobile 2.0 and fixed-network convergence. The service components include a customized UI, content push, and its promotion in real time. Another platform is NetMirror™, designed for use over next-generation high-speed networks such as HSPA, mobile WiMAX, or WiBro. It is a "next-generation" personal media blog and personal broadcasting service platform (unicast or multicast), based on Web 2.0, for interactive user-created content (UCC) services in the mobile and fixed-line environments.

Mobile TV Using WiMAX


Users have access to a menu with options such as live broadcast, video UCC, Hot UCC, My Album, and others.

12.5.3 Clearwire XOHM® (CLEAR®)

Clearwire has built a mobile WiMAX network and has been launching WiMAX services progressively in the United States after its trials and commercial launches in Portland, Oregon; Atlanta, Georgia; Las Vegas, Nevada; and Baltimore, Maryland. In September 2009, 10 more markets were added. The XOHM service, available under the name CLEAR, is in addition to Clearwire’s pre-WiMAX network, which it operates in 50 markets. XOHM is an all-IP network with strategic investment from Comcast, Time Warner Cable, Google, Sprint, and Intel. A number of WiMAX devices are available for the CLEAR network; an example is the ZTE TU25 WiMAX modem. OQO has released a UMPC (a Windows Vista device) with a XOHM WiMAX receiver embedded in it. A variety of Dell, Samsung, Fujitsu, Toshiba, and other laptops now come with a WiMAX/Wi-Fi option.

Before We Close: Some FAQs

1. Are there any areas where WiMAX has a distinct advantage over 3G or HSPA/EV-DO networks?

Mobile WiMAX is much more efficient at delivering high-speed data with guaranteed QoS. Although HSPA networks have increased speeds with 16 QAM and 64 QAM, WiMAX offers adaptive modulation, an increase in subchannels to sustain QoS, large carrier bandwidths (e.g., 20 MHz), and flexibility in the uplink/downlink resources used, thanks to TDD-based transmission. It also provides unhindered access to the Internet, as opposed to the “walled-garden” approach of 3G networks.

2. Is the MBS service for multicasting in WiMAX the same as MBMS? If not, what are the differences?

MBS is very similar to MBMS, in the sense that both are multicast services. WiMAX uses an OFDM physical interface, whereas MBMS is based on CDMA. Beyond this, MBMS is a 3GPP service that is defined end-to-end, including encoding types and higher-layer protocols such as the use of FLUTE. The MBMS service requires clients to be installed in phones to negotiate the 3GPP-MBMS protocols. MBS, however, is more efficient, as the full carrier need not be devoted to MBS; frames can even be only partially used for MBS. Using MBS is therefore much more practical and resource-efficient.

3. How can WiMAX be integrated in a 3G mobile device?

Cellular mobile networks use 3GPP protocols. WiMAX can be integrated as an “external network” using 3GPP-IMS. Such a network can be used for VoIP calls or streaming services, amongst others, without using 3G resources.


4. What is the reason for the relatively slow adoption of mobile WiMAX?

Mobile WiMAX is relatively recent; it was only in 2008 that WiMAX Forum–certified devices became available. WiMAX requires the building of a new network. It also requires mobile WiMAX spectrum allocation, which has been a slow process in many countries. The lack of definition of end-to-end applications (such as mobile VoIP) has also slowed adoption of the new technology.

5. Is it possible to add a WiMAX receiver to a mobile device?

WiMAX USB adapters and PCMCIA cards are available, and it is possible to add these to compatible devices such as UMPCs, Pocket PCs, and other devices running Windows XP or Windows Vista.

CHAPTER 13

Spectrum for Mobile TV Services

All progress is precarious, and the solution of one problem brings us face to face with another problem.
Martin Luther King Jr.

13.1 Introduction

The wireless industry convention CTIA Wireless is always highly anticipated for potential insight into the new wireless devices and technologies that will be unleashed during the ensuing year. However, CTIA Wireless 2009 in San Diego surprised all when industry leaders sought an unprecedented 800 MHz of additional wireless spectrum. FCC Chairman Julius Genachowski was forced to make a reference to a petition on the looming crisis of lack of “licensed spectrum” and to resolve to provide more “oxygen” to the industry. So is the world faced with the threat of a Malthusian nightmare of a growing population of wireless devices and insufficient spectrum? Perhaps not yet, but the issues need to be addressed with the highest priority.

All wireless technologies depend on the use of spectrum to deliver content to the intended users. The use of spectrum has a long history, dating back to the first use of radio waves for wireless communications and broadcasting. The delivery of mobile TV over the airwaves requires the transmission of QCIF or QVGA content coded using the H.263, MPEG-4, WMV, or H.264/AVC standards, and thus a data stream whose bit rate can vary from 64 Kbps to 384 Kbps depending on the exact technology used and the resolution selected. Operators of services based on different technologies, such as 3G, Digital Audio Broadcasting (DAB), wireless networks (wireless LANs), and terrestrial digital TV (DVB-T or ATSC), have all adopted varying approaches to find and deploy spectrum quickly.

The approach for allocation of spectrum is now globally harmonized through the ITU. This is done at the World Radiocommunication Conferences (WRC; called WARC prior to 1993) through a consultative process that allocates globally harmonized bands for various services, while leaving country-specific allocations to the governments. In addition to the WRC, there are Regional Radiocommunication Conferences (RRCs), which focus on the allocation of spectrum on a regional basis. The specific allocations vary from country to country, with the underlying principles of optimizing the utilization of this resource, noninterference with other users, and the development of new services. There is also a need to coordinate the use of spectrum internationally.

The allocation of spectrum goes hand in hand with the technical specifications for the services and their intended usage. The challenge of spectrum allocation lies in the need to cater to a range of continuously evolving technologies: mobile phones, 3G, WCDMA, mobile broadcasting, WiMAX, wireless LANs, digital TV, and others. Moreover, the evolution of technologies continues to bring forth new requirements on the use of spectrum that need to be coordinated and allocated. Following ITU-based recommendations for internationally coordinated frequencies makes it possible to use services uniformly in all countries. The use of the GSM spectrum in the 900 MHz and 1800 MHz bands is an example of such coordinated allocation, which makes worldwide roaming possible. There have been exceptions to global allocations for historical reasons, such as in the United States, where GSM networks operate in the 1900 MHz band. The WRC meets periodically, with the gap between meetings traditionally due to the consultative process involved. WRC 2007 made significant decisions on the identification of frequency bands that can be used for 3G-LTE.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00013-8

13.2 An Overview of Spectrum Bands

It is interesting to take an overview of commonly used services and their associated spectrum bands in the major regions of the world. One might be tempted to take a simplistic view of the spectrum allocation process and consider only the band specific to the service being planned, such as mobile TV, while ignoring the spectrum allocations for other services such as 3G. However, remember that most mobile devices are likely to be multimode devices, with Wi-Fi, WiMAX, and 3G cellular networks available on the same device. These need to be equipped with sets of antennas that can work in different bands. Chipsets also need to handle carriers in multiple spectrum bands. Moreover, services such as multimedia broadcasting (i.e., mobile TV) are likely to be received over one of many available media (3G, Wi-Fi, WiMAX) or broadcast technologies such as ATSC, DVB-H, or MediaFLO. Hence an integrated view of frequency allocations is important. Figure 13.1 shows these allocations.

13.2.1 Spectrum for 2G Services

Internationally (with country-specific exceptions), the bands that have been recognized for 2G/2.5G mobile services are given in Table 13.1.

Figure 13.1: A macro view of frequency allocations.

Table 13.1: International Allocations for 2/2.5G Mobile Services (ITU-R M.1073-1).

800 MHz band: 824–849 MHz paired with 869–894 MHz (CDMA-based mobile services)
900 MHz band: 890–915 MHz paired with 935–960 MHz (GSM band); 880–890 MHz paired with 925–935 MHz (E-GSM band)
1800 MHz band: 1710–1785 MHz paired with 1805–1880 MHz (GSM band)
1900 MHz band: 1850–1910 MHz paired with 1930–1990 MHz (part of IMT-2000, but also used for American PCS and other systems)

13.2.2 Spectrum for 3G Services: IMT2000

The spectrum for multimedia services under IMT2000 was finalized by WARC 1992 and WRC 2000. WARC 1992 allocated the frequency bands 1885–2025 MHz and 2110–2200 MHz for IMT2000. WRC 2000 subsequently identified additional bands where, based on country-specific policies, the IMT2000 spectrum could be provided (Table 13.2).

Table 13.2: IMT2000 Frequency Bands as Ratified by ITU (M.1036).

WARC 1992: 1885–2025 MHz and 2110–2200 MHz
WRC 2000: 806–960 MHz, 1710–1885 MHz, and 2500–2690 MHz

The IMT2000 spectrum allocations were not made for any specific technology. Instead, IMT2000 envisaged the use of five types of air interfaces, any of which could use the spectrum for providing IMT2000 services. Subsequently, WRC 2007 added a sixth interface, the OFDMA-TDD interface used in mobile WiMAX. The air interfaces took into account the following technologies:

● 3G GSM–evolved networks using the UMTS technology
● 3G CDMA–evolved networks using CDMA2000 and other evolved technologies
● TDMA-evolved networks (UWC-136), primarily for the U.S. TDMA networks
● Digital cordless networks (DECT or CorDECT)
● OFDMA-TDD, used in mobile WiMAX based on TDD (added by WRC 2007)

Figure 13.2: IMT2000 terrestrial interfaces.

ITU also recommended paired frequency arrangements for specific services so that the IMT2000 use can be globally harmonized.

13.2.3 Spectrum for Mobile Broadcast Services

The legacy broadcast services (terrestrial analog and digital broadcasts) operate in the VHF and UHF bands III, IV, and V (174–854 MHz).

Figure 13.3: Mobile TV spectral bands.

Digitalization of TV services, which is a major activity in most countries, is placing considerable pressure on the VHF and UHF terrestrial frequencies due to the requirements of dual analog–digital transmissions. With the phasing out of analog transmissions in the United States in June 2009, UHF spectrum has become free (digital dividend), and deployment for new services is underway. The MediaFLO network in the United States, which was restrained in expansion to all markets due to spectrum limitations, has now expanded nationwide.

13.2.4 Background of Spectrum Requirements for Mobile TV Services

Allocation of spectrum for mobile broadcasting in the UHF/VHF bands is considered a priority for many reasons. First, the 3G services themselves had been planned for use of the Internet and multimedia on a modest scale. 3G, as originally envisioned, provided data rates of up to 2 Mbps, which could be achieved in a stationary environment. The widespread use of mobile TV in unicast mode has thrown up new requirements for additional resources and spectrum. Operators have now moved to data-only carriers such as EV-DO and HSDPA, accentuating the need for further spectrum allocation.

Terrestrial broadcasters, on the other hand, have tried to provide mobile TV transmissions in the UHF or VHF bands, i.e., the very same bands as used for TV broadcasting. The DVB-H technologies are a manifestation of this type of demand, where the spectrum earmarked for broadcast is now being allocated on a country-by-country basis for such services. This has become a limiting factor in the growth of mobile TV, as additional UHF spectrum is hard to come by. In the United States, the ATSC has now standardized the ATSC Mobile DTV standard for mobile TV, which uses the same spectrum as DTV transmissions. Other operators have relied on preallocated spectrum; e.g., the DMB services have used the allocations for DAB, the existing Digital Audio Broadcast service. The FLO technologies in the United States use the 700 MHz spectrum owned by Qualcomm Inc. as a result of winning previous FCC auctions, although the technology itself is not limited to this frequency or band.

It had never been anticipated that the potential growth of mobile TV using either the IMT2000 or the terrestrial broadcast networks would reach the dimensions now being envisioned. In fact, even the growth of mobile networks themselves has exceeded all expectations. According to a Congressional Research Service (CRS) document prepared for the U.S. Congress (July 2009), web browsing from mobile phones (such as the iPhone) was consuming 69% of network bandwidth, leading to a requirement for an additional 40–100 MHz, including technologies such as LTE.1 Digitalization of TV services and the need to simulcast the analog transmissions has led to increased pressure on the spectrum. This has held up the allocation of significant portions of the spectrum for mobile TV services in Europe.

13.2.5 Which Bands Are Most Suited for Mobile TV?

Mobile TV is commonly delivered on handsets that are designed to operate in the 800, 1800, and 1900 MHz bands and have built-in antennas for such reception. The use of other bands typically requires additional antennas suited to the frequency. Lower frequency bands require larger antennas for effective reception. At the same time, higher frequency bands are characterized by higher Doppler frequency shifts (proportional to the frequency) and higher losses (proportional to the square of the frequency). For the mobile environment, which is characterized by handsets moving at high speeds, the Doppler shift in frequency can be significant. The Doppler shift is given by the following formula:

Ds = (V * F / C) * cos(A)

where Ds = Doppler shift, V = velocity of the user, F = frequency, C = speed of light, and A = angle between the incoming signal and the direction of motion. The loss is given by the equation:

L = 10 log((4 π D F / C)^2)

where D = distance from the transmitter, F = frequency, and C = speed of light.

1 Spectrum Policy in the Age of Broadband: Issues for Congress. Linda K. Moore, Specialist in Telecommunications Policy, July 13, 2009. [http://ipmall.info/hosted_resources/crs/R40674_090713.pdf]
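As a quick numerical check, the two formulas above can be evaluated directly (the helper names, speed, frequency, and distance below are illustrative, not values from the text):

```python
import math

C = 3.0e8  # speed of light (m/s)

def doppler_shift_hz(v_kmh, f_hz, angle_deg=0.0):
    """Doppler shift Ds = (V * F / C) * cos(A)."""
    v = v_kmh / 3.6  # km/h -> m/s
    return (v * f_hz / C) * math.cos(math.radians(angle_deg))

def path_loss_db(d_m, f_hz):
    """Free-space path loss L = 10 * log10((4 * pi * D * F / C)**2)."""
    return 10 * math.log10((4 * math.pi * d_m * f_hz / C) ** 2)

# A handset moving at 120 km/h toward a 700 MHz UHF transmitter:
print(round(doppler_shift_hz(120, 700e6), 1))  # -> 77.8 (Hz)

# Loss penalty of 2 GHz (IMT2000 band) vs. 0.5 GHz (UHF) at the same distance:
print(round(path_loss_db(1000, 2.0e9) - path_loss_db(1000, 0.5e9), 1))  # -> 12.0 (dB)
```

The 12 dB difference between 2 GHz and 0.5 GHz matches the figure quoted in the path-loss discussion that follows.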

Figure 13.4: Doppler shift and system limits.

Path loss

The second factor of importance is the operating frequency and the associated path loss. Path losses increase with the square of the frequency; hence the path loss at 2 GHz (the IMT2000 frequency band) is about 12 dB higher than in the UHF band at 0.5 GHz. This is somewhat compensated by the antenna size needed for the lower-frequency bands. The third factor is the in-building penetration loss, which also increases with frequency. Mobile TV–type applications also require considerable bandwidth, which can be as high as 8 MHz for a DVB-H transmission. This has an impact on the transmitted power, which must be increased with frequency. The following are the characteristics of the frequency bands when viewed from the perspective of their use for mobile TV.


VHF band

The VHF band is used for T-DMB services in Korea. In this band, wavelengths are large, and hence antennas tend to be of larger size (e.g., 50 cm) unless gain is to be compromised. However, the propagation loss is low, and Doppler shift effects are insignificant.

UHF band

The UHF band implies the use of frequencies from 470 to 862 MHz and includes two bands: UHF IV and V. The upper UHF band is well suited from the antenna-length standpoint, as mobile phones already support antennas for the GSM 800 band. However, country-specific spectrum occupation by GSM 900 services may mean that these bands are not available. The Doppler shifts in this band are low enough to permit mobile reception at speeds of 300–500 Km/hour.

L-band

Spectrum in the L-band has traditionally been used for mobile satellite communications; Inmarsat has been using the L-band for maritime and land-based mobile communications. The L-band allocable slots include the 1450–1500 MHz band and the 1900 MHz band. The propagation losses are very high in this band, as is the Doppler shift, limiting the receiver velocity to 150 Km/hour. The band is better suited for satellite-based delivery, as the terrestrial range is quite limited due to the higher losses at these frequencies.

S-band

The S-band is used for satellite-based DAB, STiMi (CMMB), DVB-SH, and DMB systems (e.g., S-DMB in the 2.5 GHz band). The signals are repeated by ground-based repeaters for delivery in cities and inside homes, where users may not have a direct view of the satellite. Owing to the high loss with distance, usage is primarily for satellite-delivered transmissions and short-distance land-based repeaters, such as within buildings and tunnels where satellite signals cannot reach.

13.2.6 WRC 07 Decisions

Before looking at the WRC 07 decisions, it is interesting to look at the digital dividend bands in different regions. These bands stand to be vacated as a result of the transition to digital TV:

United States: 698–806 MHz (DTV transition is complete)
Europe: 790–862 MHz
Japan: 710–770 MHz
Korea: 750–804 MHz

WRC 07 made some major decisions on the use of the UHF band. For Region 1 (Europe, the Middle East, and Africa), the band 790–862 MHz is agreed on as a “co-primary allocation for mobile services,” but effective 2015. For Region 2 (the Americas), co-primary allocation to mobile services of 698–806 MHz is immediately effective. For Region 3 (Asia Pacific), the allocation is 790–862 MHz, except for nine countries (Bangladesh, China, Korea (Rep. of), India, Japan, New Zealand, Papua New Guinea, the Philippines, and Singapore), for which the allocation is 698–790 MHz. China is expected to enable usage of the 790–862 MHz band only by the year 2015.

13.3 Mobile TV Spectrum

Mobile TV services can be provided by a wide range of technologies; the spectrum used is dependent on the technology employed. Technologies such as DVB-H are based on terrestrial transmission and can use the same spectrum as DVB-T. The same is the case for the ATSC Mobile DTV technology, derived from ATSC digital TV. IMT2000 (3G) uses spread-spectrum techniques (i.e., W-CDMA) and is based on the use of either the UMTS framework or the CDMA2000 framework. Broadly, the spectrum for mobile TV services falls into the following distinct areas based on the technology used:

● Broadcast terrestrial TV spectrum as used for DVB-H, ATSC Mobile DTV, FLO, and ISDB-T (UHF)
● Broadcast spectrum used for Digital Audio Broadcast services (DAB)
● Broadcast television VHF spectrum (used for T-DMB services)
● 3G cellular mobile spectrum
  1. UMTS
  2. CDMA2000 1x, CDMA2000 1x EV-DO, CDMA2000 3x
● Broadband wireless spectrum (WiMAX)

We now look briefly at the features related to the spectra for various broadcast technologies.

13.3.1 Broadcast Terrestrial Spectrum

The spectrum for TV broadcast has been assigned in the VHF (Bands 1, 2, and 3) and UHF (Bands 4 and 5) ranges. The bands lie in the following frequency ranges (some of these may be country-specific):

● VHF Band 1: 54–72 MHz
● VHF Band 2: 76–88 MHz
● VHF Band 3: 174–214 MHz
● UHF Band 4: 470–608 MHz
● UHF Band 5: 614–806 MHz


Figure 13.5: VHF and UHF band allocations.

The use is country-specific, with the band being divided into a number of channels with either 6 MHz spacing (NTSC) or 7–8 MHz spacing (PAL). The broadcast bands provide a total bandwidth of around 400 MHz, which provides around 67 channels at 6 MHz. At higher bandwidths, the number of channels is lower. The lower VHF bands are not suitable for mobile TV transmissions due to the large size of the antennas to be used and consequent impact on handsets. The higher UHF band (Band 5, 470–862 MHz) is better suited, thanks to its proximity to the cellular mobile bands and consequently its antenna compatibility.
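The channel-count arithmetic above can be reproduced from the band edges listed in Section 13.3.1. This is a rough sketch: real channel plans include country-specific gaps and offsets, so the totals are approximate.

```python
# Band edges (MHz) as listed in Section 13.3.1.
BANDS_MHZ = {
    "VHF 1": (54, 72),
    "VHF 2": (76, 88),
    "VHF 3": (174, 214),
    "UHF 4": (470, 608),
    "UHF 5": (614, 806),
}

def channel_count(spacing_mhz):
    """Total whole channels that fit across all bands at a given spacing."""
    return sum((hi - lo) // spacing_mhz for lo, hi in BANDS_MHZ.values())

print(channel_count(6))  # -> 66 at 6 MHz (NTSC) spacing, i.e., the "around 67" in the text
print(channel_count(8))  # -> 49 at 8 MHz (PAL) spacing: fewer channels at higher bandwidths
```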

Figure 13.6: Terrestrial DTV and DVB-H.


13.3.2 DVB-H Spectrum

DVB-H is designed to use the same spectrum as DVB-T. It can, however, also operate in other bands (e.g., the UHF or L-bands). The actual assignments are subject to country-specific licensing, as much of the spectrum is needed for the digitalization of TV transmissions from the existing analog systems. For digital TV, the ITU has issued its recommendations for 6, 7, or 8 MHz systems (ITU-R BT.798-1). The key advantage of DVB-H and ATSC Mobile DTV is the sharing of both the spectrum and the infrastructure of digital TV, which minimizes the additional costs of rolling out mobile TV services based on these standards. However, due to the transmission characteristics and the small antenna size, additional repeaters may still be required within the coverage area of existing transmitter networks. The parameters of DVB-H differ from those of DVB-T to obtain better propagation characteristics. The modulation scheme used is COFDM with the 4K carrier mode, and the achievable data rate is 5–11 Mbps depending on whether QPSK or 16 QAM modulation is used. This can support 50 or more audio and video services in one 8 MHz slot (25–384 Kbps per channel), as opposed to the five or six channels of 3–4 Mbps each in a DTT multiplex.
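The service-count arithmetic above can be sketched as follows. The function name and the per-service bit rates are illustrative, and a real multiplex loses some capacity to IP encapsulation and FEC overhead.

```python
def dvbh_service_count(mux_rate_kbps, per_service_kbps):
    """How many A/V services fit in one DVB-H multiplex at a uniform service bit rate."""
    return mux_rate_kbps // per_service_kbps

# One 8 MHz DVB-H slot carries roughly 5-11 Mbps depending on modulation:
print(dvbh_service_count(11_000, 200))  # -> 55 (16 QAM mux, 200 Kbps services)
print(dvbh_service_count(5_000, 384))   # -> 13 (QPSK mux, 384 Kbps services)
```

Either way, the slot carries far more services than the five or six 3–4 Mbps channels of a conventional DTT multiplex.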

Figure 13.7: DVB-H transmissions.

The implementation of DVB-H can be based on the use of the same DVB-T MPEG-2 multiplex by the DVB-H transmission streams, or on an independent DVB-H carrier. In the former case, the MCPC carrier being transmitted by an existing DVB-T carrier may undergo little change.

13.3.3 Spectrum for T-DMB Services

T-DMB services constitute the terrestrial broadcast of mobile TV, audio, and data channels, and are based on extensions to the DAB standards (Eureka-147) that provide additional Forward Error Correction (FEC). The physical layer is based on the DAB standards. One of the primary reasons T-DMB is considered attractive in many countries is that the services are designed to use the DAB spectrum, which has already been allocated. This implies that the contentious wait for spectrum in the 3G or UHF bands is not an immediate hindrance to moving ahead with the services. The DAB or Eureka-147 spectrum consists of 1.744 MHz slots in the L-band (1452–1492 MHz) or the VHF band (230–233 MHz) per international allocations (ITU-R BO.1114). (However, many of the spectrum allocations and channeling plans remain country-specific.) Most DAB receivers work in the VHF I, II, and III bands and the L-band.

Figure 13.8: DAB L-band allocations.

Commercial T-DMB services were launched in Korea in December 2005 for mobile TV and multimedia broadcast. The spectrum for the services has been allocated in VHF band III, which consists of VHF channels 7–13 in the frequency band 174–213 MHz. Initially, two channels (nominally of 6 MHz each) were used for T-DMB transmissions, with each being further subdivided into three carriers of 1.54 MHz each. Thus the two 6 MHz slots that have been made available can be used by six operators. T-DMB services also began in Germany in 2006, based on the technology developed for T-DMB in Korea.
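The operator arithmetic above can be expressed as a small sketch (the helper name is hypothetical):

```python
def tdmb_operator_capacity(tv_slots, carriers_per_slot=3):
    """Each nominal 6 MHz TV channel is subdivided into three 1.54 MHz T-DMB carriers,
    and each carrier can be assigned to one operator."""
    return tv_slots * carriers_per_slot

print(tdmb_operator_capacity(2))  # -> 6 operators from the two Korean VHF slots
```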

Figure 13.9: Spectrum for T-DMB (Korea).

The T-DMB operation in Germany was in the 1.4 GHz L-band, which is the satellite allocation for DAB services. It is noteworthy that Germany already uses channel 12 in VHF for countrywide DAB broadcasting.

13.3.4 Spectrum for Satellite-Based Multimedia Services (S-Band)

According to the ITU allocations in the S-band, some frequencies have been reserved for satellite-based DAB or multimedia transmissions. These bands include (RSAC Paper 5/2005):

● 2310–2360 MHz (United States, India, Mexico)
● 2535–2655 MHz (Korea, India, Japan, Pakistan, Thailand)

These bands are further subject to country-specific allocations. The satellite-based DMB systems (S-DMB systems) are designed to use these bands for delivery of multimedia services directly to handsets as well as through ground-based repeaters.

13.3.5 Spectrum for 3G Services

UMTS has been adopted as the 3G standard in Europe, with UMTS Terrestrial Radio Access (UTRA) as the access mode. Other countries, such as Japan and the United States, also follow the same standard in selected networks.


The spectrum for UMTS consists of 155 MHz of total spectrum, of which 120 MHz is paired spectrum (2 × 60 MHz) and 35 MHz is unpaired spectrum in the 2 GHz band. The paired spectrum is mandated for W-CDMA, and the unpaired spectrum is used by TD-CDMA. The following allocations were made by WRC 98:





The 1920–1980 MHz and 2110–2170 MHz bands are used as paired spectrum for uplink and downlink respectively for the use of UMTS (FDD, W-CDMA). These bands are for terrestrial use. The bands are 60 MHz each and can be subdivided into 5 MHz FDD carriers. The carriers can be allocated to one or more operators based on traffic requirement. The 1900–1920 and 2010–2025 MHz bands are for the use of terrestrial UMTS with TD-CDMA. The transmission in TD-CDMA is bidirectional and paired bands are not required. The 1980–2010 and 2170–2200 MHz bands are allocated for satellite-based UMTS using FDD-CDMA technology. The bands are paired and the transmissions in this band (from or to satellite) follow the same interface as for terrestrial transmissions (3GPP UTRA FDD-CDMA).

Figure 13.10: European frequency allocations—IMT2000.


The IMT2000 recommendations have been implemented in various countries and regions according to their selection of technologies, air interfaces, and the spectrum set aside. In the EU, for example, the spectrum band of 1900–2200 MHz has been earmarked for different technologies, both terrestrial and satellite-based (MSS). The UMTS extension bands approved by WRC 2000 became available in CEPT (Europe) in 2008, i.e., portions of 806–960 MHz, 1710–1785 MHz, and 2520–2690 MHz. China has opted for TD-SCDMA as the preferred option for 3G rollout in the 2300–2400 MHz band.

Process of allocation of spectrum

Different countries have followed different approaches to the allocation of spectrum, the most common method being auction. The alternative method is to allocate spectrum to service providers based on the license for providing such services. European countries went in for the auction of spectrum quite early, and the bids for the 3G slots were very high, placing considerable strain on the companies that had bid for the spectrum. In other countries, including India, the spectrum for 2G services is allocated based on subscriber base and a percentage of revenue share.

13.4 Country-Specific Allocation and Policies

In Europe, most countries adopted a uniform method for allocation of spectrum by auctioning the 60 MHz band divided into 5 MHz blocks. Twelve slots of 5 MHz each were thus available to the operators who won them in auctions.

Figure 13.11: UMTS spectrum allocation in Europe.


13.4.1 UMTS Allocation Summary for Europe

Unpaired spectrum: TD/CDMA, 1900–1920 and 2010–2025 MHz. Time Division Duplex (TDD) TD/CDMA, unpaired; channel spacing is 5 MHz (raster is 200 kHz).

Paired spectrum: W-CDMA, 1920–1980 and 2110–2170 MHz. Frequency Division Duplex (FDD, W-CDMA), paired uplink and downlink; channel spacing is 5 MHz (raster is 200 kHz).

An operator needs three to four channels (2 × 15 or 2 × 20 MHz) to be able to build a high-speed, high-capacity network.
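The carrier arithmetic above can be sketched as follows (the helper names are illustrative):

```python
FDD_CARRIER_MHZ = 5  # one UMTS FDD carrier

def paired_spectrum_mhz(carriers):
    """(uplink, downlink) MHz needed for a given number of paired 5 MHz carriers."""
    return (carriers * FDD_CARRIER_MHZ, carriers * FDD_CARRIER_MHZ)

# Three to four carriers per the text -> 2 x 15 or 2 x 20 MHz:
print(paired_spectrum_mhz(3))  # -> (15, 15)
print(paired_spectrum_mhz(4))  # -> (20, 20)

# Twelve 5 MHz slots fit in the 60 MHz paired band (1920-1980 / 2110-2170 MHz):
print((1980 - 1920) // FDD_CARRIER_MHZ)  # -> 12
```

The last line reproduces the twelve auctioned slots mentioned in Section 13.4.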

Figure 13.12: IMT2000 and country allocations.

13.4.2 Spectrum for TDtv Services

TDtv (Time Division Multiplexed Television) was created to use the unpaired part of the 3G spectrum reserved for use with TDMA technologies. TDtv is a broadcast technology based on the use of MBMS (3GPP Release 6 defines the MBMS services), which can be broadcast to an unlimited number of users. Operators 3 UK, Telefonica, and Vodafone have already announced the launch of a technical trial of TDtv. The trial is based on the use of MBMS and uses UMTS TD-CDMA (unpaired spectrum) as the air interface.

Satellite-based mobile services (MSS) have attracted scant interest after the commercial failure of low-earth-orbit (LEO) global mobile systems such as Iridium and ICO, but mobile multimedia satellites using the MSS bands are expected to be in service during 2010 for DVB-SH systems.


13.4.3 Spectrum Allocations in the United States

2G and 3G mobile spectrum

Spectrum allocations in the United States, seen from a historical perspective, can be categorized broadly into two bands, called the “cellular” band (850 MHz) and the “PCS” band (1850–1990 MHz). The growth of AMPS-based cellular mobile networks in the 1980s led to full utilization of the 850 MHz band, where the FCC had allocated as many as 862 frequencies of 30 kHz each. The spectrum for the AMPS services extended into the UHF band as well. The AMPS services evolved into D-AMPS in the same frequency bands. At the same time, the FCC also permitted new technology-based Personal Communication Services (PCS) in the 1850–1990 MHz band. The FCC gave the PCS operators freedom to choose their technology: CDMA, TDMA, or GSM. The PCS spectrum in the 1850–1990 MHz band (140 MHz total) was allocated in six blocks, termed A to F. As the operators had the freedom to select the technology, the cellular and PCS bands in the United States present a mixture of GSM, CDMA, 3G, and other technologies. As far as usage for IMT2000 is concerned, these frequency bands, along with the 800–850 MHz bands, stand fully utilized, which has made it difficult for the FCC to allocate frequencies for the IMT2000 technologies in the 2 GHz band per the harmonized global allocations (as is the case in Europe and elsewhere). This has led the FCC to explore alternative bands such as the AWS bands.

Table 13.3: Allocation of PCS Band in the United States (1850–1990 MHz).

PCS A: 30 MHz
PCS B: 30 MHz
PCS C: 30 MHz
PCS D: 10 MHz
PCS E: 10 MHz
PCS F: 10 MHz
SMR and unlicensed (1910–1930 MHz): 20 MHz
Total: 140 MHz

IMT2000 spectrum in the United States

The UMTS spectrum bands recommended by WRC 2000 and the ITU are already in use in the United States by existing cellular/PCS operators. Hence 3G services have been launched through sharing of the 1900 MHz band with the 2G services. Cingular (now AT&T Wireless) launched UMTS services in the 1900 MHz band in the United States. Operators having access to the 1900 MHz (PCS) and 850 MHz bands have started using these bands to provide 3GPP W-CDMA-based UMTS services. These include T-Mobile (1900 MHz band) and Cingular (1900 MHz and 850 MHz), amongst others. 3GPP2 operators with CDMA-based systems are also moving to 3G (CDMA2000 and EV-DO) in their existing spectrum. Sprint operates its CDMA network in the 1900 MHz band. Considerable convergence has taken place in the United States, with TDMA networks giving way to CDMA or 3G. The technologies predominantly in use now include GSM, CDMA2000, 3G (UMTS), EV-DO, and HSDPA.

Digital audio broadcasting in the United States

Digital Audio Broadcasting is possible via two technologies: S-DAB (satellite DAB) and T-DAB (terrestrial DAB). The spectrum for DAB has been allocated by the ITU (WARC 1992) in the L-band of 1452–1492 MHz (40 MHz) for international use. This is for both S-DAB and T-DAB services, to be used on a complementary basis. According to ITU Resolution 528, only the upper 25 MHz can be used for S-DAB services. Further, according to the 2002 Maastricht arrangement in Europe, the band 1452–1479.5 MHz is to be used for T-DAB and the balance of 1479.5–1492 MHz for S-DMB services. In the United States, the FCC has allocated the S-band (2320–2345 MHz) for satellite radio services (DARS), which were provided by Sirius and XM Radio. This allocation of 25 MHz permitted 12.5 MHz to be used by each operator. These operators subsequently merged, and the services are now known as Sirius XM Radio. The United States has also developed the IBOC (In Band On Channel) standard for Digital Audio Broadcasting, which has ITU approval for DAB services. It uses the existing FM band of 88–108 MHz, placing additional digital carriers in the sidebands of the existing FM carriers.
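As a quick sanity check, the PCS block sizes in Table 13.3 can be summed (illustrative snippet):

```python
# PCS block plan for 1850-1990 MHz, as given in Table 13.3 (sizes in MHz).
pcs_blocks_mhz = {
    "A": 30, "B": 30, "C": 30,
    "D": 10, "E": 10, "F": 10,
    "SMR/unlicensed (1910-1930 MHz)": 20,
}

print(sum(pcs_blocks_mhz.values()))  # -> 140, matching the 140 MHz total of the band
```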

13.4.4 UHF Spectrum Auctions 2008

Digital terrestrial TV in the United States occupies the band 512–698 MHz. The band 698–805 MHz was occupied by analog TV (NTSC) and became vacant in June 2009. This higher UHF band, which adjoins the 2G mobile frequencies, was set aside by the FCC for auction to new services. The segregation of spectrum used for terrestrial digital TV from that used by mobile services is quite important in order to avoid interference between cellular transmitters and mobile terrestrial TV transmissions. The FCC preauctioned the spectrum that would become free on completion of the digital transition in June 2009. This spectrum auction, termed Auction 73, took place in March 2008 and covered blocks of spectrum, labeled A through E, in different markets. Block D, which was paired with public safety spectrum, was not awarded, as the bids failed to meet its reserve price.


Figure 13.13: FCC 700 MHz auction of March 2008.

The spectrum at 700 MHz falls into the upper 700 MHz band and the lower 700 MHz band. The FCC made "open access" principles applicable for spectrum use in the upper 700 MHz band, implying that the licensees for this spectrum need to allow third-party devices and applications to use the network built on it. Contrary to expectations, AT&T and Verizon were the winners of the largest blocks in Auction 73. Verizon won 109 licenses nationwide across the A, B, and C blocks, giving it the most extensive footprint from the new auctions, while AT&T won 227 B-block licenses. Qualcomm won many markets in blocks B and E, which it has used to extend its MediaFLO network to additional markets. AT&T and Verizon have indicated that they intend to use this spectrum for LTE; Verizon's first LTE rollouts, in Boston and Seattle, had already taken place by August 2009.

13.4.5 AWS Band

In September 2006, the FCC auctioned the AWS spectrum in the 1.7 and 2.1 GHz bands. The auction covered a total of 90 MHz of paired spectrum. The AWS spectrum is thus organized


Figure 13.14: The UHF spectrum from 692–805 MHz, now held by mobile operators and mobile TV broadcasters.

into two blocks of 45 MHz each (1710–1755 MHz and 2110–2155 MHz). This 90 MHz of available spectrum was divided into six blocks for the purpose of auction. Three blocks, called A, B, and F, are paired blocks of 2 × 10 MHz each (one segment in the 1.7 GHz range and one in the 2.1 GHz range), and three blocks, called C, D, and E, are paired blocks of 2 × 5 MHz each. The AWS blocks straddle the PCS 1900 MHz blocks being used for cellular mobile services, with technology neutrality: the AWS band is permitted to be used for 3G services as well as wireless broadband, WiMAX, or mobile TV broadcast services. It is expected to be deployed for 3G, HSDPA, and EV-DO services, amongst other uses. Figure 13.15 also shows the relation of the AWS bands to the PCS bands in the United States and to the 3G and GSM bands in Europe. As is evident, the AWS bands are not directly compatible with those bands for 3G or other services. A total of 1100 licenses were issued, covering over 160 markets. The AWS markets were divided into cellular market areas (CMAs), economic areas (EAs, blocks B and C), and regional economic area groupings (REAGs, blocks D, E, and F). Key operators that received licenses for major areas were T-Mobile, Verizon Wireless, MetroPCS, and SpectrumCo.
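Reading the AWS blocks as paired allotments (A, B, and F at 2 × 10 MHz; C, D, and E at 2 × 5 MHz), the totals are consistent with the 90 MHz of paired spectrum auctioned. A quick cross-check:

```python
# AWS-1 block widths in MHz per direction (each block is paired:
# one carrier in 1710-1755 MHz, one in 2110-2155 MHz).
aws_blocks_mhz = {"A": 10, "B": 10, "C": 5, "D": 5, "E": 5, "F": 10}

total_paired = 2 * sum(aws_blocks_mhz.values())
assert total_paired == 90                   # the 90 MHz auctioned
assert (1755 - 1710) + (2155 - 2110) == 90  # the two 45 MHz ranges agree
```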


Figure 13.15: AWS spectrum bands in the United States.

13.5 Spectrum for MediaFLO Services

In the United States, MediaFLO is a wholly owned subsidiary of Qualcomm. Qualcomm owns six economic area grouping (EAG) spectrum licenses in the lower 700 MHz frequency band (716–722 MHz, U.S. TV channel 55), which together constitute a nationwide footprint. The spectrum thus involves the use of UHF channel 55, with 6 MHz capacity. The FLO™ network functions as a shared resource for Verizon and AT&T, enabling them to deliver mobile interactive multimedia to subscribers. This implies that the existing operators AT&T (providing AT&T Mobile TV) and Verizon (VCAST) are able to ride on the FLO network on a shared basis without having to acquire any further spectrum. Recently, FLO indicated its intention of distributing services of its own instead of through partners. MediaFLO in other countries can use UHF or L-band spectrum, depending on availability.
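The channel-55 figure follows from the standard U.S. 6 MHz channelization, in which UHF channel N starts at 470 + 6 × (N − 14) MHz. The helper below is illustrative, using the pre-transition numbering (channels 14–69):

```python
def uhf_channel_range_mhz(channel: int) -> tuple[int, int]:
    """Return the (low, high) edge frequencies in MHz of a U.S. UHF TV channel."""
    if not 14 <= channel <= 69:
        raise ValueError("UHF channels run from 14 to 69")
    low = 470 + 6 * (channel - 14)
    return low, low + 6

# Channel 55 is the MediaFLO allocation described above.
assert uhf_channel_range_mhz(55) == (716, 722)
```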

13.5.1 Korea

In Korea, the broadcasting system used is ATSC, with 6 MHz channel spacing in the UHF and VHF bands. For terrestrial mobile TV transmissions, the Korean government has


allocated channels 8 and 12 in the VHF band, corresponding to the frequencies 180–186 MHz and 204–210 MHz. The relatively low frequencies assigned allow larger areas of coverage and better mobility, but the mobile phones need a relatively large antenna. In the S-band, the S-DMB service of TU Media operates in the 2630–2655 MHz band. For mobile services, spectrum in Korea is allotted on a fixed-fee basis rather than through auctions. Apart from the internationally harmonized bands, Korea uses the 1700 MHz band for PCS in a manner somewhat akin to the 1900 MHz PCS band in the United States. The 1700 MHz PCS band is a paired band, with 1750–1780 MHz used from mobile to base station and 1840–1870 MHz used in the reverse direction. Following are the operators in Korea:

● 2100 MHz band: KTF and SKT
● 1700 MHz PCS band (Korean): LG and KTF
● 800 MHz: SKT
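The Korean VHF figures quoted above follow the same 6 MHz raster: high-band VHF channels 7–13 start at 174 MHz. A quick check (the helper is illustrative, not from the text):

```python
def vhf_high_channel_range_mhz(channel: int) -> tuple[int, int]:
    """(low, high) MHz edges for high-band VHF channels 7-13 on a 6 MHz raster."""
    if not 7 <= channel <= 13:
        raise ValueError("high-band VHF channels run from 7 to 13")
    low = 174 + 6 * (channel - 7)
    return low, low + 6

# The two Korean terrestrial mobile TV allocations mentioned above:
assert vhf_high_channel_range_mhz(8) == (180, 186)
assert vhf_high_channel_range_mhz(12) == (204, 210)
```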

Quick FAQs: Spectrum

1. In which band do the XOHM (CLEAR™) services operate?
   XOHM operates in the 2.5 GHz band (2.496–2.69 GHz).
2. Do ATSC Mobile DTV services have a specific UHF channel set for use, like MediaFLO?
   No, ATSC Mobile DTV will use the same UHF frequencies as the DTV stations. At present, this band is 512–698 MHz.
3. In which band are the LTE services of AT&T and Verizon expected to operate?
   The LTE services are expected to operate in different channels in 698–787 MHz, based on the operators' spectrum holdings. The WRC has allocated the band 698–806 MHz for Region 2 (the Americas).
4. What are the European allocations for LTE?
   WRC 2007 assigned 790–862 MHz for Region 1 (Europe) and Region 3 (Asia Pacific).
5. In which frequency bands do China's CMMB services operate?
   The terrestrial transmissions are in the UHF band, with different frequencies used for different cities/provincial regions. S-band broadcasts from satellite will be available only after launch of the satellite.
6. What are the DAB frequency allotments for China?
   DAB in China uses the VHF band of 207–216 MHz. There are six DAB channels in this band.
7. Which frequency bands have been allocated to MBMS services?
   MBMS services use the unpaired part of the IMT2000 spectrum: 1900–1920 and 2010–2025 MHz.
8. Is the AWS band being used to provide mobile services?
   Yes; T-Mobile, Verizon Wireless, and MetroPCS are some of the operators.


13.5.2 India

In India, digital terrestrial broadcasting has not yet been opened up to private operators, and the state-owned Doordarshan remains the sole terrestrial operator. All terrestrial transmissions are analog, with a few exceptions in the metro areas, where DVB-T transmissions have commenced as free-to-air services. Trials of DVB-H services using the DVB-T platform have been conducted in New Delhi and have proved successful. Spectrum for terrestrial broadcasting has been provided in both the VHF and the UHF bands, but only a fraction of the available capacity is used for terrestrial broadcast services. It is expected that DVB-H services will be launched on a commercial basis by the state-owned operator.

India had over 400 million mobile users by 3Q 2009, the majority of them on GSM networks. India uses the international GSM bands for its 2G networks. CDMA networks are also extensive in India, operated by Reliance Communications and Tata Teleservices in addition to the state-owned operator BSNL. The CDMA networks use the 800 MHz band. Table 13.4 gives the spectrum used for Indian mobile cellular services.

The bands for allocation of 3G spectrum have also been finalized by the TRAI, with auction as the mechanism for allocation. The 3G spectrum has been identified for both 3G-GSM services and 3G-CDMA services (such as EV-DO). The following are the highlights of the 3G spectrum allocation recommendations:

● 2 × 25 MHz of spectrum in the 2.1 GHz IMT2000 band to be allocated to five operators, each being given 2 × 5 MHz for 3G-GSM services.
● 2 × 10 MHz of spectrum in the 1900 MHz PCS band to be considered for allocation to 3G-CDMA operators.
● Capacity of the 800 MHz CDMA band to be increased by a new channelization plan, which permits accommodation of 2 × 2.5 MHz carriers for EV-DO services.
● Capacity in the 450 MHz band to be provided for CDMA and EV-DO operations (2 × 5 MHz).
● Other ITU-recommended bands for 3G to be considered for future allocation.
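Reading the first recommendation as paired spectrum (a 2 × 25 MHz total, with 2 × 5 MHz per operator), the arithmetic is consistent:

```python
# Five 3G-GSM operators, each allotted a paired 2 x 5 MHz carrier.
operators = 5
per_operator_mhz = 2 * 5

total_mhz = operators * per_operator_mhz
assert total_mhz == 2 * 25  # the 2 x 25 MHz identified in the 2.1 GHz band
```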

Table 13.4: Spectrum Allocations in India for Cellular Services.

Band        Uplink (MS to BS)    Downlink (BS to MS)    Technology
800 MHz     824–844 MHz          869–889 MHz            CDMA
900 MHz     890–915 MHz          935–960 MHz            GSM
1800 MHz    1710–1785 MHz        1805–1880 MHz          GSM
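Each row of Table 13.4 is a paired (FDD) band; the duplex offset (downlink start minus uplink start) comes out to the standard 45 MHz for the 800/900 MHz bands and 95 MHz for the 1800 MHz band:

```python
# (band, uplink start, downlink start) in MHz, from Table 13.4.
india_bands = [
    ("800 MHz", 824, 869),
    ("900 MHz", 890, 935),
    ("1800 MHz", 1710, 1805),
]

duplex_offsets = {name: dl - ul for name, ul, dl in india_bands}
assert duplex_offsets == {"800 MHz": 45, "900 MHz": 45, "1800 MHz": 95}
```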


13.6 Spectrum Allocation for Wireless Broadband Services

An important area emerging in the field of mobile multimedia delivery is that of technologies such as WiMAX. As mentioned earlier, WiMAX can be characterized as fixed WiMAX (IEEE 802.16d-2004) or mobile WiMAX (IEEE 802.16e). In Korea, the mobile WiMAX service (WiBro), based on IEEE 802.16e, operates in the 2.3 GHz band; Korea was an early pioneer in WiMAX, having allocated 100 MHz in the 2300–2400 MHz band for the WiBro services. Similarly, Sprint Nextel in the United States, in cooperation with Clearwire, launched its mobile WiMAX network (XOHM) using spectrum available to it through its earlier MMDS licenses and its PCS network across the country.

Due to the high potential of WiMAX for mobile multimedia delivery, the allocation of relevant spectrum is engaging the attention of all countries, and the process of allocation is continuing in various countries. A number of bands have been proposed for WiMAX by the WiMAX Forum, falling into the categories of fixed and mobile WiMAX services. In the United Kingdom, the radio agency awarded licenses in the 3.4 GHz band, and Ofcom is planning additional spectrum in the 4.2 GHz band. In the United States, the FCC has opened the 3650–3700 MHz band for unlicensed WiMAX coverage. In Singapore and Hong Kong, the 3.4 GHz band has been allocated for WiMAX services.

Table 13.5: WiMAX Spectrum Allocations.

Frequency band: 700–800 MHz
  Technology: Part of UHF band
  Remarks: Being considered by the United States for allocation to WiMAX

Frequency band: 2300–2400 MHz
  Technology: 802.16e TDD
  Channelization: 5, 8.75, or 10 MHz
  Remarks: Being used in the United States and Korea (WiBro) for wireless mobility services

Frequency band: 2469–2690 MHz
  Technology: 802.16e TDD (2535–2655 MHz allotted for satellite-based broadcasting; planned for extension of IMT2000 or WiMAX)
  Channelization: 5 and 10 MHz
  Remarks: Technology-neutral allocation by the United States, Canada, Australia, Brazil, Mexico, and elsewhere

Frequency band: 3300–3400 MHz
  Technology: 802.16e TDD
  Channelization: 5 and 7 MHz
  Remarks: Mobile WiMAX

Frequency band: 3400–3800 MHz
  Technology: 3400–3600 MHz for 802.16d (TDD or FDD); 3400–3800 MHz for 802.16e TDD
  Channelization: 3.5, 7, or 10 MHz
  Remarks: Satellite services need to be shifted from part of the band; strong support for use in WiMAX and 4G platforms

Frequency band: 5.15–5.35 and 5.725–5.85 GHz
  Remarks: Planned for unlicensed usage, including WiMAX


Figure 13.16: WiMAX frequency allocations.

13.6.1 2.3 GHz WCS Band (United States)

The wireless communications services (WCS) band is located at 2.3 GHz and offers a total bandwidth of 30 MHz. The band was created by the FCC in 1997 with two slots of 15 MHz each (2305–2320 and 2345–2360 MHz). These slots are adjacent to the DARS spectrum of 2320–2345 MHz.

Figure 13.17: WCS band (the United States).


The limited bandwidth available in the band, coupled with the high-powered DARS transmissions (land-based repeaters) in the adjacent bands, is a reason why these frequencies have not been well utilized for broadband wireless. In the recent past, some companies such as Meganet have commenced use of the 2.3 GHz spectrum.

13.6.2 2.5–2.7 GHz BRS Band (United States)

The 2.5–2.7 GHz band was earlier allotted to MMDS systems for the delivery of analog TV using point-to-multipoint transmissions. As these technologies were not very successful, most of this spectrum was sold to Sprint and BellSouth. The spectrum originally consisted of 33 channels of 6 MHz each, a channelization that reflected the need to carry NTSC analog signals. Of these, 16 channels were devoted to the Instructional Television Fixed Service (ITFS), mostly allotted for educational channels. Clearwire, a Craig McCaw company, leased a large part of the ITFS spectrum. In 2004, the channelization was changed to reflect the need to accommodate wireless broadband services, and the spectrum, which used to be called the MMDS band, was renamed the BRS band. The reorganization permitted eight blocks of 16.5 MHz each and seven blocks of 6 MHz each.

Figure 13.18: 2.5 GHz BRS band in the United States.

At the time of the AT&T–BellSouth merger in 2007, the FCC imposed a condition that the 2.5 GHz spectrum held by BellSouth be divested. This spectrum was subsequently taken over by Clearwire, which now holds most of the spectrum in this band, though some of it remains in use for MMDS or ITFS systems. It is therefore no surprise that this band, considered the prime band for mobile WiMAX, is being used by Sprint and Clearwire to launch nationwide mobile WiMAX services as CLEAR.


13.6.3 Will Mobile TV Be Spectrum-Constrained?

The projections for mobile subscriber numbers made by the ITU, as well as for the services that would be provided on these networks, fell far short of the actual growth. Growth in mobile customers has been almost exponential in many countries (e.g., India and China), placing constraints on the use of spectrum even for existing voice services. The 3G spectrum in the harmonized bands is also seen to fall short, both for distribution amongst the various operators and for its utilization for large-scale mobile TV services. The scenario is similar for broadcast-based technologies such as DVB-H, where the UHF spectrum in most countries is already used for terrestrial broadcast TV or other services. The WRC reviewed the allocations and made further recommendations for globally harmonized growth of 3G and broadcast-network-based services at its meeting in 2007. WRC 07 identified the 790–960 MHz band for IMT applications in Region 3, covering mainly Asia (except Japan), the Middle East, and Australia. The matters already set for examination include:

● Future development of IMT systems and systems beyond IMT2000
● The need for harmonized frequency bands below the presently prescribed bands, including those that are being used for various services
● UHF and VHF spectrum allocations by Regional Radio Conferences

Spectrum is certainly one of the foremost issues governing operators' plans, both in the selection of technology and in the rollout of mobile TV networks. Urgent deliberations are ongoing in various countries to find an early and harmonized allocation for the growth of mobile TV services.

Before We Close: Some FAQs

1. Why is the upper half of the UHF spectrum reserved for mobile services or open access?
   This is owing to its proximity to the 800 MHz cellular band, the use of common equipment, and lower interference from UHF into cellular transmissions.
2. Why have WiMAX systems been allocated the frequency bands of 2.3/2.5 GHz or higher?
   WiMAX systems have in many cases replaced the MMDS systems operating in these bands, and the focus earlier was also on line-of-sight links. Lately there has been a move to allot lower bands to WiMAX, such as part of the UHF band, subject to availability.
3. What is the spectrum allocated for DVB-SH services in Europe?
   The MSS spectrum is 2 × 30 MHz, in the bands of 1980–2010 MHz paired with 2170–2200 MHz. Solaris Mobile and Inmarsat have each been allocated 2 × 15 MHz by the EU in this band.


The 15 MHz available to each operator is further split into 3 × 5 MHz slots so as to provide nonoverlapping beams over adjacent countries.
4. When DVB-H is operated in a hierarchical mode, how much additional spectrum is required?
   No additional spectrum is required when DVB-H is operated in a hierarchical mode, as the DVB-T stream is moved to the low-priority stream of the hierarchical modulation scheme.
5. What are the open access provisions in the 700 MHz spectrum auction?
   The open access provisions in the 700 MHz auctions require that the 22 MHz of spectrum marked for open access be made available to other users through network access at wholesale rates. This implies that smaller companies that wish to launch applications using different software, technology, and phones to target users in the domain of the open access spectrum should be allowed to do so, even though the spectrum does not belong to them.
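Reading the MSS figures in FAQ item 3 as paired spectrum, the per-operator split works out as follows (a simple cross-check, not from the text):

```python
# European 2 GHz MSS band: 1980-2010 MHz paired with 2170-2200 MHz.
band_per_direction = 2010 - 1980           # 30 MHz each way
assert band_per_direction == 2200 - 2170   # the pairing is symmetric

operators = 2                              # Solaris Mobile and Inmarsat
per_operator = band_per_direction // operators
assert per_operator == 15                  # 2 x 15 MHz each

# Each operator's 15 MHz is further split into 3 slots of 5 MHz
# for nonoverlapping beams over adjacent countries.
assert sum([5, 5, 5]) == per_operator
```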

PART III

Multimedia Handsets and Related Technologies

If we could first know where we are, and whither we are tending, we could then better judge what to do, and how to do it.
Abraham Lincoln

CHAPTER 14

Chipsets for Mobile TV and Multimedia Applications

If the automobile had followed the same development cycle as a computer, a Rolls-Royce today would cost $100 and give a million miles per gallon.
Robert X. Cringely

The chipset industry has so far followed Moore's Law, which predicted that the component density and computing power of devices would double every 18 months. True to the prediction, the industry has kept pace. However, few, including Intel cofounder Gordon Moore himself, could have imagined the new challenges the industry would grapple with in the mobile world. It was no longer about processing power alone: power consumption and price also needed to come down for mobile TV to be practical. The chipset industry kept its date with time, and single chips are now available covering the full functionality of mobile TV. Many of these developments are needed to match the developments happening in the mobile TV and multimedia industry; without them, the industry cannot become a mass industry of hundreds of millions of users. Knowing that no single standard can become dominant in a particular country in the near future, the industry has scrambled to bring forth multiple-standard


functionality in its chipsets. These chipsets, available from vendors such as Philips, DiBcom, Freescale, Samsung, Texas Instruments (TI), Broadcom, and others, range from single-standard chipsets for mobile TV to multistandard and multiband chipsets that can support the DVB-H, ATSC Mobile DTV, FLO, or ISDB-T standards in the UHF or L-bands. Examples abound. Siena Telecom of Taiwan has a chipset (SMS1000) that supports DVB-T, DVB-H, and DAB-IP. In the United States, where the MediaFLO technology is a strong contender along with DVB-H, Qualcomm has already announced its universal broadcast modem, which supports the DVB-H, ISDB-T, and MediaFLO technologies.

The reception of mobile TV requires certain specific features to be added to a mobile device. At the risk of oversimplification, one could say that these include a tuner and a baseband processor to decode and display the mobile TV signals. Typically these functions are implemented in a single chip with associated software stacks, and the operators that launch mobile TV services have handsets qualified for mobile TV. This is one of the most challenging tasks in providing a mobile TV service. The handsets in use cover a wide range, and many of them, such as the iPhone 3G and the BlackBerry, have their own niche use and cannot be replaced just because an operator has launched a mobile TV service.
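The 18-month doubling cited at the start of the chapter compounds quickly; as a rough, purely illustrative calculation:

```python
def moore_growth(years: float, doubling_months: float = 18.0) -> float:
    """Growth factor implied by a fixed doubling period (Moore's Law form)."""
    return 2 ** (years * 12 / doubling_months)

assert moore_growth(1.5) == 2.0      # one doubling in 18 months
assert round(moore_growth(3)) == 4   # two doublings in three years
assert 100 < moore_growth(10) < 102  # roughly a hundredfold in a decade
```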

14.1 Introduction: Multimedia Mobile Phone Functionalities

A low-cost mobile phone with basic functionality can be made by using a single chip such as the TI "LoCosto" series, with which the phone can be retailed at sub-$20 levels. When we enter the domain of multimedia phones, the functions that need to be supported are much more diverse. The mobile devices are now expected to be able to receive streaming video and audio from the network, set up video calls, transfer messages with multimedia content, and in general handle rich media. A multimedia phone needs to support the following functions:

● Video and audio codecs conforming to JPEG, H.264, H.263, AAC, AMR, MP3, Windows Media, and other common standards
● Video and audio players
● Streaming
● Graphics and image display
● A camera or video camcorder, including JPEG/3GPP encoding
● Format conversion for video and audio
● Rendering of animation and graphics
● 3D rendering
● Network interfaces with high error resilience
● Multiple serial and wireless interfaces for Bluetooth, wireless LAN, and infrared


Figure 14.1: Handset functions in a multimedia environment.

14.2 Functional Requirements of Mobile TV Chipsets

So how does a phone design handle all these functions? It is evident that the following functional blocks are needed to complete the requirements:

1. A processor or microcontroller, usually called the host processor, for handling the keypad, displays, and wireless LAN, USB, and Bluetooth interfaces.
2. A communications engine to work on 3G GSM or CDMA networks in various bands. The communications engine consists of RF transceivers and the appropriate type of modem for the networks deployed, i.e., 3G, HSDPA, or CDMA2000 and EV-DO. In some cases, the phone is designed to work on multiple standards, such as 2.5G/3G GSM and CDMA2000 1x.
3. A broadcast mobile TV receiver, such as for ATSC Mobile DTV, FLO, DVB-H, T-DMB, S-DMB, or ISDB-T networks. In practice, a single system-on-chip (SoC) can handle the functions of tuner and mobile TV decoder, and such implementations are becoming common in the industry. The mobile TV functionality can thus be added through a single chip with appropriate software interfaces.
4. A multimedia engine to handle the encoding and decoding of audio and video, the rendering of graphics and animations, and camera-generated images and video.


The functions of the microcontroller and the communications transceiver are generally handled in one functional module called the baseband processor or host processor. This essentially means that a multimedia phone can be realized by the baseband processor, the mobile TV decoder, and the multimedia processor (or applications processor) together with some ancillary chips (e.g., for clock generation).

Figure 14.2: Mobile phone architecture.

Terms such as “baseband processor,” “applications processor,” and “multimedia processor” are in fact industry nomenclature for a group of functions that are performed by these chips. The functions performed may vary from one chipset manufacturer to another or between different technology implementations, such as CDMA-BREW and 3G-GSM.

14.2.1 What Are the Core Functions of a Multimedia Processor?

The following functions are commonly supported: MPEG-4 audio/video codec, JPEG image codec, 2D graphics engine, audio and video processing engines, display controller, flash memory controller, and CMOS/CCD sensor interface. Multimedia processors have an embedded ARM processor core to support these functions.


Quick Facts: An Example of a Multimedia Processor

An example of a multimedia application processor is the Telechips TCC7901, which is based on dual-core ARM926 CPUs operating at 488 MHz. The use of the media processor imparts powerful multimedia capabilities, as evidenced by its feature set:

● Multimedia capabilities: MPEG-2, MPEG-4, H.264, WMV9, DivX, Xvid
● Audio formats: MP3, WMA, ADPCM, Enhanced AAC, SBC
● VGA resolution capabilities (640 × 480) for H.264 and WMV9
● Embedded TV encoder and hardware video accelerator
● Support of multistandard mobile TV (T-DMB, ISDB-T, and DVB-T)
● Support of Wi-Fi, Bluetooth, and USB 2.0 high-speed device
● Operating systems supported: WinCE 5.0, Linux, Nucleus

The multimedia processor is designed for use in mobile TV, portable navigation devices (PND), and advanced multimedia phones.

14.2.2 Classification of Mobile Handsets

In the industry, it is also common to classify phones into categories such as basic phones, smartphones, and business phones (such as the BlackBerry or PDAs):

Basic phones: Perform the essential communication functions of voice calls and SMS and simple functions such as an address book.

Smartphones: Add a camera, GPRS/3G, MMS capabilities, 3GPP video streaming, and downloading services, with storage capabilities and ancillary features such as HSDPA, Bluetooth, USB, and animations.

Multimedia phones: The term "multimedia phone" is a new addition to the nomenclature, which became necessary to distinguish the new generation of phones with a focus on rich media clients and multimedia services, such as the Nokia X. The handset functions are covered in detail in Chapter 16. Multimedia phones require the most resources in terms of processors and memory. These phones may support very high-resolution cameras (e.g., 8 MP), advanced 3D games, image processing, and video storage and processing; have a memory of 2 GB or more; and handle multiple formats of audio and video. They need to render high-resolution video at 30 fps.

14.2.3 Multimedia Processors

In order to handle the multimedia-related functions that dominate the phone's operation, it is common practice to use a separate multimedia processor. The multimedia processor or


applications processor is designed to be an engine for handling the video, audio, and images by the phone. It needs to handle their encoding and decoding and the rendering of frames. Typically, a multimedia processor will handle all functions for 3GPP encoding and decoding for the phone’s communications functions and other formats such as MP3, AAC audio, and H.264 video for multimedia applications.

14.2.4 Realizing Mobile Phone Designs: Systems-on-Chip

It is evident that the functions to be performed by a device such as a multimedia processor or a baseband processor are application-specific and, moreover, closely related (examples are a tuner, A/D converter, and decoder). It is more efficient to realize such functions using a system-on-chip approach. An SoC is an application-specific integrated circuit in which often one "piece of silicon," or chip, can contain all the functions required of a mobile phone. The development costs of such chips may be higher, but in volume production there is considerable advantage in the simplicity of the end design. An example of an SoC is the baseband processor, which typically combines the functions of the controller (which controls the keyboard, screen, and I/O functions) and the digital signal processor (DSP) for voice coding (e.g., in AMR-NB). This leaves the multimedia functions of the phone to be handled by the multimedia processor, which may be another SoC.

Figure 14.3: System-on-Chip (SoC) architecture.


14.2.5 Mobile Phone Processors

The processors for mobile phones may be embedded as part of an SoC or can be independent devices. The common processors used include ARM (Advanced RISC Machine), Motorola DragonBall, MIPS, TI's OMAP series, Qualcomm's Snapdragon, and Intel XScale. The processors run at between 100 MHz and 1 GHz, depending on the phone type and applications supported. In general, advanced multimedia phones have higher-speed processors and more memory; the memory in smartphones varies from 16 to 64 MB. High-end mobile phones and MIDs use multiprocessor core devices such as the ARM Cortex™, which can contain up to four ARM processors. ARM is embedded as a 32-bit core processor in many multimedia devices and graphics processors. An example of an embedded ARM processor is the ARM Cortex-M0, which requires only 85 microwatts for its operation.

14.2.6 Secure Core Processors

As one would anticipate, there is also a need for secure core processors. These processors can be used in SIM cards, smartcards, or embedded processors in mobile phones, where sensitive data such as encryption keys are held and processed. An example of such a processor is the SC200, based on ARM9 and belonging to the SecureCore™ family from ARM. These processors follow a one-way development process and support a randomized design, with countermeasures that resist attempts to break the software through analysis of current flows and similar techniques. As examples, Toshiba and Samsung use ARM SecureCore processors in their smartcards and USIMs.

14.3 Chipsets and Reference Designs

The mobile industry is continuously driven by the need for high-quality handsets that support both 3G and multimedia applications. This is not surprising, as more than 300 million 3G phones were in use in June 2009. Handsets in which the circuitry is reduced to a single chip are natural candidates for such volume production. Typically a manufacturer provides full developer support with the chipset, including a reference design, to reduce development time and deliver a new product to the market. A typical reference design can include:

● A component list with board design and layout
● A validated wireless application suite
● Complete solutions for multimedia as well as personal information management applications, with application program interfaces (APIs)
● Operating system support
● Test kits and test environment tools


Chipsets, reference designs, APIs, and development environments form an "integrated product set" that goes into the design of new handsets. A classic chipset and reference design is the TI "Hollywood" chipset. The TI architecture is based on its OMAP platform. The single-chip mobile TV solution consists of an SoC with an OMAP processor (an applications or multimedia processor) and a digital TV baseband processor chip. The baseband processor chip is available for DVB-H (the DTV1000 chip) or ISDB-T (the DTV1001 chip).

Figure 14.4: The Texas Instrument “Hollywood” chipset for DVB-H.

Designed for broadcast digital TV in the DVB-H or ISDB-T formats, the chipset contains the tuner (UHF or L-band), OFDM demodulator, and DVB-H or ISDB-T decoder to generate the video. It is the industry's first implementation of tuner and demodulator in one silicon chip. The chip is designed using a low-voltage (1 V) RF CMOS process and a power-efficient design, requiring only 30 mW of power for a class B DVB-H terminal. Combined with OMAP application processors, the chipset can deliver 4–7 hours of TV viewing time, depending on display size and battery rating. (It should be recognized that the term "single-chip solution" denotes a broadcast TV reception solution; the handset would have additional chips, e.g., for being on a CDMA or GSM network.)
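The 4–7 hour figure can be reproduced with simple battery arithmetic. The battery capacity and the draw of the display plus application processor below are illustrative assumptions, not from the text; only the 30 mW receiver figure comes from the chipset description:

```python
def viewing_hours(battery_mah: float, battery_volts: float,
                  receiver_mw: float, rest_of_system_mw: float) -> float:
    """Estimated continuous viewing time in hours for a given power budget."""
    energy_mwh = battery_mah * battery_volts
    return energy_mwh / (receiver_mw + rest_of_system_mw)

# Hypothetical 1000 mAh, 3.7 V battery; DVB-H receiver draws 30 mW.
# Display plus video decode might draw roughly 500-900 mW in total.
low = viewing_hours(1000, 3.7, receiver_mw=30, rest_of_system_mw=900)
high = viewing_hours(1000, 3.7, receiver_mw=30, rest_of_system_mw=500)
assert 3.5 < low < 4.5 and 6.5 < high < 7.5  # roughly the quoted 4-7 hours
```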


The DTV1000 supports DVB-H operating in the 470–750 MHz (UHF) and 1.670–1.675 GHz (L-band) frequency ranges; the DTV1001 supports ISDB-T 1-Seg operating in the 470–770 MHz frequency range. Figure 14.5 shows a reference design implemented using a DTV100X chip (source: Texas Instruments). The chip has UHF and L-band (1.6 GHz) front ends, an analog-to-digital converter, OFDM demodulator, MPEG-2 TS demultiplexer, and MPE-FEC decapsulator (or link-layer buffer). The received DVB-H signals, for example, are TS-demultiplexed and the FEC is applied to recover the received data. The data is buffered and delivered as SDIO output to an applications processor for further handling.

Figure 14.5: Texas Instrument DTV100x chip reference architecture. (Courtesy of Texas Instruments)

The chip is powered by its own ARM966E processor core with an independent 128 MB of RAM and is not dependent on the cellphone's main processor, making the interfacing and software structure simple.

14.3.1 DVB-SH Chipsets
ICO's mobile interactive media (mim) TV service, which is based on DVB-SH, uses a chipset from DiBcom. This chipset enables the reception and decoding of the satellite video delivered from the G1 satellite or the ATC. The chipset also implements full-duplex communications via the satellite using GMR technology. Another example of a DVB-SH chipset from DiBcom is the DIB29098-SH, which supports DVB-H as well as DVB-SH with diversity reception.

14.4 Chipsets for ATSC Mobile DTV
The chipset for enabling handsets for mobile TV reception using ATSC Mobile DTV, the LG2160A, is now being mass-produced by LG for consumer use. It has been demonstrated in ATSC field trials in a variety of devices such as ATSC Mobile DTV dongles, MP3 players, DTV receivers, mobile phones, and automotive installations. The LG2160A is a baseband chip; the tuners and other front-end components are external, with the device receiving two IF inputs. It provides an SPI (Serial Peripheral Interface) output for the A/V transport stream. The LG2160A reflects the initial requirements of the market rather than the high integration and multiple functions that characterize a mature market.

Figure 14.6: LG ATSC Mobile DTV chip (LG2160A) architecture. (Courtesy of LG)


14.5 Chipsets for 3G Mobile TV
Chipsets for 3G technologies provide the basic functions of the RF transceiver, decoding of signals to baseband, and multimedia processing. There is no need for a broadcast receiver (such as for DVB-H, DMB, or ISDB-T), so such implementations make it possible to realize handsets using a single-chip SoC. 3G applications include video calls, videoconferencing, streaming video and audio, and interactive gaming. In addition, Internet applications such as browsing, file transfers, VoIP, and others need to be supported. Support for graphics applications based on SVG, J2ME, and 3D gaming applications is also required. However, as additional functions are incorporated to produce feature phones and smartphones, it is common to move toward a design with an independent application processor. A typical implementation of a 3G handset may use only one OMAP1510 processor. It has a dual-core architecture, which makes the chip suitable for the multitasking common in multimedia applications. The OMAP architecture figured in the launch of Japan's FOMA service, which used such processors extensively in the FOMA 900i series of phones.

Figure 14.7: Example of a 3G phone chipset.


There are many other implementations for mobile phones based on OMAP1510. China’s Datang, which has developed the technology for TD-SCDMA, has a phone design based on Linux and OMAP1510 for use in the Chinese 3G markets.

14.5.1 AVS Encoder Chipset
China uses the AVS-M standard for video and audio encoding. Chipsets such as the Spreadtrum® SC6100 are available for the audio and video encoding of content and its transmission via streaming media, broadcast TV, or IPTV.

14.5.2 CDMA2000 1x Chipset
CDMA2000 1x is a technology of Qualcomm Inc. An example of a CDMA2000 1x phone that can also support GSM/GPRS (and is therefore good for roaming) is one based on the Qualcomm communications processor MSM6300. It works for GSM, GPRS, and CDMA through Qualcomm's cdmaOne chipset (aptly termed the "Global Roaming Chip Set"), comprising the RFR6000 and the RTR6300. The RTR6300 is a dual-band GSM transceiver and dual-band CDMA2000 1x transmitter; the RFR6000 is a CDMA2000 1x and gpsOne receiver. Multimedia capabilities are provided by the Intel PXA270 applications processor. A typical implementation is the Samsung SCH-i819 mobile phone for the Chinese 3G network, which is based on a Linux platform. MSM6300 chips support Qualcomm's Binary Runtime Environment for Wireless (BREW) applications platform.

14.5.3 3G Chipsets for MBMS (Multimedia Broadcast/Multicast Service over 3G)
MBMS services unleash the power of multicast and broadcast technologies to deliver high-bandwidth services to millions of users simultaneously, a feature that is impossible in a unicast environment on 3G networks. One of the chipset solutions available for MBMS is the Mobile Station Modem (MSM7200) from Qualcomm. The chipset has support for BREW and Java and can be used for applications such as:

● QTV: video playback at full VGA resolution at 30 fps
● Qcamcorder: recording at 30 fps VGA resolution
● Q3Dimension: animation and 3D graphics at 4 million triangles per second
● GPSOne: position-location applications
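The advantage over unicast can be seen with a toy model: under unicast, the radio resource consumed grows linearly with the audience, while a broadcast bearer carries one copy regardless of audience size. The 256 kbps stream rate below is an assumed figure:

```python
# Toy model of why multicast/broadcast matters for mobile TV: unicast
# load grows linearly with the audience, a broadcast bearer does not.
# The 256 kbps per-user stream rate is an illustrative assumption.

STREAM_KBPS = 256                      # assumed mobile TV stream rate

def unicast_load_kbps(viewers):
    return viewers * STREAM_KBPS       # one stream per viewer

def broadcast_load_kbps(viewers):
    return STREAM_KBPS                 # one shared stream for everyone

for viewers in (10, 1_000, 100_000):
    print(f"{viewers:>7} viewers: unicast {unicast_load_kbps(viewers)} kbps, "
          f"broadcast {broadcast_load_kbps(viewers)} kbps")
```

At 100,000 viewers the unicast model would need 25.6 Gbps of aggregate radio capacity, which is why the text calls it "impossible" for mass audiences.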

The MSM7200 is dual-core-processor-based and supports HSDPA at rates up to 7.2 Mbps and HSUPA at rates up to 5.6 Mbps.


Figure 14.8: Example of a GSM and CDMA phone chipset.

14.5.4 3G Chipsets for 1xEV-DO Technologies
1xEV-DO services are being offered by a number of operators, including KDDI (Japan), SK Telecom (Korea), Vesper (Brazil), Telstra (Australia), and others. Qualcomm's MSM range of chipsets delivers the functionality to handle 3G CDMA-based networks, including 1xEV-DO. One of the latest releases of the series is the MSM6800 chipset, which provides advanced functionality. The chipset is used in conjunction with transceivers such as the RFR6500 and RFT6150 and the power management chip PM6650.

14.5.5 HSPA+ Chipsets
With operators moving to HSPA+ networks in view of the fully utilized capacities of their 3G and HSPA networks, handsets with HSPA+ chipsets are becoming increasingly important. The standard HSPA chipsets from Qualcomm include the transceiver QTR8610, amongst others. The new HSPA+ chipsets include the Mobile Station Modem (MSM) MSM8260™ (3GPP release 7 HSPA+), MSM8660™ (3GPP release 8 HSPA+), and MSM8270™ (3GPP release 8 dual-carrier 3GPP/3GPP2 HSPA+) solutions for mobile handsets. These chipsets have a 1.2 GHz Scorpion processor and a 600 MHz digital signal processor (DSP). Features that can be supported in handsets using them include 16 MP cameras, full 1080p HD video playback and recording, 24-bit WXGA (1280×800) displays, DTS/Dolby 5.1 surround sound, and GPS support.

14.5.6 Chipset for MediaFLO
MediaFLO is a technology of Qualcomm Inc. and uses the 700 MHz band for the FLO air interface. Qualcomm has the MSM6500 chipset targeted at powering the new generation of MediaFLO-capable 3G handsets.

Figure 14.9: Chipset for CDMA and MediaFLO™. (Courtesy of Qualcomm)

Quick Facts
Chipsets for Mobile TV: Some Examples

Universal Chipsets: DiBcom DIB10098-M and DIB10096-M (Octopus family); Qualcomm QSD8672 (Snapdragon family); Renesas SH-Mobile L3V multimedia processor
DVB-H Chipsets: DiBcom DB7070, DB9090; Samsung S5M8600 (tuner) and S3C4F10 (DVB-H/DVB-T channel decoder); Broadcom BCM2940
DMB Chipset: LG SoC 1.0 (T-DMB)
DVB-SH Chipset: DiBcom DIB29098-SH
ATSC Mobile DTV Chipset: LG2160A
ISDB-T Chipsets: Siano SMS1140; DiBcom DIB10098-M
Media Processors: AMD Imageon M180, 2282, and 2182; Renesas SH-MobileR2R; NEC MP211; Freescale MX27; NVIDIA NPX2500


14.6 Chipsets for DVB-H Technologies
DVB-H mobile TV consists of a UHF or L-band transmission together with COFDM modulation. The video signals are coded in MPEG-4/AVC. The chipsets for DVB-H mobile TV therefore need to contain the following elements:

● Antenna selection for the DVB-H (UHF, L-band) and GSM bands
● Tuner for the UHF and L-bands
● Demodulator for COFDM
● Decrypters for the encryption system used
● Decoders for video and audio

We have already seen an example of a DVB-H chipset: the TI Hollywood DTV1000 digital TV chip coupled with the OMAP 2420 processor to provide a complete multimedia phone solution. Other chipsets achieve the same functionality.

14.6.1 DIB7000-H Chipset
DiBcom has introduced a chipset for DVB-H that is based on the use of open standards. This chipset, the DIB7000-H, has found implementation in phones from Sagem, BenQ, and other manufacturers. DiBcom has a reference design for a multimedia phone using an

Figure 14.10: DiBcom chipsets, designed for a flexible mix of components.


NVIDIA GoForce graphics processing unit, which provides high-quality video encoding and decoding on the mobile phone, resulting in the delivery of high-quality graphics and pictures. The reference design is completed with the DIB7070-H mobile DVB-H/DVB-T integrated receiver. The receiver features VHF, UHF, and L-band operation, which permits its use in Europe, Asia, and the United States. The receiver tolerates a Doppler shift of up to 130 Hz, which means it can be used at moving speeds of up to 350 km/hour (in 4K mode) in the UHF band (750 MHz) and up to 208 km/hour in the L-band at 1.67 GHz.

DiBcom also has a new range of low-power DVB chipsets (DIB9090). These are available for automotive applications (with diversity antennas and tuners) and mobile phone use. Alternative implementations with the RF tuner from Freescale, the MS44CD02C, are possible. DVB-H chipsets reduce power consumption by using the time-slicing feature to activate the tuner only when bursts of the selected service are being transmitted. The DIB7000-H claims a consumption of only 20 mW, as opposed to 200 mW for DVB-T receivers. The DIB7000-H can be used with many operating systems, including Symbian, Linux, and Windows Mobile 5. The company has also ported the drivers of the chipset to the Windows CE operating system. Based on the Windows Mobile software, the chipsets have been integrated and are available in various handsets. DiBcom also has a reference design available based on this chipset.

The NVIDIA G5500 GPU is capable of encoding and decoding full-motion video at 30 fps (i.e., NTSC) at a resolution of 700×480. The chip has onboard codecs (encoders and decoders) for H.264, Windows Media 9, RealVideo, and JPEG. It also has a display controller for XGA (1024×768) and support for a 10 MP camera. The reference design with NVIDIA is available with Linux drivers.
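The power-saving and mobility figures above rest on two simple relations, sketched below with Python. The 10% duty cycle is an assumed round number chosen to match the quoted 200 mW → 20 mW reduction; and note that the speed limits quoted for a real receiver depend on the OFDM mode and implementation, so they do not follow from the raw Doppler formula alone:

```python
# Two quick calculations behind the DVB-H receiver figures.
# (1) Time slicing: the tuner wakes only for the bursts of the selected
#     service, so average front-end power is roughly duty cycle times
#     always-on power.  An assumed ~10% duty cycle turns a 200 mW
#     always-on receiver into a ~20 mW one.
# (2) Doppler: the classical shift f_d = v * f_c / c relates a Doppler
#     tolerance to a vehicle speed at a given carrier frequency.

C = 3.0e8  # speed of light, m/s

def avg_power_mw(on_power_mw, duty_cycle, sleep_power_mw=0.0):
    """Average front-end power under time slicing."""
    return on_power_mw * duty_cycle + sleep_power_mw * (1 - duty_cycle)

def max_speed_kmh(doppler_hz, carrier_hz):
    """Vehicle speed (km/h) producing the given Doppler shift."""
    return doppler_hz * C / carrier_hz * 3.6

print(avg_power_mw(200, 0.10))      # ~20 mW average with a 10% duty cycle
print(max_speed_kmh(130, 750e6))    # speed for a 130 Hz shift at UHF
print(max_speed_kmh(130, 1.67e9))   # same shift in the L-band
```

The second function also shows why L-band operation halves the usable speed compared with UHF for the same Doppler tolerance: the shift scales with carrier frequency.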

14.6.2 Samsung Chipset for DVB-H
Samsung has introduced a DVB-H chipset that has the distinction of being an SoC, integrating the S5M8600 tuner and the S3C4F10 DVB-H/DVB-T channel decoder. The integrated tuner features a Zero-IF architecture, which helps it reduce power consumption further. The chipset is designed for international markets, including the United States (1670–1675 MHz), Europe (UHF and L-band, 1452–1477 MHz), and Asia (UHF). Figure 14.11, showing DVB-H chipsets, depicts only the implementation of the mobile TV tuner, decoder, and baseband and multimedia processors. In addition, a chipset for mobile communications (i.e., for 3G-GSM or 3G-CDMA networks) is present, depending on the network where the handset is used. As an example, a DVB-H set used in Europe may have a 3GSM chipset in addition to the DTV and multimedia processor functions, whereas one used on a CDMA network in the United States may have the Qualcomm MSM and the RFT and RFR chipsets.


Figure 14.11: DVB-H chipsets.

14.7 Eureka 147 DAB Chipset
To begin looking at the DMB chipsets for mobile TV, you need to understand DAB reception, which uses the same physical layer as T-DMB. Eureka 147 DAB receivers need to receive the RF signal, down-convert it, demodulate it, and synchronize to the ensemble multiplex stream. The Fast Information Channel (FIC), which is a part of the multiplex, provides all information on the services carried in the ensemble. The receivers consist of a front end, which receives the RF from the antenna and down-converts it before passing it to a DSP chip that carries out the functions of synchronizing to the ensemble. A number of chips are available for this purpose, including the Philips DAB452 DAB receiver, the Texas Instruments DRE200 and DRE310, and the Chorus 2 FS1020. Complete integrated modules are also available, which have the front-end components as well as the DAB receiver chip, such as the RadioScape RSL300 DAB/FM receiver module, which integrates the Texas Instruments DRE310 DAB chip.

14.8 Chipsets for DMB Technologies
DMB mobile TV broadcasts use two technologies: T-DMB and S-DMB. In Korea, T-DMB has a bandwidth of 1.57 MHz; S-DMB transmissions have a bandwidth of 25 MHz in order to accommodate the very heavy FEC that is essential for low-strength satellite signals. LG has the distinction of having launched the first T-DMB handset, in September 2004, with a T-DMB SoC. The DMB SoC 1.0 featured the DMB receiver and A/V decoder. The following functionalities were included in the DMB SoC:

● OFDM decoder
● MPEG-2 transport stream demultiplexer
● Eureka 147 data decoder
● H.264 baseline profile 1.3 decoder (CIF 30 fps)
● Audio decoder for BSAC (MUSICAM), MP3, and AAC
● Mobile XD engine adaptation

The receiver part of the SoC provided the OFDM demodulator and RS decoder functions.

Figure 14.12: An example of a T-DMB chipset.

The mobile communications part of the phone (for the CDMA network in Korea) was achieved by using the Qualcomm chipset, including an MSM together with a cdmaOne transceiver set for CDMA2000 1x. LG T-DMB SoCs were used in the LG U100 mobile phone. Other chipsets support multimode mobile TV broadcast operation. An example is the Philips "TV on Mobile" solution, which includes a tuner, channel decoder, and MPEG decoder together with full software stacks for IPDC, DVB-T, or DVB-H. The software stack also supports DVB-H middleware, an electronic service guide, and PSI/SI tables. The solution consists of a TV-on-mobile chip (with tuner, channel decoder, and demodulator) and the Nexperia PNX4008 MPEG source decoder. The tuner has a power consumption of only 20 mW in DVB-H mode, which can provide handsets with seven hours of viewing time; the power consumption is 150 mW in DVB-T mode. The PNX4008 is an advanced multimedia processor. The chipset was designed to be used globally, as it supports multiple-band reception, including the L-band for DVB-H in the United States and Europe.

An example of a recent chipset for DMB/DAB-based services is the Kino-2, launched by Frontier Silicon. Kino-2 is a true multimode device with built-in flexibility through software customization, supporting all T-DMB variants as well as DAB and DAB-IP in a single chip. The flexibility offered by Kino-2 enables advanced features such as conditional access and data services to be implemented in software. When Kino-2 is combined with the multiband capability of the Apollo silicon tuner, handset vendors can produce mobile TV–enabled devices that are compatible with the requirements of all T-DMB markets worldwide. Kino-2 also provides full support for DAB digital radio. By integrating Kino-2 into a mobile phone, a manufacturer is able to offer consumers a product that can be used to enjoy high-quality TV and radio services.

14.8.1 Chipsets for S-DMB Services
Samsung was the first to introduce chipsets for S-DMB mobile TV services, in 2004, which led to their deployment at the Korean DMB launch in September 2005 in the form of the SCH-B100 handset. Subsequently, a series of phones was introduced: the SCH-B130, SCH-B200, SCH-B250, and SCH-B360.

14.8.2 Chipsets for GPS Services
GPS stands for Global Positioning System; the services are provided by a constellation of 24 satellites at an altitude of 20,183 km. GPS helps a user identify his or her location to within a few meters using the Standard Positioning Service. A Precision Positioning Service is also available for military use. GPS receivers, which work directly with the satellites, have a receiver module that can receive signals from up to 12 satellites. Many phones are designed to work with GPS data and present it in the form of map guides and position-location information, for which they need the requisite software (e.g., the HP iPAQ rx5900). Subscription to GPS services can be obtained through the mobile carrier for access to maps based on the location information provided by the GPS receiver in the cellphone. For example, in the United States, Sprint Nextel offers TeleNav and ViaMoto. Alternatively, software packages obtained by subscription can permit standalone GPS services to be used independently of the carrier. Enhanced 911 services in the United States require all new cell phones to have GPS position-location capabilities. The key requirements for GPS chipsets in mobile phones are low power consumption, quick-fix capability, and the ability to work with low signal levels due to the limitations of the antennas. GPS technology is available from Qualcomm (gpsOne) and has been used in a number of handsets from different manufacturers. One of the new chipsets available for GPS reception in cell phones is the SiRFstarIII from SiRF Technology, which has low power consumption and gives a quick fix on the position.

14.8.3 Chipsets for ISDB-T Services
It is estimated that over 60 million mobile handsets capable of receiving ISDB-T 1-Seg transmissions were in the field at the end of 2009. There are a number of 1-Seg and multistandard chipsets that can receive 1-Seg transmissions. A 1-Seg chip is available from the fabless vendor Newport Media: the NMI325, which has a power consumption as low as 60 mW. Multistandard chipsets include the Siano SMS1100 (1-Seg) and SMS1140 (DVB-T, ISDB-T, and T-DMB) and the DiBcom DIB9090 and DIB10098-M, amongst others. The Siano SMS1140 is an integrated tuner/demodulator, supports antenna diversity, and can be used in MRC diversity receivers. High integration on the chip leads to a single-chip implementation of mobile digital TV capability in the mobile phone. The tuner supports the VHF band III (170–240 MHz) and UHF (470–862 MHz) bands.

14.8.4 Chipsets for CMMB Services
CMMB-based handsets in use in China rely on the dual-reception capability of the handset, from a satellite (STiMi) or a terrestrial transmitter (UHF). The chips also support reception in the S-band. A CMMB chip is a key component of a CMMB mobile TV handset, and many types of handsets can be made using the highly integrated CMMB chipsets. There are a number of manufacturers of CMMB chipsets in China, including Siano, Iped, Innofidei, Rockchips, Telepath, and Spreadtrum, amongst others. An example of a CMMB chip from Siano is the CMMB receiver chip SMS1180. The SMS1180 integrates the tuner and amplifier and has host-processor software available that permits easy interfacing with most of the common host processors used in mobile phones. Figure 14.13 shows the architecture of a CMMB handset using the SMS1180. There is also a diversity receiver chipset, the SMS1185, for PND or automotive applications. Siano's chips are used by handset makers such as AIGO, CEC Telecom, Tianyu, and ZTE. The SMS1180 is compatible with the Chinese standard for CAS, the Secure Broadcast System (SBS). A later release from Siano is the SMS1186 SoC, which integrates the real-time descrambling of CMMB streams within the SoC, ensuring that the keys are not exposed. The chip thus receives an RF input from the antenna and provides a decrypted CMMB output stream, making it one of the most highly integrated SoCs of its kind. It also includes the "root of trust" hardware (UAM), meeting China Mobile's data security requirements. In addition, it has a sensitivity of −100 dBm, making it easy to integrate into USB devices, PC cards, and mobile handsets.

Figure 14.13: Architecture of a CMMB handset using the SMS1180 CMMB chip from Siano.
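Sensitivity figures such as −100 dBm are logarithmic (dB relative to 1 mW); a small helper converts them to absolute power to show how weak a signal the receiver must handle:

```python
# Receiver sensitivity is quoted in dBm (decibels relative to 1 mW).
# Converting -100 dBm to watts shows how little signal power the chip
# needs: power_W = 10^(dBm/10) mW / 1000.

def dbm_to_watts(dbm):
    return 10 ** (dbm / 10) / 1000.0   # dBm -> mW -> W

print(dbm_to_watts(-100))   # 1e-13 W, i.e. 0.1 picowatts
print(dbm_to_watts(0))      # 0 dBm is exactly 1 mW = 1e-3 W
```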

14.9 Industry Trends

14.9.1 Multimode Multifunction Devices
The clear new trend in the mobile industry is the convergence of cellular and broadcast functions in handsets. This implies that handsets are multinetwork (3G-GSM and 3G-CDMA) compatible and also capable of receiving multimode broadcasts in different frequency bands. Multinetwork functionality, together with the capability to receive mobile TV broadcasts in multiple standards, means that phone models need not be restricted to a particular country and can hence be mass-produced. This also permits global roaming across many networks. The new class of chipsets can also handle the larger screen resolutions that are common in new smartphones.


Universal tuners such as the Analog Devices ADMTV102 are now available that can handle reception of terrestrial transmissions such as DVB-H, DVB-T, DTMB, and CMMB.

Qualcomm's Snapdragon Processor
An example of a new multifunctional chipset from Qualcomm is the QSD8672. This chipset is designed for a new class of wirelessly connected mobile devices and includes multiformat mobile TV capabilities. The chipset is built around 3G-based high-speed data connectivity and has been used in a new mobile TV platform, "Snapdragon," which was the platform for the launch of Toshiba's TG01 smartphone. The chip features mobile TV using DVB-H, MediaFLO, or ISDB-T technologies. In 3G broadband, support is available for HSDPA, HSUPA, Wi-Fi, and GPS. It can handle display resolutions of up to 1440×900 (WSXGA). The dual-CPU processor has enough computing power to handle a 3D UI.

14.9.2 DiBcom Multistandard Chipsets: The Octopus™ Family
DiBcom has a range of chipsets that are multistandard as well as multifunctional, such as the DIB10098-M and DIB10096-M, which provide for multistandard mobile TV including ISDB-T. Most of the chipsets available today support a high degree of integration and provide output interfaces for mobile TV (MPEG-2 TS), SDIO, a camera, and so on. With "Octopus," DiBcom has also enabled multiple-standard programming for mobile TV with universal chipsets. These universal chipsets can be integrated with application processors and media processors from multiple vendors.

14.9.3 Single Chips for Multiple Applications
An example of such a processor is the SH-Mobile L3V multimedia processor from Renesas Technology. The multimedia processor chip comprises a CPU core and a video processing unit. The CPU delivers a processing performance of 389 MIPS at 216 MHz, i.e., 1.8 MIPS per megahertz of clock speed. The multimedia processor is designed to handle video rendering at full VGA resolution and frame rates of 30 fps. This makes its use possible in markets such as Japan, where phones come equipped with 3G, DVB-T, or ISDB-T (1-Seg broadcasting) as well as analog NTSC TV receiving capability. Phones such as the LG U900 were used by Hutchison Italia (3 Italia) for the FIFA World Cup 2006. The SH-Mobile L3V multimedia processor can interface with a 5 MP camera and provide display through a 24-bit LCD interface with 16 million colors. Its high processing power enables it to take one picture in just 0.02 sec. The unit is designed for reception of ISDB-T or T-DMB broadcasts, which means that it must process (encode and decode) H.264 or MPEG-4 in real time. It can process IP-based data and provide support to services such as video calls and video mail. The handling of analog or DVB-T broadcasts is a very processor-intensive task, due to the need to scale video from full-screen PAL/NTSC resolution to QVGA, to scale up the frame rate from 30 to 60 fps for a clear display, and to transcode between MPEG-2 and MPEG-4 or 3GPP. Multimedia processors now have sufficient processing power to accomplish these tasks.

Figure 14.14: The LG U900 DVB-H and 3G phone. (Courtesy of LG)
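Two of the figures above invite quick arithmetic: the MIPS-per-megahertz ratio, and the pixel rates involved in scaling a full-screen PAL source to a QVGA display refreshed at 60 fps:

```python
# Two sanity checks on the processing figures quoted above.
# (1) 389 MIPS at 216 MHz is indeed ~1.8 instructions per clock cycle.
# (2) Scaling workload: pixel rates for a full-screen PAL source versus
#     a QVGA display refreshed at 60 fps.

mips, clock_mhz = 389, 216
print(round(mips / clock_mhz, 1))            # ~1.8 MIPS per MHz

pal_pixels_per_s  = 720 * 576 * 25           # PAL: 720x576 at 25 fps
qvga_pixels_per_s = 320 * 240 * 60           # QVGA display at 60 fps
print(pal_pixels_per_s, qvga_pixels_per_s)
```

Every second, more than 10 million source pixels must be filtered down to about 4.6 million display pixels, on top of the MPEG-2 to MPEG-4/3GPP transcode, which is why the text calls this a processor-intensive task.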

14.9.4 Example of a Multifunctional Device: Mobile TV and Navigation
A typical portable navigation device with mobile TV and navigation (by far the most popular combination for outdoor and automobile use) can be implemented using the Telechips TCC7901. As shown in Figure 14.15, the device provides interface functions for TV display, SDIO, flash memory, an LCD (for navigation display), a GPS receiver (using a chipset such as the Telechips TCC3310), and USB devices. It supports multiple mobile TV standards, including CMMB, T-DMB, DVB-H, DVB-T, 1-Seg (ISDB-T), and digital audio broadcasting (DAB) digital radio.


14.10 Outlook for Advanced Chipsets
Chipsets are constantly being developed along with the advancement of broadcast and mobile network technologies. Newer chipsets need to be available as 3G networks evolve to HSDPA and beyond to LTE. Multimedia capabilities are also on the increase. Camera resolutions are going up, and the latest chipsets can support upward of 8-megapixel resolution. Display resolutions are also going up, to full VGA (640×480 pixels) instead of merely QVGA (320×240). New multimedia applications with new codec types are becoming common. New applications such as gaming require very high-speed animation capability, which was formerly available only in dedicated high-speed graphics processors. Chipsets are evolving to support such applications.

Figure 14.15: Typical implementation of a PND and a mobile TV device.

Before We Close: Some FAQs

1. What is the feature required in mobile TV chipsets for reception in high-speed vehicles? For high-speed reception, dual antennas with a diversity tuner are useful.

2. Are the mobile TV SoCs linked to the handset being a GSM-3G or CDMA handset? No. Mobile TV SoCs normally are not linked to the communication modems on the handset.


3. Why are processors such as the Atom® not used in smartphones? What is the speciality of ARM processors? The Atom's processing power is roughly equivalent to that of a Celeron (2004), and it requires 2 watts of power. That is fine for a notebook or UMPC but too high for a smartphone. In the mobile arena, even advanced processors based on ARM cores (e.g., Snapdragon) need less than 500 mW.

4. Chip manufacturers still see virtue in chips for analog TV reception on handsets. What are a few such chipsets? An example of a free-to-air multistandard analog mobile TV chipset is the Telegent Systems TLG1120. This chip enables reception of PAL, NTSC, or SECAM programs and provides Doppler compensation up to about 300 km/hour.

5. Can multimedia processors perform MPEG-4 encoding? Which chip in a mobile phone is used to generate a TV-out signal? Yes. MPEG-4 encoding is an important function of multimedia processors. They can also provide a PAL or NTSC TV-out signal. An example is the Broadcom BCM2724 multimedia processor.

6. What is the key requirement of a portable navigation device (PND) in terms of chipsets? A portable navigation device needs, at the basic level, a GPS chipset and the capability to display maps and video. A chipset suitable for such an application is the SiRFatlasIV, which bundles a GPS chipset with a multimedia processor.

7. What type of multimedia processor is suitable for a portable media player (PMP)? A multimedia processor for a PMP should preferably have the following capabilities:

● Audio formats supported: MP3, AAC and AAC+, WMA, RealAudio
● Video formats supported: H.264 and MPEG-4 (codec); MPEG-2, DivX, XviD, Windows Media Video, and RealVideo (decoder)
● TV-out: NTSC and PAL with CVBS or S-video, audio in/out, and TV line-in recording

An example of a multimedia processor meeting these specifications is the SSD 1933 from MagnusCore.


CHAPTER 15

Operating Systems and Software for Mobile TV and Multimedia Phones

Supporting Windows is like buying a puppy. The dog only cost $100, but we spent another $500 cleaning the carpet.
Marc Dodge, "Reality Check," Open Computing, December 1994
http://www.generalconcepts.com/jms/quotes.html

When MobiTV won the Global Mobile Awards 2009 for the Mobile TV category, the judges’ comments included: “A great service with a simple user interface available on an impressive range of handsets—this is likely to encourage greater uptake of mobile TV services.” This single sentence captures the ingredients for the success of mobile TV. But how can such a “simple user interface” be implemented on an “impressive range of handsets”? The solution, as it turns out, requires an unprecedented familiarity with the software structure on mobile devices.

15.1 Do I Need to Worry About the Software Structure on Mobile Phones?
Mobile phones have been with us for well over two decades. Do we need to look at the operating systems and software structure just because we are moving toward multimedia and mobile TV–type applications? What makes these phones different and their software structure important? We will attempt to piece together this information in this chapter. When a new operator enters a mobile TV (or multimedia service) market, the first questions that need to be answered are: How will the service be delivered on the various types of phones out in the field? Is it possible to upload my software (the mobile TV client) onto mobile phones so that it operates seamlessly? What types of media players are available on the phones that people use daily? How do I deliver my service with a variety of media players? Answering these questions requires looking into the architecture of the software that exists on mobile phones, as well as their operating environment (such as the mobile operator, which might create a walled garden restricting the kinds of software that can be downloaded and used on its network). These questions present interesting challenges for the operator.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00015-1



15.2 Application Clients

15.2.1 What Type of Application Client Do I Need on the Mobile Phones?
This is the next question, assuming that the mobile network will permit uploading some type of client and that the mobile phone's operating system will support it. An application client is the most important visible element of the service provided by the operator. The client relieves the customer of the need to set various parameters in the handset and gives access to the service in its richest form. It is common for operators to provide rich media clients that enhance the viewing environment by presenting an attractive menu, easy settings, access to information, and interactivity with content providers and advertisers. So what type of client is required in mobile phones to handle TV and multimedia? The answer is relatively straightforward for 3G networks, as these are 3GPP-compliant. All phones are designed to support, for example, an RTSP or 3GPP streaming service. In this environment, at the most basic level, a client is needed to set up the 3GPP streaming service parameters, present an EPG, and enable the 3GPP player to play the mobile TV unicast content. The client also helps handle content security and DRM. The situation is slightly more complex for terrestrial broadcast networks, on which the client uploaded to the handsets needs the functionality to select the channel on the tuner and present the EPG retrieved from the transmit stream. It is also required to present the carousel data (such as headline news, weather, or traffic information). The client also provides interactivity via the return path and enables interactive two-way applications. An example of a client for 3G networks is the SlingPlayer Mobile software, even though this is not a traditional mobile TV service. This software was initially available for S60 Symbian phones (we discuss operating systems later in this chapter).
Once downloaded on the mobile phone, it allows the user to connect via 3G or Wi-Fi to a Sling Media device at home, which streams the DTH, DTV, IPTV, or cable TV content. An example of a rich media client that can operate over broadcast terrestrial networks is offered by Streamezzo. The client can be installed on multiple mobile operating systems, thus making it possible to use a wide range of phones for delivery of multimedia services. A rich media server at the broadcast end combines the audio, video, web services, and data. This transmit stream reaches the mobile clients via the broadcast network. The client can interface to 3GPP or RTSP players on the phone as well as to the terrestrial mobile decoders (DVB-H, ATSC Mobile DTV, T-DMB, FLO, etc.). The rich media client is written using Java and C/C++ and can be interfaced to most of the common operating systems. In particular, the Streamezzo client can be used to enable the iPhone 3G for mobile TV, amongst other devices.
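Under the hood, the 3GPP streaming service such clients drive is RTSP-based: DESCRIBE fetches the session description (SDP), SETUP opens the RTP transport, and PLAY starts the stream. The sketch below only formats the requests (no network I/O); the URL, ports, and session ID are hypothetical:

```python
# Minimal sketch of the RTSP (RFC 2326) requests a basic 3GPP streaming
# client issues.  This only builds the request strings; a real client
# would send them over TCP and parse the replies.  The service URL,
# client ports, and session ID below are made-up examples.

def rtsp_request(method, url, cseq, extra_headers=()):
    """Format one RTSP request with the mandatory CSeq header."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    lines += list(extra_headers)
    return "\r\n".join(lines) + "\r\n\r\n"

URL = "rtsp://example.com/mobiletv/channel1"   # hypothetical service URL

describe = rtsp_request("DESCRIBE", URL, 1, ["Accept: application/sdp"])
setup    = rtsp_request("SETUP", URL + "/trackID=1", 2,
                        ["Transport: RTP/AVP;unicast;client_port=4588-4589"])
play     = rtsp_request("PLAY", URL, 3, ["Session: 12345678", "Range: npt=0-"])

print(describe)
```

The client's "streaming service parameters" mentioned earlier are essentially the inputs to this exchange: the service URL from the EPG, the transport ports, and the playback range.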

Operating Systems and Software for Mobile TV and Multimedia Phones


Figure 15.1: A SlingPlayer as an example of an application client for 3G phones.

Figure 15.2: An example of a rich media client (Streamezzo).

15.2.2 How Do the Application Clients Interface with Software in the Phone?

It is quite evident that mobile TV operators need to focus on a powerful mobile TV client to deliver a quality end-user experience and to be able to cover the largest possible range of handsets based on different operating systems. The next stage is therefore to understand the


capabilities of the client software, how it interfaces to functions in a mobile phone, and how well it meets the requirements of the mobile TV operator. The application clients are designed to interface with the APIs of the software in the phone. These APIs are available from the operating system (OS) and the middleware installed. Third-party application developers often work with these APIs with the help of the software development kit (SDK) that is provided by the handset vendors. These APIs provide the higher layers with a complete abstraction of the underlying functions. This essentially means that the functions of adapting the mobile TV client to the phone are handled separately and independently of the applications such as TV and radio, EPG, and interactive applications. It is best to understand this with an example, such as the case of the onHandTV® TV client software provided by Silicon and Software Systems (S3). onHandTV is used in the DVB-H networks of 3 Italia and Multichoice. Figure 15.3 shows the architecture of the software in onHandTV®.

Figure 15.3: Software architecture of the onHandTV mobile client.

At the application level, the client software provides the functions of TV and audio browsers (these are customizable players and media managers), electronic service guide (ESG), and interactive applications. The application level provides scanning and tuning to operator-provided


services, fast channel switching, subscription management, and on-demand purchase of content through the lower layers of software. Middleware in the client manages the functions associated with generation of EPG, file delivery, and data casting. These functions can be further tailored to the broadcast or unicast system in use (such as DVB-H, T-DMB, or multicasting) through a lower-level API interface to a file transport stream controller. An operator adaptation package in the client software is tailored to help personalization, including codec types. This simplifies the use of the package across any DVB-H or IPDC network through suitable APIs. The client software supports handling of CAS per the OpenSecurity framework, 18Crypt, or OMA-BCAST. Table 15.1 provides a summary of features of onHandTV software. The high degree of flexibility in selection of CAS, media players, interactive applications, and file transport is evident. onHandTV can be compiled for a range of operating systems that includes Windows Mobile, Symbian, Windows CE, Linux, RTK, and others. In the preceding discussions, we mentioned the interfaces to the operating system of the phone and associated middleware. We now need to take a closer look at these software elements and how they affect the ability to launch various services.

Table 15.1: Features Supported by onHandTV Mobile Client.

ESG Engine: Service discovery and maintenance of ESG data; support of OMA, DVB, and T-DMB standards
Conditional Access: Provides an abstraction layer for support of multiple encryption standards; supports both CA and DRM schema, including OpenSecurity Framework, ISMA, 18Crypt, and OMA-BCAST smartcard profile
Interactive Services: Rendering of interactive service data streams; support of CBMS (IPDC) as well as OMA-BCAST interactive services
Subtitles: Rendering of subtitles from stream data; supports DVB- and 3GPP-compliant subtitle rendering
ESG: Display of program information and service purchase; provides multiple views, including daily, weekly, favorites, now/next, and searching by genre or program classification
TV and Audio Browser: Search and playout of video and audio content; scanning and tuning of operator services, fast channel change, playing media files
Audio and Video Support: Decoding of content with different compression formats/standards; support of H.264, VC-1, MPEG-4, AAC, and others
Broadcast Systems: Datacasting support specific to mobile TV broadcast systems; support for DVB-H, T-DMB, and IP multicasting


15.3 An Introduction to the Software Structure on Mobile Phones

As we know, operating systems were initially designed for the management of the “machine,” i.e., to handle all input/output, memory, disk drives, displays, printers, keyboards, and serial and parallel interfaces on the “computer.” This was supposed to isolate the application software writers from the low-level functions of controlling the machine peripherals. As the functions that had to be supported on the “machines” increased, such as the support of browsers, media players, Wi-Fi, Bluetooth, and so on, the operating systems went through new versions to support them. The Microsoft Windows operating system Vista reportedly has over 50 million lines of source code and needs over 1 GB of RAM to keep the machines running. The case is no different for mobile phones, although everything is at a Lilliputian scale. The older versions of mobile phones were limited to voice applications or SMS and generally had no visible operating systems. The code written was proprietary to the hardware and the specific phone. All applications were directly coded. However, as more advanced phones and applications started coming in, this was no longer practical. The phones needed an operating system that could be ported to different processors so that applications could be migrated to new environments without having to be rewritten. This brief discussion brings us to OS requirements in multimedia phones. What is special about multimedia phones? Smartphones in use today have capabilities that can no longer be provided by the host processor in the phone alone. In Chapter 14, we discussed the use of application processors for multimedia, SoCs for mobile TV, WiMAX or Wi-Fi, and support of cellular

Figure 15.4: Mobile device software evolution.


communications such as 3G or CDMA. Multimedia phones take the OS requirements to a new level of functionality. The user applications require full frame rate rendering of video and support of encoders and decoders for H.264. The functional modules needed to support multimedia phones need to manage large memories, handle SDIO or microSD cards, and support content security.

15.3.1 Software Organization in Mobile Phones

In order to understand the major differences, we’ll look at the functionalities and organization of software in a mobile phone. The software can be classified broadly in the following layers:

● The operating systems for mobile devices, which provide direct interface to the hardware and control the device peripherals. Examples of such operating systems are Symbian, Windows Mobile, BREW, Palm OS, iPhone OS, Android, and Linux, amongst others.
● A middleware layer, which implements basic functions in a common manner across multiple hardware sets and operating systems. Examples of middleware include EPG presenters, media players, media streamers, VoIP software cores, and codec implementations. Middleware is often based on J2ME or other software implementations that provide machine-independence.
● A user interface layer (UI), which provides a uniform interface to the users for a particular OS and handset vendor. (A UI layer may however not be present separately in all operating systems and the OS may directly control the user interface.)
● Application clients, which implement the application functionality and present the face of the application to the user. These can include instant messaging (IM), mail, IPTV, mobile TV, office applications, PDF readers, and so on.

The applications layer is the group of programs that provide the user applications making use of the underlying hardware and communications capabilities as necessary.
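The layering described above can be sketched in miniature: the application client calls only middleware functions, the middleware calls only OS functions, and the OS alone touches the (simulated) hardware. All class and method names below are hypothetical, chosen only to mirror the layers discussed in the text.

```python
# Hypothetical sketch of the layered software organization in a phone.
# Names are illustrative; no real phone OS exposes these exact calls.

class OperatingSystem:
    """Lowest layer: controls hardware and device peripherals."""
    def open_network_socket(self):
        return "socket-handle"
    def decode_video_frame(self, data):
        return f"frame({data})"

class Middleware:
    """Implements common functions uniformly on top of any OS."""
    def __init__(self, os_layer):
        self.os = os_layer
    def stream_media(self, url):
        self.os.open_network_socket()           # OS handles the transport
        return self.os.decode_video_frame(url)  # OS/codec does the decoding

class MobileTVClient:
    """Application client: presents the service to the user."""
    def __init__(self, middleware):
        self.mw = middleware
    def play_channel(self, url):
        return self.mw.stream_media(url)  # no hardware knowledge needed here

# The same client code runs unchanged on any OS the middleware supports:
client = MobileTVClient(Middleware(OperatingSystem()))
print(client.play_channel("rtsp://example.net/ch1"))  # frame(rtsp://example.net/ch1)
```

Porting the service to a new handset then means swapping the bottom layer (the operator adaptation discussed earlier), while the client and its EPG, purchase, and playback logic stay untouched.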

Figure 15.5: Software architecture layering in mobile devices.


Figure 15.6: Another view of the functions in a mobile phone.

Figure 15.7: Software structure on mobile phones.


The software structure consisting of OS and middleware in a phone provides a complete abstraction for application clients, such as those for mobile TV, which operate uniformly across a range of phones without the service operator needing to worry about the constantly changing hardware and handsets available in the market. To summarize:

● The mobile device needs an OS that performs the basic tasks of device and media control. In addition, it needs to support many of the basic communications stacks that are fundamental to any type of networking, such as TCP/IP. These are performed by the operating system.
● The software programmers need to be able to view all functions as APIs or function calls and cannot be involved in any specific programming for a particular task such as connecting to a network and downloading a media file. This implies the use of middleware such as Java or BREW or third-party middleware that provides all the functions as APIs. Java, for example, supports APIs for mobile media (MMAPI), wireless, gaming, and so on.
● Applications programming forms the apex level of programs and typically would be entirely dependent on the APIs available and middleware modules for different functions. These are usually provided by application clients.

15.3.2 Why Is the Operating System Important in Mobile Phones?

For an operator planning to establish mobile TV services through an application client, the boundaries are always set by the OS. For example, as an operator, do you want to provide touchscreen support, the Adobe Flash Player, and a full-resolution VGA screen? Then you may need an OS such as Symbian S60, fifth edition, because it is in this release that the support for these features came in. Many OS implementations provide 3D features, yet others do not. So what are the capabilities and requirements of operating systems?

Functional requirements

Due to the feature-rich nature of the phones, it is evident that the applications now dominate the mobile phone scene rather than the basic function of communications. The nature of applications changes very fast as the hardware capabilities grow and to cater to the preferences of the market. This requires that a large number of developers be able to deliver applications that conform fully to the operator network and adhere to protocols, standards, and guidelines such as OMA, 3GPP, W3C, and so on. The operating systems need to provide an environment that not only meets the capabilities of a multimedia device but also allows fast turnaround of new applications. For the iPhone 3G, for example, over 100,000 applications are now available!
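One way to picture this OS-imposed boundary is a client that gates its features on the capabilities a platform offers. The capability table below is invented for illustration (only the platform names echo the text), but it mirrors the decision an operator's client has to make before enabling a richer user interface.

```python
# Hypothetical capability gate: which UI can the client enable on a given
# platform? The feature flags below are invented for this example.

PLATFORM_FEATURES = {
    "S60 3rd edition": {"touchscreen": False, "flash": False, "vga": False},
    "S60 5th edition": {"touchscreen": True, "flash": True, "vga": True},
}

def supported_ui(platform):
    """Pick the richest UI the platform's capabilities allow."""
    caps = PLATFORM_FEATURES.get(platform, {})  # unknown platforms get no flags
    if caps.get("touchscreen") and caps.get("vga"):
        return "rich-touch-ui"
    return "keypad-ui"

print(supported_ui("S60 5th edition"))  # rich-touch-ui
print(supported_ui("S60 3rd edition"))  # keypad-ui
```

The real boundary, of course, is set at build time by which OS releases the client targets; the sketch only makes the dependency explicit.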


Support of device drivers and protocol stacks

Figure 15.8, which shows the software structure on mobile phones, also exhibits an important aspect of an operating system, i.e., support of device drivers as well as protocol stacks for important functions that are commonly provided on mobile phones such as Bluetooth, audio and video encoders, infrared, Internet protocols (TCP/IP, RTP, SIP, WAP), support for recording, playback, and so on. Other important functions of a mobile OS are power management, over-the-air (OTA) synchronization, and multitasking functionality.

Figure 15.8: Operating system and middleware on multimedia phones.

Important features of mobile phone operating systems

The following are some of the important attributes of an operating system for the mobile wireless environment:

● Native support of important communications protocols such as TCP/IP, IPv4, IPv6, WLAN, and Wi-Fi. The protocol stacks that are not natively supported will need to be implemented as middleware or application packages, making the portability of applications more difficult.
● Capability to operate in a multiple-chipset environment. The OS needs to support hardware for WiMAX, encryption, 3D gaming and graphics, a multimedia processor, cellular modem chipsets, mobile TV chipsets, and audio/video codecs based on the phone design.
● Provide fast context switching for multiple applications to be supported simultaneously on the mobile phone (such as making a voice call, watching video, and uploading a mail message simultaneously).
● Provide flexibility in user interface design. The design of a user interface is important for operators in order to distinguish products and offer specialized services. Users do not select the handsets based on the OS but on the user interface, applications, and branding. Mobile virtual network operators (MVNOs) and wireless operators need a lot of flexibility in offering innovative features, animations, and graphics as a part of the user interface. The success of companies such as UIQ has been solely in providing attractive user interfaces.
● Provide a high degree of hardware portability. Mobile and wireless industries today operate in an environment where the number of mobile devices released in different markets with varying capabilities can be quite high in a year. It is important for the operating system to have a hardware abstraction layer for high portability of the entire software setup to new devices.
● Provide native support for Java, Flash, or similar software for development of applications in a device-independent environment. Java, for example, provides a wireless messaging API and mobile media APIs (MMAPI) that make it possible to port a number of programs written for the mobile environment. Some users, however, consider Java to be burdensome and to slow down the speed of applications, and prefer other development environments.
● Provide a robust development environment.
The richness of applications in any given area depends on how many developers are able to work on new products and the software development environment. Skill sets in proprietary systems are hard to come by and in any event have a longer lead time or higher cost. Open source operating systems such as Linux or Android are potentially advantageous in these scenarios.

15.4 Common Operating Systems for Mobile Devices

A number of operating systems have come to be used on mobile phones over a period of time. These include Symbian, Linux, Android, Windows Mobile, Palm OS, BREW, RIM (Research In Motion, for BlackBerry devices), and iPhone OS (Apple iPhone).

15.4.1 Symbian

Symbian is one of the prominent operating systems for mobile phones. It was designed specifically for mobile phones, as opposed to some of the others that are derived from desktop systems. The OS therefore has APIs for messaging, browsing, communications, Bluetooth, infrared (IrDA), keyboard, touchscreen management, and support for the Java Virtual Machine (JVM). This enables the applications to be written in Java for better portability.


Symbian platforms quickly began to be associated with the provision of rich interworking applications and interfaces for mobile phones after their launch. Features such as a full HTML browser, video telephony, streaming, messaging, presence, “push-to-talk,” Java support, branded keys, default wallpapers, and operator menus were common features. The Symbian platforms also provide support for 3GPP, 3GPP2, and OMA applications. Japan’s FOMA network has been the biggest user of the Symbian OS, which it had used exclusively until recently, when it began supporting Linux as well. The Symbian OS is used extensively in Nokia phones. It is also used extensively in Europe and Asia. Some of the latest phones announced are based on Symbian’s fifth edition, including the Nokia N97, Nokia 5800XpressMusic, Sony Ericsson Satio®, and Samsung i8910 Omnia® HD. Symbian has recently come out with the OS version 9.5, which provides native support for multistandard mobile TV, camera features, and multimedia applications. The key features of Symbian OS 9.5 are given in Table 15.2. The feature-rich support by the OS to the applications is evident.

Table 15.2: Symbian 9.5 Operating System Features.

Wireless Capabilities: Support of Wi-Fi, UMA (Wi-Fi to 3G roaming), Bluetooth V2.0, IrDA
Multimedia Capabilities: Multimedia transfer protocol over USB (MTP); RTP/RTCP; audio and video support for recording, playback, and streaming; multi-megapixel cameras with advanced features; audio and video codec interfaces
Support for TV: Support for multistandard digital TV, including DVB-H and DVB-T; digital TV hardware abstraction
Network Support: 3G-WCDMA (3GPP R4 and R5 IMS support), CDMA2000-1x, HSDPA, HSUPA, GSM, GPRS (classes A, B, and C), and EDGE (circuit- and packet-switched)
Communications Protocols Supported: TCP/IP stack (IPv4 and IPv6), SIP, RTP, and RTCP; WAP 2.0, WAP push, infrared, Bluetooth, USB 2.0
Java Support: Latest-release Java standards support for MIDP 2.0, CLDC 1.1 (JSR 139), wireless messaging (JSR 120), Bluetooth (JSR 082), Mobile 3D Graphics API (JSR 184), personal information management API (JSR 248)
Messaging Capabilities: Internet email with SMTP, POP3, IMAP-4, SMS, and EMS
Graphics Support: Graphics accelerator API, 2D and 3D graphics support, UI flexibility (e.g., display sizes, orientation, and multiple displays)
Developer Options: C++, J2ME MIDP 2.0, WAP; reference telephony abstraction layer; high-level multimedia service abstraction; Eclipse- and CodeWarrior-based development environments
Security: Cryptographic algorithms AES, DES, 3DES, RC2, RC4, RC5; encryption and certificate management; secure protocols (SSL, HTTPS, TLS); DRM framework and reference implementation; IPSec and VPN client support
Data Synchronization: Over-the-air (OTA) data synchronization; PC-based synchronization through Bluetooth, infrared, and USB
Telephony: Multimode enhanced telephony (2.5/3G), IMS, SIP/SDP support


Symbian 9.5 supports the latest CPU architectures with real-time capabilities and provides support for single-chip hardware platforms. The support of new processors such as the ARM Cortex-A8 implies that applications that need PC-class performance can now be implemented on mobile devices. This is important for the new generation of devices, which implement full VGA (720×480) resolutions on PDAs, UMPCs, and other devices.

15.4.2 Symbian OS Architecture

The Symbian OS architecture is given in Figure 15.9.

Figure 15.9: Symbian OS architecture.

The architecture of Symbian is somewhat unique, as it has a user interface layer that is separate from the operating system and hence needs to be selected individually by the phone manufacturer. The user interfaces (UIs) are developed by Nokia, UIQ, and network operators such as NTT DoCoMo for its FOMA network. The applications access the OS services via the UI framework. The abstraction of hardware adaptation makes it suitable for faster implementation on various chipsets and processor cores.

15.4.3 Symbian Mobile Phone Series

Symbian OS–based phone designs vary considerably depending on the screen size, the nature of use (i.e., multimedia phone or PDA device), and so on. Symbian has come out with a functional set of series for its phones, which are characterized by the screen size and the support for various features. These range from the S20, which was introduced for the initial release of phones, to S60, used for smartphones. There are additional releases, e.g., S80 for mobile communicators and S90, with varying sizes of displays and feature support. Figure 15.10 depicts the progression of Symbian operating systems.


Figure 15.10: Symbian phone series.

Amongst the Symbian phones, S60 from Nokia was the first OS to implement a major enhancement in processor capacities and has proved to be very popular. It also heralded the release of the first multitasking phone OS. The S60-series phones support more powerful phone processors and multimedia application processors, which gives them an edge in terms of performance in multimedia applications, Internet browsing, and a host of other applications. The Nokia N97 provides an example of the advanced series of S60 phones. More than 100 phones based on the S60 series have been released. The S40 series of phones are designed to be “normal” phones, but the differences are narrowing with respect to the S60 series with advances in processor capabilities. An example of this is the Nokia 5300 (S40 series) and the Nokia 5700 (S60 series), which offer similar features and user interfaces. The Symbian phones are also available under the UIQ series, primarily from Sony Ericsson and Motorola. The UIQ series of phones are based on the user interface technology from UIQ Technologies AB. The UIQ user interfaces provide more advanced animations, transition effects, and themes and reinforce the importance of having the OS separate from the user interfaces. The Symbian operating system, with its widespread representation in the smartphone category, also has the highest number of downloadable software applications. Examples


of applications include RSS feed readers, social networking (Facebook, Twitter), IPTV, YouTube, streaming video, and mobile multimedia–related programs.

15.4.4 Symbian 9.5 FreeWay™

Symbian has further enhanced its operating system to support high-end smartphones, very high-speed data connectivity, and new wireless technologies such as mobile WiMAX. The enhancement of the Symbian 9.5 platform, which happened under the FreeWay communications infrastructure, has added a number of new features:

● An improved SIP stack
● Capability of the handset to switch between 3G and Wi-Fi networks
● Support of location-based technologies and GPS navigation
● Support of networks such as WiMAX
● Switching between bearers with application continuity
● Support of higher-speed connections, better context switching
● Integration of ActiveSync for over-the-air connectivity with Microsoft Exchange servers
● Compatibility with existing applications such as web browsers

Table 15.3: Symbian OS Edition Features.*

Platform Baseline | Video Codecs & Features | Audio Codecs | Max Frame Rate | Max Bit Rate (kbps) | Optimal Size
S40 2nd edition (6230i or more) | H.263 P0 L10, MPEG-4 | AMR-NB | 25 | 128 | 176×144
S40 3rd edition | H.263 P0 L10, MPEG-4 | AMR-NB | 30 | 384 | 352×288
S60 1st edition | H.263 P0 L10 | AMR-NB | 10 | 64 | 176×144
S60 2nd edition | H.263 P0 L10, MPEG-4, Real Video (7 and 8) | AMR-NB, Real Audio 7, 8, AAC | 15 | 128 | 176×144
S60 3rd edition | H.263 P0 L10, MPEG-4, Real Video (7, 8, 9, and 10) | AMR-NB, Real Audio 7, 8, 10, AAC | 15 | 128 | 352×288
S60 5th edition | Adobe Flash Lite 3.0, MPEG-4, Real Video (7 to 10), support for GPS and location-based services, 16:9 aspect ratio | AMR-NB, Real Audio 7, 8, 10, AAC, MP3 | 30 | 320 | 640×480

*Source: www.forum.nokia.com. (Updated features are available at http://www.forum.nokia.com/Technology_Topics/Design_and_User_Experience/FN_vid_table.html.)


The Symbian operating system has had limited use in CDMA networks in the United States, owing to its lack of support for the frequency bands used there; such networks more commonly use Windows Mobile or BREW. However, S40 second edition has support for CDMA phones.

15.4.5 Linux

Linux-based operating systems occupy a niche among mobile device operating systems due to their open source status. Even though in percentage terms their use is less than 10%, new markets such as China are witnessing the release of Linux-based devices, together with Android. Linux is distributed under the GPL (General Public License), meaning that its source code must be made publicly available whenever a compiled version is distributed. The kernel has been adapted by phone manufacturers, thanks to the availability of open source software and applications. China, for example, has adopted Linux (an embedded version called “mLinux”) as the OS for use in its 3G mobile networks, which will influence the mobile market considerably. Also, major manufacturers and operators, including NTT DoCoMo, Vodafone, NEC, Panasonic, Motorola, and Samsung, have announced support for a global platform for open Linux adoption. The Linux core is, however, limited in functionality, and a majority of mobile phone functions such as multimedia support, communication functions, connectivity services, and platform management services need to be supported by middleware. Figure 15.11 depicts how the Linux OS kernel is located vis-à-vis the applications. A large part of the functions of the phone need to be supported by software modules beyond the OS. To alleviate these concerns, the CE Linux Forum is now working on a global reference architecture and common API for various components of software and middleware. The common architecture will include the videophone framework, telephony framework, and multimedia framework. Many vendors have opted for commercial versions of Linux to get better functionality natively supported by the OS while retaining its open source nature. An example is NTT DoCoMo, which has selected MontaVista Linux™ as an OS for the FOMA phones in addition to Symbian. MontaVista Linux provides for easy integration of advanced multimedia applications and a standard development platform for wireless handset designs. NEC Corporation has developed phones based on MontaVista Linux for FOMA (N900iL and N901iC).

15.4.6 Linux Mobile (LiMo) Foundation

The Linux Mobile (LiMo) Foundation was formed in December 2006 with the objective of providing a complete ecosystem for mobile software. The LiMo Foundation intends to provide a complete open source mobile phone architecture with APIs and certification processes to facilitate the development of Linux-based phones and other mobile devices.


Figure 15.11: Typical Linux environment in mobile multimedia devices.

Figure 15.12: The LiMo in mobile phone architecture finalization effort.

The LiMo Foundation focuses on the middleware and enabling software for the UI and application layers. There are other Linux phone working groups as well, and the major manufacturers have released their APIs used in Linux-based mobile phones. Many phones based on the LiMo API have been released, such as the DoCoMo PRIME series P-01A, in use with its LTE network.


mLinux is an embedded version of Linux adapted for use in 3G networks in China. Embedded Linux implies a Linux kernel that has been ported to a particular CPU and is available as code in memory on the SoC or board of the CPU. It supports all major CPUs, such as ARM9, Intel XScale, MIPS, Motorola DragonBall, i.MX, and Texas Instruments OMAP processors (710, 730, 1510, and 1610), amongst others. mLinux features an enhanced bootloader and short boot times of less than 2 seconds. It has multiprocessing and multithreading capability and memory management, and provides support for pipelining. It also supports many protocols and network cards.

15.4.7 Example of a Linux-Based Mobile Wireless Device

An example of a mobile wireless device based on Linux is the Nokia Internet tablet N800. It supports multiple networks, including mobile WiMAX. The software architecture of the N800 is based on the Linux desktop from GNOME (GNU Object Model Environment). GNOME provides a stable and reliable development platform that is user-friendly and open source. This is supplemented with the Maemo application development environment (open source developments for Internet tablets). For multimedia applications, software components from GStreamer have been embedded in the N800 (also available under GNU). GStreamer provides a multimedia framework for streaming media players and handling of video and audio. The Linux X Window System is provided by the Matchbox window manager, an open source environment for X on handheld devices. The graphical user interfaces are provided by GTK (a multiplatform toolkit for preparing user interfaces). It uses the official Linux Bluetooth protocol stack (BlueZ). It also uses open source digital media players from the Helix community. Access to local phone resources (keyboard, mouse, joysticks, multimedia hardware) is provided by SDL (Simple DirectMedia Layer, a cross-platform multimedia library). Interprocess communications are handled by D-Bus, a freedesktop.org tool.

15.4.8 Garnet OS

The Palm OS has recently been renamed and introduced with enhancements as Garnet™. Palm has been the archetypical PDA since the days of the Palm Pilot. It is only recently that the market has diversified through other PDAs based on Linux and Pocket PCs based on Windows Mobile. The Palm OS is a multitasking operating system that provides protected memory management and HotSync capabilities. Palm supports the ARM and Motorola 68000 processors. The Palm OS has two versions: Palm OS Cobalt (OS 6) and Palm OS Garnet (OS 5). The current release, Palm OS Cobalt, has support for Bluetooth and Wi-Fi. Palm OS Garnet is based on the Palm 5.2 and 5.3 operating systems.


Figure 15.13: Software components in Nokia N800. (Courtesy of Nokia)

SDKs are available for the Palm OS, and C, Visual Basic, C++, and Java can be used for the development of applications. A number of commercial development suites are also available that provide processor-specific applications and development tools. A lot of third-party software is available for Palm OS Garnet, such as music players, video players, and utilities. PalmSource has also announced support for a Palm OS platform on Linux. In 2009, Palm Inc. switched to the use of webOS, which is available in its release of the Palm Pre. webOS is based on Linux.

15.4.9 Windows Mobile

Windows Mobile is based on the Windows CE™ operating system. This is a 32-bit operating system with 4 GB of directly addressable memory space. Many users and developers have a clear preference for Windows Mobile, due to its commonality with the desktop Windows system and support for desktop applications and file formats as mobile extensions to office applications. The initial versions of this software had many problems, due to its adaptation to the mobile environment from the desktop. Windows Mobile also had different versions for Pocket PCs and smartphones, but as of version 6 this is no longer the case.


Windows Mobile phones have a UI that is similar to that of Windows Vista. Windows Mobile phones support Microsoft Outlook Mobile, Office Mobile, and Explorer Mobile, making the experience familiar to desktop users. Microsoft has recently come out with Windows Mobile 7. However, at present, the most common releases are versions 6.1 to 6.5. Version 6.2 adds support for full HTML, Flash, and JavaScript.

Multimedia capabilities in Windows Mobile phones

A number of operators have come out with Windows-based mobile phones with live TV and multimedia capabilities. An example of a Windows Mobile TV phone that works on the DVB-H network of 3 Italia is the LG KU960 DVB-H/HSDPA series X phone. It supports a DVB-H receiver with TV-out and a DiBcom SoC for DVB-H mobile TV. The 3.6 Mbps speed via HSDPA helps support a variety of applications including Skype, Windows Live Messenger, eBay, Flickr, and others. The LG KU960 uses the DiBcom DVB-H receiver, a Telechips host processor, and GPS location-based services.

Figure 15.14: Visualization of LG KU960 phone for DVB-H.

Operating Systems and Software for Mobile TV and Multimedia Phones


In the United States, Cingular (now part of AT&T) introduced the Cingular 2125 smartphone, which featured the Windows Mobile 5.0 operating system. The phone is based on the TI OMAP 850 processor and has 64 MB of RAM and ROM, which can be extended with an extension card slot. The phone has Windows Media Player 10 Mobile and in addition supports MP3, AAC, and .wav music and MPEG-4 video. T-Mobile and Sprint in the United States have also unveiled Windows Mobile phones for use with their networks. All the networks now offer devices using Windows Mobile 6.1 or higher. These include the Samsung Omnia™ (Verizon), HTC Dash 3G (T-Mobile), Samsung Jack® (AT&T), and HTC Snap® (Sprint).

15.4.10 BREW™

BREW (Binary Runtime Environment for Wireless) is a development platform promoted by Qualcomm for use on CDMA-based devices. Although the platform itself is proprietary, Qualcomm provides the SDK to developers free of charge. Software developers can use a Windows development environment such as Microsoft Visual Studio 6 for application development and an ARM compiler for code generation. It has powerful user interface tools (such as uiOne™, which enables developers and phone manufacturers to modify or personalize the interfaces). The applications developed can be viewed in the open marketplace and downloaded over the air using the application delivery mechanism. The applications fall into various categories:

● BREW applications that have been developed and are available to any manufacturer as fully developed mobile phone services
● User applications that can be developed by using the BREW environment and development tools
● Third-party applications that can be downloaded

BREW applications include push-to-chat, email, photo sharing, wireless broadband, location-based services (LBS), multimedia players, and others. Examples of BREW's predeveloped application program interfaces that were available with BREW 2.1 to operate over the ARM9 processor or MSM6100 chipset included the camera module (enabling applications to access the camera), a 3D graphics engine, an MPEG-4 video module (enabling MPEG-4 video-based applications on the mobile device), position location, encryption (HTTPS, SSL), and multimedia modules. APIs for real-time streaming, USB support, and removable storage media were introduced with the BREW 3.1 release. The uiOne user interface development tools make it possible for operators or service providers to customize user interfaces. They also enable users to personalize the interface and services over the air.


Although the BREW platform was initially developed solely for the CDMA networks and mobile station modem (MSM) chipsets from Qualcomm, it now includes products for the 3GSM and the mobile multimedia transmission networks. The universal baseband modem (UBM), for example, is intended to serve the MediaFLO-, DVB-H-, and DMB-based services.

Figure 15.15: BREW software architecture.

15.4.11 Multimedia Capabilities of BREW

BREW is known for efficient development of multimedia applications, due to the direct porting of the BREW components onto hardware and chip-based firmware. 3D graphics and gaming applications needing high interactivity operate more efficiently thanks to this architecture. In addition, scalable vector graphics (SVG) can be directly integrated with BREW. SVG is scalable over multiple display sizes, has smaller file sizes than rasterized formats such as JPEG or BMP, has rich text options, allows for interactivity, and can be created with powerful tools such as Adobe GoLive CS2 or Ikivo Animator. BREW supports SVG animation and playback, including play, pause, and rewind of content. It has APIs for keypresses, rotate, pan, zoom, focus, and pointers.
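As an illustration (not taken from any BREW SDK), the kind of lightweight vector content described above can be expressed in a few lines of SVG Tiny; shapes, text, and the declarative animate element replace what a rasterized clip would need far more bandwidth to carry:

```xml
<!-- Minimal SVG Tiny sketch: a rectangle sliding across a QVGA-sized canvas.
     Purely illustrative; real content would come from tools such as Ikivo Animator. -->
<svg xmlns="http://www.w3.org/2000/svg" version="1.2" baseProfile="tiny"
     width="240" height="320" viewBox="0 0 240 320">
  <rect x="20" y="60" width="80" height="40" fill="#3366cc">
    <!-- Declarative animation: no per-frame video data is transmitted -->
    <animate attributeName="x" from="20" to="140" dur="2s" repeatCount="indefinite"/>
  </rect>
  <text x="20" y="140" font-size="14">Now playing</text>
</svg>
```

Because the animation is declarative, operations such as pause and rewind only move the document timeline; no additional data needs to be fetched.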


The multimedia applications of BREW are also supported by its multimedia components, which include:

● Support of MPEG-4 encoding and playback
● Camera interface, JPEG compression, and recording of video
● 3D graphics engine
● Mobile-assisted position location
● Connection and content security through HTTPS, SSL, and encryption
● Media support for various video and audio formats
● Streaming and playback of PCM and QCP media formats
● Serial and USB interface, and removable storage
● Battery management
● Messaging services

A number of extensions to the BREW operating platform are available; for example, Microsoft Games (MSN games) on the BREW platform, J2ME on BREW, Microsoft Live Anywhere, Microsoft MSN Messenger, and Microsoft Office Mobile.

15.4.12 BREW Support of Broadband Wireless

BREW platforms provide support of broadband delivered over wireless (802.11x) or via CDMA-based networks using the MSM or UBM chipsets from Qualcomm. Support for mobile WiMAX chipsets had not been announced as of 2009. However, owing to the widespread use of CDMA in the United States, Korea, India, and some other countries, which are also the strongest movers into WiMAX networks, such support with full APIs may be expected.

Table 15.4: Major Operators Using BREW Worldwide.

Operator        Country        Technology             Preferred OS
Verizon         USA            CDMA 2000-1x, EV-DO    BREW
Alltel          USA            CDMA 2000-1x           BREW
KDDI            Japan          CDMA 2000-1x, EV-DO    BREW
KTF             South Korea    CDMA 2000-1x, EV-DO    BREW
China Unicom    China          CDMA 2000-1x           BREW
Vivo            Brazil         CDMA 2000-1x           BREW
Telstra         Australia      CDMA 2000-1x           BREW

15.4.13 Android

A new mobile software platform based on open standards has been announced by the Open Handset Alliance. Although more than 30 companies back the Android OS, the major ones are Google, Motorola, Sprint, T-Mobile, Qualcomm, and HTC. Android is a


complete framework for the mobile handset consisting of the OS, middleware, open Internet applications, and a development environment. The OS core is based on the open Linux kernel. The Android ecosystem includes an Android Software Development Kit (SDK) as well as a mobile applications marketplace. Unlike some operating systems, Android places the full capabilities of the phone at the disposal of the application developers, providing a superior integration with the phone resources.

Figure 15.16: Android operating system architecture.

In the United States, T-Mobile was the first to launch a phone based on Android: the G1, which was followed by the launch of myTouch 3G®.

15.4.14 Research in Motion (RIM)

The name RIM is not as familiar as that of its Blackberry devices, which use the Blackberry OS as their operating system. With over 30 million devices in use by business users, the Blackberry has a niche place in the mobile phone arena. Blackberry devices operate based on the use of


a Blackberry Enterprise Server, which provides push mail services. The phones are based on Intel PXA 901 processors. (CDMA devices use Qualcomm MSM 600 chipsets.) Blackberry OS is a multitasking OS. A number of third-party applications are available for Blackberry OS.

15.5 Middleware in Mobile Phones

The term "middleware" in mobile software terminology denotes software that carries out a specific, well-defined function and is built on the OS stacks. Commonly used functions provided by middleware include implementations of various codecs, communications, and protocols used in 3G and broadcast networks. An example of mobile middleware is PacketVideo's pvTV™ solution for DVB-H. pvTV provides a complete platform for video, enabling mobile phones based on various operating systems (Linux, Symbian, Windows Mobile). The pvTV middleware provides complete stacks and codecs for mobile TV, including H.264, MPEG-4, WMV, AAC, and WMA. It also provides support for Microsoft DRM and OMA conformance. The pvTV client enables phone manufacturers using multiple OS types to have a quickly configurable mobile TV solution with full compliance to standards.

Another example of middleware comes from NTT DoCoMo's FOMA network. FOMA is a 3G network in Japan that provides services such as i-mode (Internet access), i-appli™ (Internet applications), Deco-Mail™ (HTML email), Chara-Den™ (videophone with cartoon-type characters), and Chaku-motion™ (combining video and AAC audio to signal incoming calls). The services are network-specific. The operators are naturally interested in ensuring that a service operates in an identical manner on all phones, regardless of the operating system or software structure. There are two ways to achieve this goal. The first is for the applications to be written in a language such as Java (Java MIDP 2.0 and J2ME) that is independent of the operating platform. The second is to have specific middleware that supports all the network-specific services (e.g., all FOMA services).
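The middleware approach can be sketched as follows. This is a hypothetical illustration, not an actual pvTV or FOMA interface: every name here (MediaPlayerApi, the adapters, forOs) is invented for the example. The application codes against one stable interface, and a thin per-OS adapter maps that interface onto whatever the native platform provides:

```java
// Hypothetical middleware adaptation layer (all names invented for illustration).
// The application sees one interface; each OS supplies its own adapter.
interface MediaPlayerApi {
    String open(String url); // returns a description of the native stack used
}

class SymbianAdapter implements MediaPlayerApi {
    public String open(String url) { return "H.264 via Symbian native stack: " + url; }
}

class WindowsMobileAdapter implements MediaPlayerApi {
    public String open(String url) { return "WMV via Windows Mobile stack: " + url; }
}

public class MiddlewareDemo {
    // The factory is the only place where the OS matters.
    static MediaPlayerApi forOs(String os) {
        return "symbian".equals(os) ? new SymbianAdapter() : new WindowsMobileAdapter();
    }

    public static void main(String[] args) {
        // Same application call, different native implementation underneath.
        System.out.println(forOs("symbian").open("rtsp://example.net/tv1"));
        System.out.println(forOs("windows").open("rtsp://example.net/tv1"));
    }
}
```

This is why a service can be rolled out across Symbian, Linux, and Windows Mobile handsets: only the adapter layer is rewritten per platform, while the application above it stays unchanged.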
Initially, the FOMA phones were released with Java-based applications and the Symbian OS, which provides very strong native support for application stacks such as H.264 coding, H.263 video calls, AAC audio coding, various players and display drivers, Java-based applications, and games. NTT DoCoMo has now announced the use of middleware for its services, such as Renesas Technology's Mobile Videophone Middleware Package. The middleware provides the entire 3G-324M video call package, including the encoding and decoding of video and audio, echo cancellation, and videophone protocols. The middleware is ported onto the SH-Mobile applications processors. The new architecture is available for the FOMA series of handsets. Using the middleware makes it easier to port the applications to other platforms such as Linux.


Similarly, NTT DoCoMo licensed embedded “Push to Talk over Cellular” (PoC) software from Ecrio Inc® for use on the mobile phones. The new service is available on the FOMA 902i series of handsets, which also have Linux implementations.

15.5.1 Revenue Enhancement Opportunities Using Middleware

Mobile middleware is the key to the delivery of customizable services to users, which can take the use of the phone far beyond voice and provide revenue enhancement. We are now at the dawn of a new era in multimedia in which video, audio, animation and games, information, and services can be delivered at a price to the user. Mobile phones have broken the pricing barrier for multimedia handsets to a level where critical mass has already been reached in many countries. The success of FOMA has demonstrated the importance of rich content delivered over 3G and other networks. Middleware enables the networks to deliver new services, downloads, and applications that may not have been envisaged when a mobile phone or a network commenced service. The use of middleware makes it possible to transport applications to new networks with a variety of phones in use, without major effort in porting these to different operating systems, phones, or networks.

15.5.2 Examples of Mobile Middleware Platforms

AT&T video platform "CV"

In addition to the MobiTV service and the MediaFLO service (AT&T Mobile TV), Cingular (now part of AT&T) launched an on-demand video service under the name "Cingular Video." The service is now available from AT&T as "CV." The on-demand service permits users to view or download premium content. Premium TV channels can be subscribed to on a monthly rental basis, and music and video clips can be viewed on a pay-per-view basis. News and weather from over 100 local stations are available. Users must subscribe to an unlimited data plan (which was retailing at $15 per month in 2009). Cingular used RealNetworks' Helix™ Online TV platform to deliver the services. The unique thing about the network is that the platform is both a content aggregation and a distribution platform, for both retail and wholesale customers. Thousands of content providers can thus ingest content and indicate prices per view, and users can have access to the content. This is done by using the Qpass M-commerce™ solution. Qpass also gives AT&T control over the way the content is priced, displayed to customers, distributed, and billed. As mobile networks present a big opportunity for revenues from video content, the Real Helix Online TV platform together with the Qpass commerce solution is set to provide a major advantage to users as well as content providers and operators. AT&T operates in multiple countries through 3G and EDGE networks, and customers can roam and still have seamless access.


The service configuration and delivery of Cingular Video has been made possible through the use of the Helix Mobile Server, which interacts with the Helix DNA client. Digital rights management is assured through Helix Digital Rights Management, and the gateway to mobile networks is the Helix Mobile Gateway. Content ingest and delivery are handled by the Helix Mobile Producer and the Helix Service Delivery Platform. The Helix DNA client performs a number of functions:

● Auto bandwidth detection
● Playnow: near TV-like playback experience
● Truelive: live playback of a TV stream (if necessary, without error protection)
● Trickplay: playback at various speeds, fast forward, reverse, DVR-type functions
● Visual progressive download: viewers can see the progress of a download (bytes transferred and transfer rate)
● 3GPP Release 6 compliance

3D graphics and mobile multimedia middleware: the MascotCapsule™ engine

The HI Corporation of Japan has supplied the MascotCapsule multimedia middleware with a 3D rendering engine to operators in Japan, Korea, the United States, China, and Europe. The 3D rendering provides unique capabilities for multimedia and games. MascotCapsule enables 3D applications to run in a mobile phone (and other devices, such as PDAs) independent of the operating system or hardware architecture. All three Japanese carriers have adopted this middleware, in addition to carriers in other countries (SK Telecom, Korea; China Unicom; Sprint, USA) and global phone manufacturers Motorola and Sony Ericsson. The middleware can operate with Java, BREW, Linux, Symbian, or Palm OS. It has a 3D development tool that interfaces to 3D software packages such as 3DSmax™, Lightwave™, Maya™, and others.

Figure 15.17: 3D graphics and animation middleware.


15.6 Application Software Functionalities for Mobile Multimedia

The previous sections have shown that the application software on a multimedia handset needs to provide a rich and intuitive user experience. Typically, such functions are implemented with a combination of media delivery servers operating over a broadcast network and mobile TV clients in handsets. For example, Adobe Flash Lite has been widely used in the design of websites and multimedia content with rich animations, characterized by the relatively low bandwidth needed for transmission. Flash movies, for example, can run on a fraction of the bandwidth needed to code every frame using MPEG-2 compression. Adobe Flash Lite has been tailored to the resources available on mobile phones in terms of screen size, pixels, colors, and available network resources. Consequently, it was natural that Flash be used to generate application and animation channels or clips for mobile content as well. This has indeed been the case, and the FOMA network has used Flash applications extensively. Flash content runs on the phones using a client–server architecture. The client is provided in the phones and interfaces with the phone software and the operating system. The Flashcast server delivers the content when accessed using the HTTP or HTTPS protocols. An example of a mobile TV service using Flash Lite content delivery to mobile phones is the mioTV® service from Singtel. The service uses a Media Delivery Solution from Nokia Siemens Networks. The mobile phones download a mobile TV client that enables the use

Figure 15.18: Flashcast delivery of content.


of the service. The mobile TV client is based on unicast delivery of content. As the mode of delivery is unicast streaming, a client interface was needed to provide program information for on-demand viewing as well as video on demand. This is provided by an application client that resides in user handsets.

15.6.1 Java-Based Mobile Device Architectures

The Java Virtual Machine (JVM), which implements Java Micro Edition (J2ME) based on the MIDP 2.0 profile, is very common in mobile handset architectures. By 2007 there were an estimated 1 billion Java-based mobile devices in use. J2ME is a powerful environment for the development of applications for mobile devices such as cellphones and PDAs. Applications written with J2ME are portable and provide quick development cycles for new products such as games, animated clips, and so on. An example of an API using Java is the JSR 272 Mobile Broadcast API. The API supports different elements of mobile TV such as the ESG API, service purchase, service management (controlling logical or physical tuners), presentation and recording (control of PVR functions, using JSR 135 MMAPI), and broadcast objects (for auxiliary data services). JSR 272 allows the physical broadcast hardware to be hidden from the mobile TV applications.

Figure 15.19: JSR 272 mobile broadcast API.


15.6.2 An Example of a Delivery Platform: MobiTV Optimized Delivery Server

The MobiTV platform provides mobile TV services to handsets and PCs over broadband networks and is the largest service of its kind. MobiTV is used by a number of third-party content providers to provide services over a large number of 3G networks, and had over 6 million users in mid-2009. MobiTV now provides a media delivery platform that includes both live and on-demand content. Part of the solution is an Optimized Delivery Server (ODS) and mobile clients. The clients take care of presenting the program information for live as well as on-demand programming. The clients also take care of user authentication.

There are two steps to the execution of applications using Java on mobile phones:

● Delivery of the Java files using the operator's WAP gateway or web server.
● Execution of the files on the mobile phone. The phone needs to have Java support in its software suite for the Java applications to be executed. Java-based application support is present today in a majority of phones.
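The first step, over-the-air delivery, is typically driven by a small application descriptor (.jad file) fetched from the WAP gateway or web server; the handset reads it, checks MIDP/CLDC compatibility, and then downloads the JAR it points to. The attribute names below are standard MIDP 2.0; the URL and class names are made up for the example:

```text
MIDlet-Name: MobileTVClient
MIDlet-Version: 1.0.0
MIDlet-Vendor: Example Operator
MIDlet-1: MobileTV, /icons/tv.png, com.example.tv.TvMIDlet
MIDlet-Jar-URL: http://wap.example-operator.com/apps/mobiletv.jar
MIDlet-Jar-Size: 131072
MicroEdition-Profile: MIDP-2.0
MicroEdition-Configuration: CLDC-1.1
```

The second step happens on the handset: the Java application manager verifies the profile and configuration lines against its own JVM before installing and executing the MIDlet.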

A number of applications are delivered using Java support in the handsets. For example, Sprint in the United States uses Java extensively for applications in its CDMA network.

Figure 15.20: Java applications for mobile phones.


The Symbian operating system (S60 3rd Edition and others) provides native support for Java MIDP 2.0.

15.7 Applications for Mobile Phones

The preceding discussions make it evident that the software operating systems available today present an interface to middleware and external applications, which enables these applications to use the functions implemented on the phones. The middleware provides an abstraction for implemented functions such as browsers, TCP/IP stacks, ESG, media players, and streaming functions. The phones support applications through the underlying OS and middleware, appearing to the applications as a Java Virtual Machine (JVM), a Dalvik virtual machine (Android), or a Linux or Windows device. Application programmers compile applications for the different operating systems; these are available either as preinstalled applications, operator-provided applications, or purchases from the marketplace.

Before We Close: Some FAQs

1. Does Android support Flash? Why is such support important?
Yes, Android has announced support for Flash in its HTC handsets. Support of Flash gives a better web browsing experience, as a majority of websites have Flash content.

2. Is support of screen size an OS feature?
Yes. Most operating systems support multiple screen sizes through their screen adaptation feature. They also switch between the different screen sizes delivered by applications to display these properly in the screen area available.

3. Can multimedia applications downloaded from the marketplace affect the voice functions on a mobile phone?
No. The marketplace requirements are that the applications must not affect the normal phone functions, including user settings, in any manner.

4. What are the important software features in a phone to enable social networking?
The important features in a phone, apart from messaging, are presence (or location information), video editing (users should be able to send videos without using a PC), and high-resolution codecs. Chipsets such as the Movidea™ MA1110 with associated software can create special phones (e.g., Facebook phones).

5. Are middleware packages OS-specific? For example, do such packages work on Windows Mobile, Symbian, and other operating systems?
Most middleware packages are OS-agnostic (they work with any OS). For this purpose, they have an adaptation layer that handles the OS-specific features.


CHAPTER 16

Handsets for Mobile TV and Multimedia Services

He also had a device which looked rather like a largish electronic calculator. This had about a hundred tiny flat press buttons and a screen about four inches square on which any one of a million "pages" could be summoned at a moment's notice.
Douglas Adams, The Hitchhiker's Guide to the Galaxy (1979)

16.1 Introduction: Do You Have a Target Audience Out There?

If a mobile TV operator were asked about the greatest challenges to launching a mobile TV service, the mobile handset would rank among the highest. This is particularly true for a terrestrial broadcaster who does not own the mobile network. The mobile market today is populated with devices whose features span a very wide range of capabilities. The phones at the upper end of the range have a screen large enough to be worthy of any mobile display, but there are a far larger number of phones that are just functional devices for voice calling. And as if this were not enough, many of the handsets are used for business applications, such as the Blackberry or the Palm. There are also smartphones with multimedia capabilities that can operate on 3G or HSDPA networks, such as the iPhone 3G or Sony Ericsson P1i. Most of these do not carry tuners for terrestrial mobile DTV reception. In addition, many mobile operators provide "walled-garden" services as far as access to mobile TV is concerned: they permit only specific services to be streamed in or delivered. This always presents a question for a mobile TV operator: What is the target audience? Can you launch a mobile service and expect a large enough number of users to buy a new phone that can receive the service you launched? If there are many operators, how do you target the users? There are no easy answers, but there are many examples. Korea and Japan launched free-to-air mobile TV services (T-DMB and 1-Seg), which in hindsight appears to have been the biggest success factor in achieving fast penetration of the service. In Japan, the number of 1-Seg tuner–equipped handsets exceeded 60 million by 2009, and about 85% of all new handset shipments now come with 1-Seg tuners built in. In Korea, it is common to have T-DMB tuners on most handsets, with the result that people watch mobile TV just about anytime without having to carry a separate device for mobile TV.
This lays the foundation of a successful mobile DTV market for future operators. Terrestrial mobile TV is there to stay. © 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00016-3


But this is not the case with operators in Europe, who, having launched DVB-H pay services, battle heavy odds, as penetration remains dismal. 3 Italia, the pioneer for DVB-H in Europe, started giving free access to some of its channels, including RAI 1, RAI 2, Mediaset, Sky Meteo 24, La3, and Current TV. 3G operators (including those with HSDPA and EV-DO) can find some reassurance there, as most smartphones shipped are 3G-capable and support players that can be used to provide a 3G streamed TV service. However, this is still not true for all devices (including the iPhone 3G), and workarounds are needed to provide a functional mobile TV service. The question of handset availability, however, remains open for future operators, particularly those who are brave enough to launch a pay DTV service using proprietary encryption. The markets are also not static. The United States is set to witness a phenomenon similar to that of Japan and Korea as ATSC Mobile DTV stations go online progressively with free-to-air broadcasting or open (i.e., OMA-supported) encryption. Things will not change overnight, but an increasing number of new mobile handset shipments are likely to include ATSC Mobile DTV tuners. For existing handsets, unfortunately, there is no easy solution. People are unlikely to give up their existing handsets immediately. It is possible to provide USB-based or microSD-card-based receivers, but making them work across a spectrum of devices is a challenge. In the initial period, therefore, most operators are likely to target launching their own handsets, particularly for terrestrial transmission, just as the MediaFLO operators (AT&T and Verizon Wireless) have done for FLO services. This also helps them provide their own clients in the receiver with desired features such as data streaming (weather, traffic tickers), EPG display, interactivity, VoD, or "music on demand" features.
For new mobile TV operators, the quest for new handsets will require evaluating the latest chipsets, which bring enough power for full-speed rendering of video, 3D animations, and games. They will need to look at an operating system that supports the features they want to implement. Such potential mobile TV operators, along with handset vendors, then become responsible for all the features that the handset supports. There is no room for splitting the responsibility if a seamlessly integrated handset is to be presented. We will also look at such handsets in this chapter.

16.2 Mobile Receiver Devices

Receivers for mobile TV and radio transmissions are not limited to handsets (or phones), although these constitute by far the largest category. Some of the successful products include portable navigation devices (PNDs), personal media players (PMPs), auto receivers, and handheld or desktop receivers. In fact, auto receivers have emerged as one of the major focus areas now being targeted by satellite-based operators. The earlier environment of separate domains of service from cellular operators, terrestrial broadcasters, GPS navigation providers, and satellite-based TV providers has clearly given way to devices that operate across two or more networks. Many of these devices provide a return path using the mobile network.


Figure 16.1: Mobile receiver devices are now multifunctional.

In this environment, is it possible to classify mobile devices? The answer is yes, and operators need to understand the market in terms of the types of devices, their operating systems, and how many estimated devices exist in each segment, together with the future growth. This helps create products and software for the market and optimize deliveries.

16.3 Handset Features for a Rich Multimedia Experience

So what features should the operators target in the handsets they might like to offer? Handsets are no longer about handling just voice calls or even mobile TV. Operators who have attempted to bring out basic phones with TV tuners have found this out at great cost. Instead, users expect a full range of functionality combined with ease of use and style. Users of such phones have high expectations, such as finger-based navigation, voice commands, intuitive maps, and multinetwork connectivity:

● Basic functions include voice call capabilities, SMS and MMS, 3G, phone book, and basic personal information management features such as profiles and ringtones.
● Multimedia functions include the following capabilities:
  ● Receive audio and video in the highest quality (such as AAC, H.264, and MPEG-4)
  ● Support 3G or HSDPA/EV-DO networks
  ● Support video players (such as Windows Media, iTunes, Real, DivX Mobile, or Adobe Flash)
  ● Download and store media (similar to iPods)
  ● Manage video and audio, including editing, manipulation, and retransmission, to be able to use services such as YouTube
  ● View live TV using streaming or progressive download (over 3G/HSDPA networks) or receive broadcast mobile TV using ATSC Mobile DTV, FLO, or DVB-H networks
● Business functions include the ability to receive and send mail on the move, an essential requirement, as has been demonstrated by the extensive use of devices such as the Blackberry.

Figure 16.2: Classifying receiver devices for the mobile TV market.
16.3.1 Features of Multimedia Phones

A multimedia phone typically has a large screen, sufficient to show video that is at least QVGA (240 × 320 pixels) with more than 250K colors at a full frame rate of 30 fps or 25 fps. The screen should preferably be tiltable so that a 4:3 aspect ratio picture is displayed correctly. WVGA phones can display higher aspect ratios and are used as the default in advanced multimedia phones. Some of the external features that are important to users are:

● Large screen size (2 to 4 inches)
● Audio: stereo, high fidelity, with AAC or eAAC+
● External ports: TV-out, USB, FireWire, video and audio, printer, and sync via USB
● Wireless capabilities: Bluetooth A2DP, wireless LANs, Wi-Fi, WiMAX
● FM radio
● MicroSD RAM

Some of the internal features that are important to users are:

● Connectivity: GPRS, EDGE, CDMA, 3G GSM, CDMA2000, EV-DO
● Bands: 800 MHz, 850 MHz, 1800 MHz, 1900 MHz, 2100 MHz, AWS
● Audio support: AAC, AMR, RealAudio, WAV and MIDI, MP3, MP4
● Video support: 3GPP standard H.263, H.264, Windows Media, RealVideo
● Clip playing: AVI, MOV
● Operating systems: Windows, Linux, Symbian, Palm
● Applications: HTML browser, e-mail client, image viewer, PC suite (sync)
● Voice recognition, voice dialing
● Personal information manager (PIM)
● Video call
● Media player
● Games
● Business software: Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Adobe Acrobat
● Fax receive, send, view, and print
● Business card scanner
● Picture gallery
● Picture editor
● Messaging: SMS, MMS, e-mail, chat
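The display arithmetic behind these requirements is simple to sketch. The snippet below (illustrative only, not from any handset SDK) computes the largest 4:3 rectangle that fits a given screen, which is what a player effectively does when it letterboxes or pillarboxes a broadcast picture:

```java
// Sketch: fit a 4:3 video into an arbitrary screen (integer arithmetic).
public class FitVideo {
    // Returns {width, height} of the largest 4:3 rectangle inside w x h.
    static int[] fit43(int w, int h) {
        if (w * 3 <= h * 4) {                   // screen at least as tall as 4:3: use full width
            return new int[] { w, w * 3 / 4 };
        }
        return new int[] { h * 4 / 3, h };      // wide screen: use full height, pillarbox sides
    }

    public static void main(String[] args) {
        int[] qvga = fit43(320, 240);  // QVGA landscape is exactly 4:3
        int[] wvga = fit43(800, 480);  // WVGA is wider, so the picture is pillarboxed
        System.out.println(qvga[0] + "x" + qvga[1]); // 320x240
        System.out.println(wvga[0] + "x" + wvga[1]); // 640x480
    }
}
```

On a tiltable screen the same computation is applied to the rotated dimensions, which is why landscape orientation gives the larger 4:3 picture.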

These lists of features should be considered essential. There are many other functions available as well, such as GPS and location-based services, support of broadband wireless such as WiMAX, and 3D animations or gaming. Mobile TV applications and mobile multimedia place considerable demands on the mobile phone in terms of processing power, memory, connectivity to networks, graphics handling, and rendering of displays. These phones therefore come with high-power CPUs and memory.


Figure 16.3: Features for support of multimedia applications.

Table 16.1: Screen Resolutions of Mobile Handsets.

Resolution           Name                  Use
1600 × 1200 pixels   2 MP (2 megapixels)   Camera
1280 × 960 pixels    1 MP (1 megapixel)    Camera
640 × 480 pixels     VGA                   Camera/Video
320 × 240 pixels     QVGA                  Video
800 × 480 pixels     WVGA                  Video
176 × 144 pixels     QCIF                  Video/Video Telephony
128 × 96 pixels      SQCIF                 Video Telephony only

Such phones typically provide RAM of 64 MB with application memory of up to 1 GB or higher. Due to the nature of phone services, multiple applications need to remain open while a call is on. Not all phones are suited for mobile TV, as their supported screen resolution is too low. Table 16.1 lists the screen sizes commonly available.
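Table 16.1 can also be expressed as data. The sketch below is illustrative only (it is not from the book): it encodes the table's resolutions and flags which are practical for mobile TV, taking QVGA as the assumed minimum based on the text's remark that some phone resolutions are too low.

```python
# Table 16.1 as a lookup of name -> (width, height) in pixels.
RESOLUTIONS = {
    "2 MP":  (1600, 1200),  # camera
    "1 MP":  (1280, 960),   # camera
    "VGA":   (640, 480),    # camera/video
    "QVGA":  (320, 240),    # video
    "WVGA":  (800, 480),    # video
    "QCIF":  (176, 144),    # video/video telephony
    "SQCIF": (128, 96),     # video telephony only
}

# Assumption: QVGA (320x240) as the minimum for a watchable mobile TV picture.
MIN_TV_PIXELS = 320 * 240

def suitable_for_mobile_tv(name):
    """Return True if the named resolution meets the assumed QVGA cutoff."""
    w, h = RESOLUTIONS[name]
    return w * h >= MIN_TV_PIXELS

for name, (w, h) in RESOLUTIONS.items():
    ok = "yes" if suitable_for_mobile_tv(name) else "no"
    print(f"{name:6s} {w}x{h:<5d} mobile TV: {ok}")
```

Under this cutoff, QCIF and SQCIF handsets fall below the line, matching the text's observation that video-telephony-class screens are not suited for mobile TV.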

Handsets for Mobile TV and Multimedia Services


Quick Facts
Handset Features an Operator Must Consider for Delivering Mobile TV

● Handset display (shape, resolution, and minimum size), keypad navigation, keys for multimedia functions
● Understanding the user interface and related functions through which users will access multimedia
● Availability of over-the-air firmware update functions (FOTA)
● Understanding existing applications and power management on the handset
● Available software APIs and their interfaces to operator-provided ESG and mobile TV clients
● Identifying base-level handsets and high-end handsets for the service. The high-end handsets will provide integration with navigation functions, advanced UI, 3D mapping, and hardware acceleration
● Available functions on the handset and whether the operator wishes to curtail some features
● Understanding of how content protection schemes will work on the handset

16.3.2 Mobile Phone Architecture

Mobile phones for multimedia applications need to support multiple functions as well as a range of interfaces. They come equipped with mobile TV tuner/decoders, GPS, and navigation applications, which need to be handled with additional chipsets. This has made mobile handsets increasingly processing-intensive. In fact, the CPU power of some of today's mobiles compares well with that of the personal computers or desktops of 2001–2002.

Figure 16.4: Mobile phone processing power.


Many factors determine which phones can be used to meet the objectives of an operator. The networks (or operators) certify the phones after they are tested for conformance for all parameters and software used on the network.

Network technology

Mobile networks are based on various technologies such as GPRS, EDGE, 3G-GSM (UMTS), 3G-CDMA (CDMA2000/CDMA-1x), or EV-DO. In broadcast networks, the technologies involved may include DVB-H, DVB-T, S-DMB or T-DMB, ATSC Mobile DTV, and others such as ISDB-T, DAB-IP, etc. Not all phones are designed to work in all bands or all technologies. Hence in every segment of technology, specific phone models are validated and released for use. A recent trend has been toward universal phones that support all bands as well as a number of mobile TV technologies, making their use possible on many networks.

Application software

Most operators use specific software for providing network-originated services such as an electronic service guide, interactive screens, weather, and other information. In many cases, these are based on specific underlying software that needs a corresponding client in the mobile phone to view the services. For example, the NTT DoCoMo network has been based on the use of Adobe Flash and needs the corresponding player in the phones. Verizon Wireless, for its CDMA 1x network in the United States, uses BREW as the underlying technology, and hence the phones need BREW support. (Verizon now supports Flash as well.) It also operates in the cellular (850 MHz) and PCS (1900 MHz) bands for the United States, and accordingly the phones need to support these bands. Examples of phones that can be used on the network are the LG VX-8300, Motorola RAZR V3M, and dozens of others, all of which support CDMA 850/1900 and BREW. Other networks may use technologies such as Java J2ME and SVG-T, and this mandates the use of phones that support this functionality.

T-Mobile USA is a 3G-GSM operator in the 1900 MHz band that supports J2ME- and Flash-based applications. Phones for other 3G networks such as AT&T may not function as intended for applications in this environment.

User interface

Mobile phones are also characterized by their user interfaces (UI) and operating systems. The Symbian operating system, for example, is used in NTT DoCoMo phones, and the UIQ interface (by UIQ Technologies) is used to provide a FOMA interface.

Multimedia file handling

As new phones are announced, we see devices with the capability to handle a broader range of file formats and to store more video and audio clips. Some phones (such as the Sony Ericsson W950) are designed to be predominantly a Walkman™ or an iPod type of device


with capabilities to store and play back a large number of clips, movies, and the like. These phones are essentially music and video machines that can play downloaded or live video and audio streams.

Video

If interactive applications are desired, mobile phones need the capability to capture video, as well as to play it and enable its sharing on live calls or via messaging. This requires the mobile handsets to handle video capture in 3GPP, MP4, or H.264 formats (MPEG-4 video, AAC audio) with CIF, QVGA, or VGA resolution. The phones need media players that can play a wide range of file formats including Windows Media, Real, DivX, and Adobe Flash Lite, amongst others. However, not all players need to support all formats.

Audio files

The handling of audio files requires a range of decoders and players such as MP3, RealAudio, Windows Media Audio, MPEG-4 audio (AAC), AAC+, and others in addition to the native formats of WAV and MIDI and the voice codecs AMR-NB and AMR-WB. Most multimedia phones have support for stereo audio with multiple file types and player support. Equally important is the capability to save content, and hence a large amount of flash memory via pluggable memory cards is a key requirement.

Figure 16.5: Sony Ericsson Walkman phone.


Phone series

Owing to the almost continuous development of mobile phones, individual models are best viewed as part of a series. Each series has certain properties or attributes that are preserved or enhanced over time. Nokia, for example, has come out with its N-Series of phones, which are designed specifically for multimedia applications.

Cell phone antennas

Mobile TV tuners require an antenna for reception of VHF/UHF signals. This can be an issue with users who are used to phones without antennas. In many networks where the density of transmission towers is not high enough, the antenna needs to be extended, which may put off users from buying such handsets or services. This emphasizes the importance of a properly designed transmission network.

16.3.3 Handling Video, Audio, and Rich Media: Media Processors

The need to handle multimedia on mobile phones, which includes functions such as rendering of animations and video frames, audio, graphics, MIDI tones, and so on, has led to an increasing preference for offloading these functions from the CPU to specialized devices called media processors. These handle functions such as 3D rendering, which can be very processor-intensive. They are also used for special effects in games and 3D effects in music. Once these functions are handled by the specialized device, the CPU is freed for other tasks such as network functions. An example of a media processor is the Imageon 238x, which can handle 30 fps playback or recording of full-resolution (SD) video and supports a full range of codecs: H.263, MPEG-4, H.264, RealVideo, and Windows Media. For 3D, it provides a performance of 100 megapixels per second and 3 million triangles per second. It can also support video telephony and video streaming applications.
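The case for offloading can be made concrete with a rough calculation. The figures below are my own illustrative assumptions (not vendor data): raw pixel throughput, i.e., the number of pixels that must be decoded and rendered per second, grows quickly with resolution and frame rate.

```python
def pixel_throughput(width, height, fps):
    """Pixels per second that must be decoded and rendered."""
    return width * height * fps

sd = pixel_throughput(720, 480, 30)    # full-resolution SD at 30 fps
qvga = pixel_throughput(320, 240, 30)  # a typical handset screen at 30 fps

print(f"SD playback:   {sd / 1e6:.1f} megapixels/s")
print(f"QVGA playback: {qvga / 1e6:.1f} megapixels/s")
```

Full-resolution SD playback works out to roughly 10 megapixels per second before any 3D or UI compositing is added, which is why sustaining it on a general-purpose handset CPU, alongside network and call handling, favors a dedicated media processor.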

16.4 Handsets for 3G Services

16.4.1 Mobile Devices for 3G Networks

An example of a mobile TV phone for 3G networks is the Toshiba G01, an interesting example as it supports a WVGA screen of 4.1 inches. Based on Windows Mobile 6.1, it supports Toshiba's REGZA TV technology together with GPS, navigation, and games. The handset supports DivX, Adobe Flash Lite, GPS, and Internet Explorer Mobile 6. It is designed to operate with HSDPA and HSUPA networks (Figure 16.6).

The iPhone 3G, available on the AT&T network, is a feature-rich multimedia phone. It has a video recorder, editor, and rich MMS capabilities. It can be used to obtain videos and movies from the iTunes Store.


16.4.2 HSDPA Handsets

An example of an HSDPA phone is the LG VU (CU920), with a 3-inch touchscreen. The phone is designed to operate on the 3G/HSDPA network of AT&T and is a quad-band phone with support for 850, 900, 1800, and 1900 MHz. It supports access to video and TV via the Cingular Video (now called CV) service operating over 3G. The phone also has a TV tuner for MediaFLO services, which are now provided by AT&T using the FLO network. As expected in a multimedia phone, the device has a 2 MP camera and supports video.

Figure 16.6: HSDPA phones: LG VU CU260 and Toshiba G01. (Courtesy of LG and Toshiba)

The LG VU also has a music player (MP3 and AAC) with a customizable equalizer. NTT DoCoMo in Japan has an HSDPA service that can give download speeds of 3.6 Mbps and above. The HSDPA service is in addition to its FOMA service, a 3G service with expected download speeds of 384 Kbps. Examples of NTT DoCoMo FOMA handsets are the FOMA HT1100 and FOMA F1100, amongst others.

16.4.3 CDMA Phones

Phones for CDMA services are available from a number of vendors, including models with the BREW or Symbian operating systems. Sprint operates an all-CDMA network that


has been upgraded with 1xEV-DO. The Sprint (including Nextel) platform is characterized by the use of Java for support of various applications. A number of phones can be used on the network, including the LG LX370, Samsung SCH-i830, Palm Treo 700p (CDMA), and HTC Apache. Nokia has a number of phones using the Series 40 third edition platform with Java support (CLDC 1.1 and MIDP 2.0). These include handsets with 320×240 resolution and music player support, such as the Nokia 6275. The Verizon CDMA network uses BREW as the primary environment for mobile phones and applications. Over 100 BREW applications were available in 2009 on the Verizon network. It is estimated that over 40% of the mobiles used on Verizon use BREW technology, which is provided free to handset makers.

16.5 Handsets for Terrestrial Broadcast Services

Terrestrial broadcast technologies such as ATSC Mobile DTV, DVB-H, CMMB, and T-DMB are used by broadcasters to deliver mobile TV to customers who “traditionally” do not fall in their domain. Such customers may be subscribers of AT&T, Verizon, T-Mobile, or other mobile operators. Delivering mobile TV using terrestrial broadcasts is an “off-network” technology, as opposed to 3G- or HSDPA-based TV. Handsets for terrestrial broadcasting are characterized by the tuners and decoders that need to be provided in the handsets to present the decoded signals to a media processor. Ensuring that a sufficient number of customers buy such handsets has proved to be a challenging task, except in environments where these services were initially offered free, such as Korea and Japan.

16.5.1 Handsets for T-DMB Services

With the success of free-to-air T-DMB services in South Korea, a large number of handsets are available with T-DMB tuners. One example is the E-TEN Glofiish V900 mobile phone, a T-DMB device integrating Streamezzo's rich media client technology. The device supports a DVB-H/DVB-T/T-DMB/DAB TV broadcast receiver as well as a GPS receiver with the SiRF Star III chipset and Windows Media Player 10. It features a finger-touch interface (Spb Mobile Shell) and is based on a Samsung S3C6400 processor and a 2.8-inch VGA resolution screen.

16.5.2 Handsets for DVB-H Services

With the commercialization of DVB-H services worldwide, a number of phones have been introduced that can receive these transmissions. The Nokia N92 has been a traditional bellwether N-Series phone used in many DVB-H networks from their inception. The phone has a 2.8-inch main screen with QVGA (320×240-pixel)


resolution and a second display of 128×36 pixels. It is quad-band with 3G (UMTS), GSM, GPRS, and EDGE support in the 900/1800/1900 GSM and 2100 WCDMA bands. It has a DVB-H receiver with MPEG-4/AVC video (384 Kbps) and a full range of audio codecs (voice: AMR-WB, AMR-NB; music: MP3, MP4 [AAC and eAAC+], WAV). On WCDMA 2100, it can operate simultaneously in circuit-switched (64 Kbps) and packet-switched (384 Kbps uplink/384 Kbps downlink) modes. It has separate media keys for play/pause, stop, next, and previous.

With the launch of its DVB-H network, 3 Italia has announced the use of a series of more advanced handsets. Examples are the LG KU900, LG KU950, and LG KU960 mobile phones. These are GSM/3G-compatible for Europe with support of all four bands.

16.5.3 Handsets for ATSC Mobile DTV Services

ATSC Mobile DTV services are relatively new, and only a few handsets at present come equipped with ATSC Mobile DTV tuners. However, according to a study coordinated by the National Association of Broadcasters (NAB), over 130 million devices are expected to be in use by 2012. Receivers for ATSC Mobile DTV are expected to span the entire range from mobile phones to gaming devices, PMPs, PNDs, and mobile receivers for vehicles. An example of an ATSC Mobile DTV handset used in initial trials in 2009 is the LG Voyager (VX10K) with an ATSC Mobile DTV tuner. A number of other handsets have been available for initial transmissions of ATSC Mobile DTV, including the LG Lotus (LX600), LG Maize (KM770M), LG VX9400, and LG CU920. It is easy to recognize these phones as those used on the Verizon and AT&T networks, where an ATSC Mobile DTV tuner/decoder chip adds the new functionality.

Figure 16.7: ATSC Mobile DTV phones. (Courtesy of LG)


16.5.4 Handsets for ISDB-T Networks

ISDB-T is the technology used in Japan and Brazil for terrestrial transmission. Specifically, the transmissions for mobile TV are known as 1-Seg transmissions. A majority of smartphones being sold in Japan now include a 1-Seg tuner. The phone can thus act as a TV receiver for terrestrial broadcasts in a manner similar to an FM receiver and independent of the 3G multimedia functions. As the market for ISDB-T services is mature, a large number of handsets, portable devices (such as the Kodak 3-inch OLED TV), integrated digital TVs (IDTVs), game consoles, PNDs, and others now support 1-Seg receivers.

An example of an ISDB handset is the Sony Walkman NW-A910. Another example is the Vodafone 905SH phone, which can record TV broadcasts directly to the memory card in case the user is busy with a call or other activity. It has an analog TV tuner as well as a terrestrial digital TV tuner. To make the viewing of TV a practical proposition, it is designed to provide a viewing time of four hours for digital terrestrial TV. The phone comes with a 2.6-inch (240×400-pixel) screen to make video a live experience. On the 3G network, all applications are supported, including video calling, Vodafone Live™ FeliCa, Vodafone Live™ Cast, NearChat™, and Chaku-Uta.

For T-DMB services, Samsung launched its first phone, the SPH-B1200, in 2005. Since then, it has launched T-DMB phones for the European launch of the services, i.e., the SGH P900. It also has dual-mode (S-DMB and T-DMB) phones such as the SGH P900. LG also has a full range of S-DMB and T-DMB handsets.

16.5.5 Handsets for MediaFLO Services

Only a few handsets have been released for the FLO TV services provided by AT&T and Verizon Wireless. An example of a handset for FLO TV from AT&T is the LG Invision™ CB630. It has a 1.3 MP camera with video, supports AT&T's mobile TV service (AT&T Mobile TV) and CV over the 3G network, and includes a multistandard music player and a video sharing application. AT&T also offers the LG VU and Samsung Access for FLO services. Verizon has the Motorola Krave™ ZN4 and LG Voyager as available handsets.

16.6 Handsets for Satellite Technologies with a Terrestrial Component

Technologies such as DVB-SH and CMMB are based on the delivery of mobile TV signals either directly from a satellite or via a terrestrial repeater. The terrestrial repeaters may operate at the same frequency as the satellite transmission (as in DVB-SH) or in the UHF/VHF bands.

16.6.1 DMB Multimedia Phones

In the S-DMB network in Korea, which is based on S-band transmissions from the satellite or terrestrial repeaters, the phones carry built-in antennas for satellite reception and


demodulator/decoders for S-DMB. Samsung's SCH-500 is an example of a satellite DMB phone introduced in Korea for S-DMB. Later S-DMB phones include combinations with Korea's WiBro technology or 3G; the Samsung SPH-1000, for instance, is an S-DMB/WiBro combo phone with a full QWERTY keyboard and a 2.2-inch QVGA screen.

Quick FAQs
Handsets for Mobile TV

1. Can the iPhone 3G be used for watching mobile TV?
In the United States, the iPhone 3G is offered only by AT&T. At present, AT&T restricts the type of content that can be watched on the 3G network (e.g., its restriction of SlingPlayer Mobile). However, downloadable applications are available that enable the phone's video player to be used for streaming mobile TV. An example of such an application is Sky Mobile TV, which is free to download on iPhones in the UK and comes with a subscription of £6 per month.

2. Do all networks (such as DVB-H) require specialized handsets?
Specialized handsets are required only under the following conditions:
● The service is encrypted and requires an embedded CA handset.
● The frequency band in use (such as AWS or L-band) requires specific tuners.
● The operators wish to deliver ESG using an application client that has been designed only for certain handset types.
● The service is two-way interactive, requiring specific software validated for certain handset types.

3. Why do operators provide “locked handsets”?
Operators may provide locked handsets for many reasons, including:
● The handsets are given away for free or at a nominal charge based on a time-linked subscription package.
● They wish customers to migrate from other networks for handset features that customers may find irresistible.
● The handsets are customized for certain network features (such as FOMA in Japan).

However, the locking of handsets has proved counterproductive in most cases.

16.7 Handsets for CMMB

The CMMB services have been rolled out rapidly in China, with over 170 cities going live by mid-2009. As China uses TD-SCDMA as the technology for 3G, the Chinese Ministry of Industry and Information Technology (MIIT) has licensed TD-SCDMA/CMMB handsets for use in China. Yulong and Hisense are among the first companies to have licensed handsets


(for example, the Hisense TM86). In addition, a number of manufacturers are developing handsets based on China Mobile's Open Mobile System (OMS); these include TCL, LG, Samsung, Lenovo, and Motorola, amongst others. As 3G subscribers are still limited in China, a number of handsets with GSM/CMMB and CDMA/CMMB technologies are also available. Receiver devices for CMMB include mobile phones, PNDs, and GPS devices.

Figure 16.8: Examples of CMMB devices. (Courtesy of Taier and C-XINDA)

16.8 Phones for WiMAX and WiBro Technologies

Samsung unveiled the world's first handset based on WiBro (which is based on 802.16e) in January 2006. Since then, a number of WiMAX handsets have been offered by operators. The Yota network in Russia uses the HTC MAX 4G as a GSM and mobile WiMAX phone. The handset has a 3.8-inch screen with WVGA resolution (800×480) and a built-in GPS. The Yota service offers over a dozen live TV channels and over 50,000 music titles online. The high-speed mobile WiMAX network is the key to the mobile TV service and online music access. An example of a WiBro phone used in Korea is the Samsung SPH-M8100, which is a T-DMB, CDMA, and WiBro phone.


Figure 16.9: WiBro and WiMAX handsets. (Courtesy of Samsung and HTC)

16.9 Portable Navigation Devices (PNDs)

An example of a PND is the Mio C728, based on DiBcom receiver technology. The PND is meant to be mounted in automobiles and has a diversity-antenna-based DVB-T/H mobile TV receiver. It also has a GPS receiver with navigation software and is available with TeleAtlas maps of Europe. It has a 7-inch (800×480-pixel) screen. Typical components in such a receiver would be a DiBcom diversity DVB-T/H receiver chip, a GPS receiver, and a multimedia processor. Another example of a PND is the Garmin nuvi 900T.

16.10 Can Handsets Be Upgraded with the Latest Technology?

The average lifetime of a handset is now less than two years, because manufacturers launch phones with newer features all the time and because users move up the usage chain for new features. A question frequently asked is whether the phones themselves can


Figure 16.10: Portable navigation devices with mobile TV. (Courtesy of Garmin and DiBcom)

be upgraded with technology advancements. The key issue is whether users can run new applications on existing handsets. Open operating systems such as Linux (open source), Symbian, and the widely used Windows Mobile 6 certainly act as facilitators. However, at present most operators predominantly use network-specific features and services, accomplished through software downloads to mobile phones as well as client software applications. The services are tailored to a specific network. Most operators announce new series of phones designed to work with new releases of applications. In the future, as the pace of release of applications and features picks up, it is expected that upgrades of phones (using operators' websites, for example) will become common.

16.11 Summary

The arena of handsets is the most dynamic face of the mobile industry. The usage of applications that can be delivered over the new generation of networks depends on the features available on handsets and their intuitive use. Handsets, with all their inherent limitations of power, memory, screen size, and keypads, have kept pace with the availability of new applications and features. Handsets need to be network-specific at present due to operator-specific features and service configurations (except in select GSM networks). However, more and more handsets are becoming available for multiple bands and multiple operators, implying support of multiple technologies for mobile TV and multimedia.


Before We Close: Some FAQs

1. I bought a handset through one operator of DVB-H mobile TV. Can I change my operator using the same handset?
Only if the handset has OMA-BCAST-based security and the new mobile operator also supports OMA-BCAST. If services are unencrypted, any handset can be used with any operator.

2. Are all 3G handsets HSPA-compatible?
No. Handsets are available in the market that support only 3G, as well as models that support HSDPA and HSPA.

3. Does the iPhone 3G support HSDPA?
Yes, the iPhone 3G supports HSDPA with speeds of up to 3.6 Mbps, while the iPhone 3GS supports HSDPA at 7.2 Mbps.

4. Why is RAM so limited in mobile handsets (e.g., 32 MB to 256 MB) as compared to flash memory, which can be 8 GB or more?
RAM is required only for runtime programs in the mobile phone. The OS and all files (pictures and multimedia files) are saved in flash memory. This also ensures that data is not lost when the battery is removed.

5. What type of memory read/write speeds are needed for recording video on a memory device in a handset?
A QVGA video (320×240) with MPEG-4 compression requires about 100 Kbytes/sec read/write speed.
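The ~100 Kbytes/sec figure in FAQ 5 follows directly from the stream bitrate. The sketch below works the arithmetic; the bitrates are assumptions chosen as typical for QVGA MPEG-4 recording, since actual rates depend on encoder settings.

```python
# Worked example (assumed bitrates, not measured figures): the sustained
# write speed a memory card needs in order to record a QVGA video stream.
VIDEO_KBPS = 640   # assumption: QVGA MPEG-4 video bitrate
AUDIO_KBPS = 128   # assumption: AAC audio bitrate

total_kbps = VIDEO_KBPS + AUDIO_KBPS   # total stream bitrate in kilobits/s
kbytes_per_sec = total_kbps / 8        # 8 bits per byte

print(f"Required sustained write speed: {kbytes_per_sec:.0f} KB/s")
```

At 768 kbps total, the card must sustain 96 KB/s, consistent with the roughly 100 Kbytes/sec figure quoted above; even modest memory cards comfortably exceed this.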


CHAPTER 17

Mobile TV and Multimedia Services Interoperability

Everything has been said before, but since nobody listens we have to keep going back and begin all over again.
André Gide

17.1 Introduction

We are in the early days of mobile TV and video calling, but Internet access, browsing, video and audio file downloads, MMS, and other multimedia applications are now used extensively. Video calling and 3G mobile TV services are available in many networks across countries, but their usage at present is minimal and will grow over time with the universal availability of handsets.

One of the key factors in the widespread growth of mobile multimedia services and mobile TV will be the capability of these services to function across multiple networks. This also means a larger range of handsets being usable in a network. This aspect has been drawing the attention of industry players, including standards organizations, operators, handset manufacturers, and application designers. The players also realize that we are in a bipolar world of CDMA- and GSM-evolved 3G networks and that coordination is necessary to impart network interoperability, roaming, and porting of applications. The industry is investing considerable resources toward an early harmonization of standards and services. This chapter gives an introduction to the principles on which roaming and network interoperability will be built in IMT-2000 networks for multimedia and mobile TV services.

17.1.1 Interoperability: A Multidimensional Issue

At the turn of the century, in the year 2000, most of the world was basking in the glow of mobile interoperability. The GSM mobile networks, which had by now spread to all continents, including the Far East, Asia, South America, and Africa, in addition to the whole of the United States and Europe, provided seamless roaming. Apart from the networks

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00017-5


that were interoperable, the commercial arrangements had also fallen into place, and testing had been completed to enable customers to roam seamlessly, at least in the GSM world. Large domains of customers covered by CDMA networks, however, had only limited interoperability. Even though many countries such as the United States, Japan, Korea, and India had a large base of CDMA mobile customers, interoperability and roaming remained out of reach due to different bands of operation, differences in the CDMA technologies themselves, and the unavailability of sufficient roaming arrangements.

By 2002, networks had started rolling out 3G services based on GSM as well as CDMA. The FOMA network in Japan demonstrated the power and commercial viability of 3G services by virtue of its UIs and menus, which rode on top of the technology to make the network offerings friendly and user-centric rather than technology-centric. At the same time, developments continued in the GSM and CDMA worlds under the 3GPP and 3GPP2 partnership forums to deliver advanced services that could be globally implemented by building on a base of common standards and protocols (e.g., 3GPP, 3GPP2, H.324M, and SIP). These attempted to standardize all elements involved in setting up a multimedia call, such as file definitions, call setup, and information delivery mechanisms, as well as interoperability, by defining a common base-level standard that needed to be supported in all networks, with only the enhancements in features being optional. This worked out quite well for the individual 3G or GSM networks and CDMA networks. There was a clear recognition and an objective before all groups coordinating the developments that interoperability and roaming alone can lead to greater penetration of devices and networks. No one wished for a return to the situation of the early days of GSM/CDMA networks.
The launch of new broadcast-oriented mobile TV networks has added another dimension since 2004. The new networks are based on technologies such as ATSC Mobile DTV, DVB-H, DMB, 3G, FLO, and ISDB-T. The launches were constrained to be in different bands and with differing technologies. The true dimensions of interoperability were now obvious to everyone: true interoperability involves harmonizing the base technologies, internationally coordinated frequency bands, conditional access or digital rights management, and service personalization in various networks.

TV, mobile video, and music can mean a very wide range of services, and whether these can (or should) work interoperably with all other networks is a major issue for consideration. For example, mobile video telephony and live video streaming services have been defined as standards; can this be taken further? Can an e-mail service with animated facial expressions become an international standard? Or can push-to-talk be a common feature across networks? One way to overcome country- and operator-specific issues is multiple-standard support in handsets, although this is not true interoperability. There can be interoperability only if the parameters are coordinated sufficiently so that all devices support the same standards across networks.


Figure 17.1: Dimensions of mobile interoperability.

Bodies such as the Open Mobile Alliance (OMA), Broadcast Mobile Convergence Forum (bmcoforum), Open Handset Alliance (OHA), Open Mobile Video Coalition (OMVC), and others have been sufficiently concerned with these developments to come out in support of features that would enable interoperability of networks, devices, and services. The release of OMA-BCAST, based on open standards by OMA, was in fact seen as a step in this direction. The response of the mobile world (standards bodies, country regulators, operators, handset manufacturers, and others) has been equally broad-based and multidimensional. Even China, known in the past for its proprietary technologies, has announced support of mLinux, an open embedded operating system for phones, and technology collaboration for 3G (TD-SCDMA), and it already supports roaming for business travelers between Korea, Japan, and the United States. In the STiMi and CMMB technologies now adopted by SARFT for nationwide rollout of mobile TV, standards such as those for audio and video encoding, transport stream, encryption, and handsets are aligned to international standards, a break from the past of homegrown standards.

Industry efforts at interoperability include the chipset makers, who have in the past promoted the use of global roaming chipsets for various frequency bands; handset makers, with dual-mode 3G-GSM and CDMA2000 1x/EV-DO phones that can roam globally; multistandard tuners for all the broadcast mobile TV transmissions; and middleware for service characterization. Middleware also helps to select and launch common software and features across multiple networks and handsets. Interoperability for voice networks is now a reality in large parts of the world.


Broadcast operators have also been seeking interoperability, but for different reasons. They broadcast programs for HDTV, DTV, and mobile screens and cannot afford to produce programs separately for each screen. They also face the problem of targeting handsets with different content security mechanisms, used in up to a dozen different mobile networks in each market. Adoption of common underlying technologies of content encoding (i.e., H.264, AAC), datacasting (IPDC), and content encryption (OMA-BCAST SCP) helps in a manageable service rollout.

This should not, however, suggest that we are there yet in terms of interoperability. In fact, true interoperability and roaming for multimedia services remains quite distant on the horizon. There are, however, concerted efforts in the industry to usher in interoperability. We look at some of these efforts and preview some real-life networks in terms of interoperability in this chapter.

17.1.2 Understanding the New Mobile Environment

The mobile world has evolved under the umbrella of standards developed by the 3rd-Generation Partnership Projects (3GPP and 3GPP2). Similarly, the digital video broadcasting standards have developed under the aegis of the DVB, ATSC, FLO, ISDB, and other standards bodies. Services that developed independently on their own networks (television broadcasting, radio, Internet, voice, streaming via the Internet, and mobile services) now all target the same device: the mobile phone. This is leading to considerable convergence, and it requires cooperation between different standards bodies.

17.1.3 Implementation Profiles

Another dimension of interoperability is the profile used in a network, that is, the choice of parameters, even where the underlying technologies are common, such as OMA-BCAST. The bmcoforum, for example, publishes recommended parameters, which help standardize operator networks and handsets. Interoperability is of interest to users as well as handset manufacturers. Users, for example, might want a video conference or a video call with users on different networks. Handset manufacturers need standard devices that will work in various networks in order to achieve economies of scale. Interoperability, while appearing to be a noble virtue, may not be viewed this way by those who benefit from exclusive products; consider the linkage of the iPhone to the AT&T network, and the customers AT&T subsequently acquired because they wanted to use these phones. Many operators want to distinguish their networks by key services that are not available in other networks, mobile TV being one of them. In such cases, they may not permit access to these services from other "me-too" networks.

Mobile TV and Multimedia Services Interoperability


Figure 17.2: A mobile device operating with multiple services is governed by interdependent standards-setting from multiple organizations.

Figure 17.3: Moving toward an interoperable environment.


17.2 Organizations for Advancement of Interoperability in Mobile TV

17.2.1 Open Mobile Alliance (OMA)

OMA is an association of over 300 companies representing mobile operators, handset and device suppliers, IT companies, and others, with a mission to "facilitate global user adoption of mobile data services by specifying market-driven mobile service enablers that ensure service interoperability across devices, geographies, service providers, operators, and networks" (http://www.openmobilealliance.org/). OMA has been working toward creating an interoperable environment by laying down guidelines for the implementation of technologies. The standards in many cases specify neither the parameters that should be used for implementation nor the characteristics of handsets; this gap is filled by the OMA. It also organizes "Testfests," along the lines of the "Plugfests" run by the WiMAX Forum, to test interoperability between various products and technologies. A large number of mobile applications today follow OMA guidelines, including those for Digital Rights Management (DRM), OMA MMS, OMA browsing, e-mail notification, Instant Messaging and Presence Service (IMPS), the OMA game services client-server interface, OMA push-to-talk over cellular, OMA downloads over the air, and others. OMA interoperability specifications are essentially "enabler releases," which serve as building blocks for interoperable implementations.

Figure 17.4: OMA enabler releases for interoperable applications.


17.2.2 Mobile DTV Alliance

The OMA is involved in mobile interoperable applications; the Mobile DTV (MDTV) Alliance builds on this work by providing interoperability guidelines for mobile TV. In the United States, the MDTV Alliance's area of focus is now ATSC Mobile DTV technology, in addition to the DVB-H systems for broadcasting mobile TV, for which it has issued the North American Mobile TV Implementation Guidelines. These guidelines are designed to provide an interoperable technical foundation, including:

● Implementation guidelines for encryption systems and the EPG to support different business models
● Implementation of a uniform service layer over multiple underlying broadcast systems

The MDTV alliance issues guidelines for the North American market, but these guidelines are used globally. The MDTV alliance has an agreement with the bmcoforum (Broadcast Mobile Convergence Forum) for the testing and implementation of mobile TV outside of North America. The bmcoforum guidelines are widely used in Europe and Asia. The joint efforts are aimed at making mobile TV a mass-market service.

17.2.3 The bmcoforum

The bmcoforum provides guidelines that enable implementations using the strengths of both terrestrial broadcasting and mobile networks. Terrestrial broadcasting provides a means to deliver content to a large number of devices and to transmit cyclic content (carousels), but this must be complemented with open encryption systems. The bmcoforum implementation profiles reflect these methodologies. The bmcoforum recommendations on implementation profiles for OMA-BCAST 1.0 (Table 17.1) are an example of specifying concrete parameters so that services implemented by many operators are interoperable. These recommendations restrict implementations of OMA-BCAST 1.0 to subsets of the values permitted under the OMA-BCAST 1.0 profile, thus making interoperability more practical.

Table 17.1: bmcoforum Implementation Profiles for OMA-BCAST V 1.0.

S. No.  Implementation Profile (V 2.0 for OMA BCAST V1.0)   Date
1       Distribution                                        June 30, 2009
2       DRM for Connected Devices                           June 30, 2009
3       DRM for Unconnected Devices                         June 30, 2009
4       DVB Adaptation                                      June 30, 2009
5       MBMS Adaptation                                     June 30, 2009
6       Service Guide                                       June 30, 2009
7       Services                                            June 30, 2009
8       Smartcard Profile                                   June 30, 2009


17.2.4 Open Handset Alliance

Any discussion of the OMA must also mention the role of the Open Handset Alliance (OHA), a recently created association of industry players committed to developing applications, services, and handsets based on open standards. One of the alliance's initiatives is the release of Android, the first complete, open, and free mobile platform. The new platform, released in mid-2008, is complete in that it covers the operating system, middleware, and mobile applications. The open platform helps go beyond the "walled gardens" created by applications that must run on proprietary platforms or that restrict users from installing open applications. The Open Handset Alliance is backed by Google.

17.2.5 International Multimedia Telecommunications Consortium (IMTC)

The IMTC, set up in 1993 with an open membership policy, "is an international community of companies working together to facilitate the availability of real-time, rich-media communications between people in multiple locations around the world" (www.imtc.org). It has been actively promoting the development of interoperable multimedia products based on standards. The objective is to have international standards that span networks and technologies to provide interoperable multimedia services. In addition, it aims to provide increased compatibility in rich media products and services. IMTC member companies recently completed testing of new 3GPP features for multimedia streaming such as fast content switching (FCS) and 3GPP rate adaptation.

17.3 Interoperability in Mobile TV

The industry is moving toward greater interoperability in two ways. First, efforts are focused on making mobile TV systems more generic. This implies common core technologies, chipsets, handsets, and mobile TV headend systems. Having technologies that use common core modules leads to significant cost savings even though the end products are different. Second, within a country or region, efforts are moving toward common implementation profiles. Such profiles are prepared by industry bodies and define a subset of parameters out of the much wider range that would otherwise be possible under the standards. For example, mobile TV implementations in Europe may follow the bmcoforum guidelines, even though they are based on a common standard (OMA-BCAST).
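The idea of an implementation profile as a restricted subset of a standard's parameter space can be illustrated with a short sketch. This is purely hypothetical Python: the parameter names and the profile contents are invented for illustration, and real bmcoforum profiles are far more detailed.

```python
# Hypothetical sketch of an implementation profile: the standard permits a
# wide parameter space, while the profile (in the spirit of the bmcoforum
# guidelines, names invented) restricts it to a subset that all conforming
# services and handsets share.
STANDARD_ALLOWED = {
    "video_codec": {"H.263", "MPEG-4", "H.264"},
    "audio_codec": {"AAC-LC", "AAC-HE", "AMR"},
    "encryption":  {"DRM profile", "smartcard profile"},
}

PROFILE = {  # a narrower, fictitious regional profile
    "video_codec": {"H.264"},
    "audio_codec": {"AAC-HE"},
    "encryption":  {"smartcard profile"},
}

def conforms(service: dict, profile: dict = PROFILE) -> bool:
    """True if every parameter of the service falls inside the profile subset."""
    return all(service.get(key) in allowed for key, allowed in profile.items())

service = {"video_codec": "H.264", "audio_codec": "AAC-HE",
           "encryption": "smartcard profile"}
print(conforms(service))  # True: within the subset, hence interoperable
```

Two services (or a service and a handset) that both conform to the same profile are guaranteed to agree on these parameters, which is exactly what makes the narrower profile more practical than the full standard.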

17.3.1 Making Mobile TV Generic

Mobile TV implementations are now more generic, based on the following developments:

● Use of a common core technology of IP datacasting. DVB-H, ATSC Mobile DTV, CMMB, and MediaFLO all make use of IP datacasting. This leaves the higher-layer protocols independent of the transport mechanism used.

● Using common ESG formats. Using an ESG based on OMA-BCAST implies that the application layers that govern program information display, service purchase, and display of interactive information are largely standardized.
● Using common encryption. Use of common encryption schemes such as those provided by the OMA-BCAST smartcard profile or DRM profile implies greater freedom for customers to choose handsets or other user devices such as PNDs, PMPs, or pocket TV receivers. Mobile TV encryption products can be delivered using a MicroSD, USIM, or SD card, making the service widely available.

Increasing use of common core technologies has meant that broadcast equipment can be developed with multiple options using a common product line.
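The layering that makes this reuse possible can be sketched in a few lines. This is an illustrative Python sketch, not code from any standard: the class and method names are invented, and real bearers of course differ in modulation, FEC, and timing, which is precisely what the IP layer hides.

```python
# Illustrative sketch of why IP datacasting decouples the service layer
# from the bearer: the higher layers see only IP packets, whatever the
# physical delivery system (DVB-H, ATSC Mobile DTV, CMMB, MediaFLO, ...).

class Bearer:
    """Any transport that can deliver IP datagrams (names are hypothetical)."""
    def __init__(self, name: str):
        self.name = name

    def deliver(self, ip_packet: bytes) -> bytes:
        # Modulation, FEC, and time-slicing details are hidden from upper layers.
        return ip_packet

class MobileTVService:
    """ESG, purchase, and encryption logic written once, bearer-agnostic."""
    def __init__(self, bearer: Bearer):
        self.bearer = bearer

    def send(self, payload: bytes) -> bytes:
        return self.bearer.deliver(payload)

# The same service object works unchanged over any bearer:
for name in ("DVB-H", "ATSC Mobile DTV", "CMMB", "MediaFLO"):
    svc = MobileTVService(Bearer(name))
    assert svc.send(b"esg-fragment") == b"esg-fragment"
print("service layer unchanged across bearers")
```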

17.3.2 Commonality in Equipment for Mobile TV Services

An example of equipment that can be used to target broadcast and multicast via different systems, ranging from terrestrial broadcasting (FLO, DVB-H, or DMB) to 3G (3GPP, 3GPP2, and MBMS), is the HERA™ 3200M mobile TV transcoder from Media Excel. The transcoder is designed to adapt satellite or cable TV content (available in standard-definition ASI format) to multiple types of mobile devices over unicast, multicast, or broadcast networks. It accepts up to four input channels in DVB-ASI format and provides up to 15 simultaneous output profiles per channel.

Figure 17.5: Mobile TV transcoder HERA 3200M for multisystem content delivery. (Courtesy of Media Excel)


The specifications of the mobile TV transcoder (Table 17.2) give an insight into the multiple transmission formats, as well as the multiple content formats, that can be handled by the same device.
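A device of this kind is essentially configuration-driven: each input channel carries a set of output profiles, up to 15 per channel in the case of the HERA 3200M. The configuration model below is a hypothetical sketch in Python; the class and field names are invented and do not represent Media Excel's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class OutputProfile:
    codec: str        # e.g., "H.264" or "WM9"
    resolution: str   # e.g., "QVGA"
    bitrate_kbps: int
    target: str       # e.g., "DVB-H" or "3GPP"

@dataclass
class InputChannel:
    source: str                                  # e.g., "DVB-ASI #1"
    profiles: list = field(default_factory=list)

MAX_PROFILES_PER_CHANNEL = 15  # per the HERA 3200M specification

def add_profile(channel: InputChannel, profile: OutputProfile) -> None:
    """Attach one more simultaneous output profile, respecting the limit."""
    if len(channel.profiles) >= MAX_PROFILES_PER_CHANNEL:
        raise ValueError("profile limit reached for this channel")
    channel.profiles.append(profile)

ch = InputChannel("DVB-ASI #1")
add_profile(ch, OutputProfile("H.264", "QVGA", 384, "DVB-H"))
add_profile(ch, OutputProfile("WM9", "QCIF", 128, "3GPP"))
print(len(ch.profiles))  # 2
```

The point of the sketch is the shape of the data: one input, many simultaneous output renditions, each tuned to a different delivery system.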

17.4 Interoperability in Terrestrial Mobile TV Networks

The reception of mobile TV using broadcast technologies is essentially a matter of the following requirements:

● Having the right type of handset, which supports the appropriate frequency-band tuners so as to tune to the local transmissions
● Having the rights for reception through the encryption system or digital rights management system
● Having a flexible mix of media players for playing video and audio clips, live TV, and radio transmissions. The transmissions in different systems can be in H.264, MPEG-4, Windows Media, or RealVideo.
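These three requirements amount to a simple capability check that a receiver must pass before a service is usable. The following is a minimal Python sketch with invented field names, not part of any standard:

```python
def can_receive(handset: dict, service: dict) -> bool:
    """The three reception requirements, expressed as predicates."""
    has_tuner = service["band"] in handset["tuner_bands"]     # right tuner
    has_rights = service["drm"] in handset["drm_systems"]     # reception rights
    has_player = service["codec"] in handset["media_codecs"]  # playable format
    return has_tuner and has_rights and has_player

# Hypothetical handset and service descriptions:
handset = {
    "tuner_bands": {"UHF", "L-band"},
    "drm_systems": {"OMA-BCAST SCP"},
    "media_codecs": {"H.264", "MPEG-4", "Windows Media"},
}
service = {"band": "UHF", "drm": "OMA-BCAST SCP", "codec": "H.264"}
print(can_receive(handset, service))  # True
```

A service fails the check, and is invisible to the user, as soon as any one of the three conditions is unmet, which is why interoperability efforts target all three layers (tuner, rights, player) at once.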

If there are two DVB-H networks operating in the same market, will the subscribers be able to use them both? If so, what type of handsets will they require? The answer to the first question would have been negative until recently, when an application framework based on OMA-BCAST began to be used and external devices such as MicroSD- or SIM-card-based implementations of mobile TV started to become available. The OMA-BCAST smartcard profile enables interoperability between all smartcard-profile-enabled handsets and SIM cards, as well as with all smartcard-profile broadcast service management platforms.

Table 17.2: Specifications of Mobile TV Transcoder (HERA 3200M).

                      Inputs                               Outputs
Interfaces            DVB-ASI, SDI, Analog                 DVB-ASI, TCP/UDP, RTP over Gigabit Ethernet
Networks              TCP, UDP, RTP Unicast or Multicast   TCP, UDP, RTP Unicast or Multicast
Transmission Formats  DVB-ASI SPTS/MPTS, DVB-T, DVB-S      DVB-H, DVB-T, 3GPP, DMB, MP4, ISDB-T, ISMA
Media Formats         MPEG-2-TS, MPEG-4 ASP; Audio:        3GP, Windows Media 9, H.264/AVC, Flash 9
                      MPEG-1 Layer 1 to 3 or AC3           (compatible with Darwin Streaming Server),
                                                           Flash 8 (On2 VP6); Audio: AAC or AAC-HE, AMR;
                                                           simultaneous real-time outputs to multiple
                                                           codecs, bit rates, and resolutions

Transcoding Features  H.264 ASP/SP, Windows Media 9, MPEG-4 SP/ASP
Video Resolutions     VGA, CIF, QVGA, QCIF, SQVGA
Audio Formats         AAC-LC and AAC-HE (MPEG-2 and MPEG-4), AMR (3GPP), MPEG-4-GA


Content security of mobile TV services is discussed in greater detail in Chapter 21. However, it is useful to review the features of OMA-BCAST that facilitate interoperability:

● OMA-BCAST is broadcast platform–agnostic. This allows it to be used for DVB-H, FLO, ATSC Mobile DTV, 3G, or other types of physical delivery mechanisms.
● The scope of OMA-BCAST includes all the key services such as ESG, datacasting and electronic commerce, and service purchase and protection.
● It permits the use of a smartcard profile (SCP), which enables it to be used with secure external components or SIM cards. Authentication is done using a 3G network.
● All interfaces in a handset with the SIM and SDIO are standardized.
● Its architecture includes the interfaces for a smartcard management server and management of a client database.
● The OMA framework includes a DVB-H-, ATSC-, or 3G OMA–compliant handset with a SIM card. The Testfests carried out by the OMA ensure that the handsets incorporate and meet all the interoperability requirements.
● Handset manufacturers also need to spend less time integrating mobile TV applications, as the only software component now required is the OMA-BCAST smartcard profile agent (SCP agent).
● An OMA-BCAST implementation enables targeting of multiple networks with the same platform, including terrestrial broadcast and 3G.

17.4.1 Example of an Interoperable Platform for Mobile TV

An example of an interoperable platform is the mobile TV service delivery platform used by 3 Italia. This platform, provided by castLabs, is based on OMA-BCAST. It supports the OMA-BCAST DRM profile and has the capability to support the OMA-BCAST smartcard profile. Content aggregators such as RRD provide an end-to-end service using this platform and the OMA-BCAST SCP for delivery of their mobile TV offerings. 3 Italia uses Harmonic's Rhozet® Carbon Coder video transcoding solution to make mobile TV available via its DVB-H network.

17.4.2 Multistandard Handsets

Knowing that multiple standards will be a way of life in mobile TV, one direction of development is multistandard handsets that support mobile TV delivered through a range of networks. These are based on multistandard chipsets, which are available from companies such as Siano. An example of a multistandard handset is the ZTE M7100 (DVB-H, DVB-T, DAB, DAB-IP, and T-DMB). An example of the type of features needed in such handsets can be seen in the products of a mobile TV solutions provider: Nextreaming®.


Handsets with the Nextreaming embedded application for live TV have been widely used in Korea's satellite DMB (SDMB) and terrestrial DMB (TDMB) services, and its media players have been used in the DVB-H services launched by Telecom Italia and 3 Italia in Europe. Nextreaming players have also been used in SK Telecom's EV-DO network, where, in addition to standard players such as MP3, the handsets also support SK Telecom's PMP. The live TV application from Nextreaming is known as NexTV™, which provides an embedded solution for live TV broadcasting on a mobile phone. The application provides optimized video and audio decoders (H.264, HE-AAC, and BSAC), image encoders and decoders (JPEG), and processing functions for handling video at 30 fps at QVGA resolution. It can be ported to various CDMA or OFDM chips and multimedia processors.

Figure 17.6: NexTV™ architecture.

Nextreaming™ players and live TV applications have found use in over 50 phones, such as the Samsung SGH-P920 (for 3 Italia’s 3G/EDGE and GPRS and DVB-H), Mitac MioC810 (Korean T-DMB), and Pantech IM-U100 for SK Telecom’s EV-DO network. Its NexPlayer application is a video and audio player application that is fully compliant with 3GPP and 3GPP2 standards and supports application-specific enhancements from major operators such as Japan’s i-mode and Korea’s SK Telecom. It supports both CDMA and 3G-GSM networks and provides EVRC, G.723.1 decoders (CDMA), AMR-NB (3GPP), and MP3, AAC-LC/HE, aacPlus, and other codecs for audio. Video codec support is equally comprehensive, including MPEG-4, H.263, and H.264 video decoders.


The application supports the latest features, such as 3GPP release 5–compliant local/progressive download and streaming, 3GPP2 local/progressive download and streaming, and the streaming formats for i-mode, SK Telecom, KWISF (Korean standard), China Unicom, and others. NexPlayer has been deployed in a number of handsets, such as the Pantech Hero used by Helio (SK Telecom and EarthLink EV-DO network), the Samsung SGH-P910 (3 Italia), and the SGH-E770 (used in 3G networks in Europe, including by Orange France). NexTV for CMMB is the application for the Chinese market. Nextreaming and RMI (a manufacturer of SoCs) are delivering a complete solution based on RMI's Alchemy Au1250 and Au1300 processor families, the NexTV player, Nagravision's Conditional Access System (CAS), and China Mobile's Mobile Broadcast Business Management System (MBBMS).

17.5 Interoperability in 3G-Based Mobile TV Services

Mobile TV is provided in almost every 3G network across North America, Europe, and Asia. How interoperable are these services? Is it possible to use the same handset on two networks? Or to watch streaming mobile TV while on another network? What are the features that permit such activities? We look at these issues in this section.

17.5.1 Network Interoperability and Roaming

It is important to understand the difference between network interoperability and roaming. Under the 3GPP-IMS, roaming is defined in two forms. In the first type of roaming, the user terminal uses the IMS of the home network: the terminal uses the resources of the visited network (VPLMN) to connect to the IMS core network, which resides in the user's home network (HPLMN). The user is thus always connected to the home network for all resources via the IP facilities of the visited network, which functionally provides a tunnel to the home network. Other functions such as charging are performed in the home network for the resources used in the IMS home core network.

SK Telecom introduced a WCDMA automatic global roaming service in cooperation with Vodafone K.K. of Japan in June 2005. Using this service, customers of SK Telecom could roam in Japan and make video telephony calls using the same handset as used in Korea (Samsung W120). The service was later extended worldwide, including Hong Kong, Singapore, Italy, the Netherlands, the United Kingdom, and Germany. A menu is provided in the handsets that enables them to receive the local WCDMA provider's frequency. Similarly, roaming was also demonstrated by Korea Telecom's 3G operator Kitcom with J-Phone (now Vodafone) and others. Vodafone complemented the offering with the V801SH, which offered roaming in all 3G-WCDMA, GSM, and GPRS networks.


Figure 17.7: Roaming in 3GPP-IMS networks.

The second type of roaming is where the terminal uses the IMS of the visited network. In this case, the terminal is assigned its IP address and all resources from the visited network. It is likely that all roaming in the initial phase will be of the first type, i.e., involving the use of the home IMS. Interworking in the IMS framework means that the two IMS networks (home and visited) are connected via packet resources through an inter-PLMN IP network. The two user terminals set up sessions using SIP, which involves the two IMSs, while the user traffic flows directly using the GGSNs. (The IMS Roaming and Interworking Guidelines 3.6 were issued by the GSM Association in November 2006 in document IR.65.)
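The two roaming models can be contrasted in a tiny sketch. This is purely illustrative Python (the function and labels are invented, not from the IMS specifications), and placing charging in the visited network for the second model is an assumption for symmetry, not stated above.

```python
# Hypothetical sketch of the two 3GPP-IMS roaming models: which network
# provides IP access and which hosts the IMS core for a roaming terminal.
def ims_anchor(roaming_model: str, hplmn: str, vplmn: str) -> dict:
    if roaming_model == "home-ims":
        # Type 1: the visited network only tunnels IP traffic;
        # the IMS core and charging stay in the home network.
        return {"ip_access": vplmn, "ims_core": hplmn, "charging": hplmn}
    if roaming_model == "visited-ims":
        # Type 2: the terminal gets its IP address and all resources locally
        # (charging placement here is an assumption of this sketch).
        return {"ip_access": vplmn, "ims_core": vplmn, "charging": vplmn}
    raise ValueError("unknown roaming model")

print(ims_anchor("home-ims", "HPLMN", "VPLMN"))
# {'ip_access': 'VPLMN', 'ims_core': 'HPLMN', 'charging': 'HPLMN'}
```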

17.5.2 Roaming

Roaming among 3GPP networks is now fairly well established. The use of common radio interfaces, the availability of multiband handsets, and the commercial arrangements between different networks are now extending roaming, firmly in place for voice services, to multimedia services.

3GPP networks: FOMA

The world's first 3G network was launched by NTT DoCoMo in 2001 in Tokyo. FOMA was a portfolio of voice, data, and multimedia services delivered over the 3G network. NTT DoCoMo's use of 3G was based on WCDMA as standardized by ARIB, in the frequency ranges 1920–1980 MHz and 2110–2170 MHz. Video calling, packet-switched data, and MMS were some of the services introduced by NTT DoCoMo in its network.


Figure 17.8: Interworking in 3GPP networks.

In 2002, J-Phone in Japan also launched its 3G services, based on the Vodafone Global Standard (VGS). Both services were based on 3G-324M, but initially interworking was not possible due to different implementations of the standard. This meant that FOMA and J-Phone subscribers could place video calls within their own networks, but a FOMA subscriber could not place a video call to J-Phone and vice versa. Interworking tests were conducted in 2003 to establish interconnectivity. As far back as 2004, FOMA customers in Japan could make video calls to 3 in the UK and 3HK in Hong Kong using its World Call videophone and data call (64K) services. By 2005, FOMA interconnectivity under World Call videophone was available to 13 countries, including countries in Europe, Asia, and North America.

3GPP2 networks

3GPP2 networks today are characterized by data services, video streaming, audio, and games. As 1xEV-DO is essentially an overlay network for data, 300–700 kbps can be delivered to a user, and roaming is possible for data calls. For voice calls, the network still reverts to CDMA 1xRTT, and hence services such as video calls are not common.


The 1xEV-DO networks have now been widely deployed in the United States (Verizon Wireless and Sprint), Canada (Bell Mobility), Japan (KDDI), and Korea (SK Telecom and KT Freetel) in addition to other countries such as Brazil, Mexico, Australia, and New Zealand. Roaming services for both voice (1xRTT) and data (1xEV-DO) are available between Sprint, Verizon (United States), and Bell Mobility (Canada). The Sprint 1xEV-DO services are available under Sprint Power Vision™, which includes Sprint TV, music, games, and video mail. Similarly, KDDI offers roaming (including data roaming to South Korea, China, and North America).

17.5.3 Roaming Between 3GPP and 3GPP2 Networks

3GPP and 3GPP2 have defined the air interfaces, protocols, and codec standards for 3G-GSM evolved networks and 3G-CDMA-based (CDMA2000-1x, 1xEV-DO) evolved networks. There have been considerable developments in evolving common ground, such as agreement on IP-based core networks, support of IPv6, the protocols for call setup and release, and so on. In the 3GPP forum, release 6 specifies the operation of the 3GPP Packet-Switched Streaming Service (3GPP PSS Release 6); in 3GPP2, the equivalent is addressed under MSS (Multimedia Streaming Services). The two standards have achieved considerable harmonization, as can be seen from their use. However, this does not imply that we have achieved interoperability in multimedia services and mobile TV. First, even though 3GPP and 3GPP2 both prescribe standards for audio and video coding, these are not identical (Table 17.3). A handset therefore needs multiple players, and the intelligence to launch the right player for each video or audio file type; the file formats of the two systems, for example, represent persisting differences. Second, there are variations based on implementation. For example, the standards for packet-switched streaming services differ among 3GPP, 3GPP2, the Korean Standard for Streaming (KWISF), Japan (i-mode progressive download/streaming), and China Unicom. The handset thus needs a player that can handle streaming in multiple standards; such a player is what makes interoperability possible in multiple network environments.

Table 17.3: Features Support in 3GPP and 3GPP2.

Feature                     3GPP                                                    3GPP2
Video Codec Support         H.263, MPEG-4 (simple visual profile), H.264 optional   H.264
Support for Audio (Music)   MPEG-4/AAC-LC (optional), AAC                           AAC
Speech Codec                AMR-WB                                                  QCELP 13K, EVRC
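The practical consequence of the codec differences in Table 17.3 is that a multistandard handset must carry both codec sets and pick decoders from the right column at run time. The sketch below is illustrative Python, simplified to the table's video entries; the function is invented, not part of either standard.

```python
# Video codec support per network family, simplified from Table 17.3.
CODECS = {
    "3GPP":  {"video": ["H.263", "MPEG-4 SVP", "H.264"],
              "speech": ["AMR-WB"]},
    "3GPP2": {"video": ["H.264"],
              "speech": ["QCELP 13K", "EVRC"]},
}

def pick_video_codec(network: str, stream_codec: str):
    """Return the codec if this network family's decoders can handle it,
    else None (meaning the content must be transcoded or re-encoded)."""
    supported = CODECS[network]["video"]
    return stream_codec if stream_codec in supported else None

print(pick_video_codec("3GPP", "H.263"))   # H.263
print(pick_video_codec("3GPP2", "H.263"))  # None: needs transcoding or H.264
```

H.264 is the only video codec in the intersection of the two columns, which is one reason it has become the common ground for cross-network content.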


17.5.4 Frequency Issues

Mobile services based on different standards have historically been placed in a number of different frequency bands. Despite significant global alignment after the adoption of the GSM bands of 900, 1800, and 1900 MHz, there are services in other bands, including 850 MHz, 450 MHz, and 800 MHz. Fortunately, world roaming chipsets are now available that can select any frequency band. Frequency issues, although important, are secondary to the harmonization of protocols, codecs, and services for multimedia.

17.5.5 Network Interoperability

The basic question that needs to be answered is whether commonly used voice, messaging, and multimedia services will remain interoperable. For example, can a video call be made between a FOMA phone and an AT&T 3G phone? Can an MMS be sent and delivered successfully from one network type to another? If so, in which networks, and what are the limitations? The issue of interoperability needs to be classified under:

● Interoperability between 3GPP networks and between 3GPP2 networks, as separate groups
● Interoperability between the 3GPP and 3GPP2 networks

The guidelines for video telephony interoperability (Circuit-Switched Video Telephony Guidelines, version 1.0) were issued by the GSM Association in June 2005 in document IR.38. The document provides complete guidelines, including codec standards, call setup, multiplexing protocols, media exchange and internetwork protocols, and roaming guidelines, for calls to be set up in accordance with the 3G-324M standards.

Figure 17.9: Internetworking: 3GPP and 3GPP2 networks.


A number of interoperability tests have been conducted under the aegis of the IMTC, such as interoperability testing for:

● H.323 (video conferencing services)
● 3G-324M (video calls over circuit-switched bearers)
● 3G-PSS (packet-switched streaming services)
● SIP (Session Initiation Protocol, for call setup)
● VoIP (voice over IP)
● T.120 (data conferencing, e.g., NetMeeting®)
● H.320 (videoconferencing over ISDN)

Table 17.4 shows the status of operation of multimedia services over 3G and evolved networks. Voice, data, SMS, and MMS services can today be delivered seamlessly across both CDMA and 3G-GSM networks. It is evident that interoperability between 3GPP and 3GPP2 networks still has a long way to go before it can be deployed widely in commercial networks.

17.5.6 Packet-Switched Streaming Services (PSS): Mobile TV

Packet-switched streaming services (PSS) require a number of features to be supported. These include:

● Streaming bit rate adaptation
● Progressive download (H.264 video and AAC audio)
● Multitrack signaling

Table 17.4: Status of Operation of Multimedia Services in 3G Networks.

Circuit-Switched Video Telephony (3G-324M or H.324 with Mobile Extensions)
  3GPP: Available from all major 3G UMTS operators based on standards
  3GPP2: Limited availability
MMS Service Version 1.2
  3GPP: Widely available and deployed
  3GPP2: Widely available and deployed
3G-PSS Point-to-Point Streaming Services Based on RTSP
  3GPP: Available with H.263/H.264 or MPEG-4 codec support
  3GPP2: Available with H.263/H.264 or MPEG-4 codec support
Packet-Switched Video Telephony (RTP-Based)
  3GPP: Under implementation in networks (based on 3GPP release 4)
  3GPP2: No decision on codecs in 3GPP2
3G Broadcast Services (MBMS or MCBS): Multicast File Transfer (FLUTE), Multicast Streaming
  3GPP: Will be available (MBMS)
  3GPP2: Will be available (MCBS)


This requires that devices such as encoders at one end and decoders at the other be protocol-compliant and parameter-compliant. The Packet-Switched Streaming Activity Group (PSS-AG) of the IMTC is responsible for the interoperability of devices and networks and their testing. Trials for global interoperability using SIP and IMS interworking were conducted by the GSM Association, with over 24 operators of 3G services taking part. Video share trials, in which users can send video clips while on video calls, have also been conducted for global availability of the services. 3G streaming services are quite common in 3GPP as well as 3GPP2 networks (i.e., 1xEV-DO). Most 3G carriers offer services for streaming video clips or music. These services can be used while roaming on other networks through IP connections to the home IMS. Their availability also depends on the handset type, which should have global roaming capabilities.
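One of the PSS features listed earlier, streaming bit rate adaptation, can be sketched as a feedback loop: the sender moves between the encoded bit rates it has available according to the throughput the client reports. The Python below is a hypothetical sketch; the rate ladder and headroom margin are invented and are not taken from the 3GPP specification.

```python
# Minimal sketch of streaming bit rate adaptation: step down when the
# measured throughput cannot sustain the current rate, step up only when
# there is comfortable headroom for the next rate. Values are invented.
RATES_KBPS = [64, 128, 256, 384]   # encoded bit rates available to the sender
HEADROOM = 1.5                     # require 50% margin before stepping up

def adapt(current_kbps: int, measured_kbps: float) -> int:
    idx = RATES_KBPS.index(current_kbps)
    if measured_kbps < current_kbps and idx > 0:
        return RATES_KBPS[idx - 1]                    # step down
    if idx + 1 < len(RATES_KBPS) and measured_kbps >= RATES_KBPS[idx + 1] * HEADROOM:
        return RATES_KBPS[idx + 1]                    # step up
    return current_kbps                               # hold

rate = 256
for throughput in (300, 180, 180, 220, 600):
    rate = adapt(rate, throughput)
print(rate)  # 256: dropped to 128 during congestion, recovered afterward
```

The asymmetry (drop quickly, climb cautiously) is the usual design choice, since underrunning the play-out buffer is far more visible to the viewer than a temporarily conservative picture quality.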

17.5.7 Mobile TV Based on MBMS Broadcast Technology

3GPP release 6 provides for Multimedia Broadcast and Multicast Services (MBMS) for mobile TV. This makes it possible to multicast a number of TV channels to mobiles using unidirectional unpaired 3G spectrum. Orange in Europe has announced the launch of services based on MBMS using 5 MHz of unpaired spectrum in the 1.9 GHz band with TD-CDMA technology. This permits bundling up to 17 channels of QVGA resolution in the 5 MHz bandwidth. The service (TDtv) uses technology from IPWireless.

17.6 Interoperability in Mobile TV Provided via the Internet: IP Networks

Because mobile networks now provide access to the high-speed Internet and have IP-based core networks, the delivery of multimedia services over IP is gathering considerable attention. This includes out-of-band IP delivered by networks such as WiMAX. Applications are emerging that can deliver multimedia and mobile TV services based on IP connectivity. An example of an IP-based videophone service is the Streamphone™ 2.0 wireless video call service. The service can work on any IP network, such as Wi-Fi, WiMAX, EV-DO, or landline Internet services. Handsets are available that can receive WiMAX (e.g., the Samsung i730 Pocket PC and the Samsung M800 WiMAX and WiBro phones). Video call services can be used seamlessly from any network in the world providing IP, or from any place where access to WiMAX is available. The Streamphone 2.0 service has been tested over the Verizon 1xEV-DO network.


17.7 Interoperability of Multimedia Services

17.7.1 Messaging Interoperability: MMS

In November 2003, the OMA announced standards for MMS 1.2, a major step in taking the industry toward interoperability. MMS is an important service and can carry multimedia content, including video and audio files and video clips. The MMS 1.2 specifications define the minimum requirements and conformance needed to enable end-to-end interoperability. The effort was the culmination of long-running work between 3GPP and 3GPP2 on the various protocol levels involved in MMS services.

17.7.2 3G-324M

3G-324M is the standard for circuit-switched video telephony, streaming, and other services based on an underlying reliable circuit-switched network architecture (or an IP network emulating circuit switching). This 3G extension to the H.324 standard specifies the codec types that can be used in mobile networks and the protocols for call setup and release. Being a stable standard, the initial releases of video calls have been based on 3G-324M. It is possible to place these calls from land-based or mobile terminals and vice versa. Most 3G networks today provide video calls using 3G-324M and its enhancements. Network-specific enhancements are required in 3G-324M because:

● 3G-324M does not deal with the call setup or termination process. Call setup and release are handled by the mobile network's radio interface as a layer above the circuit-switched services provided by 3G-324M. The features are therefore network-specific, and features such as call forwarding and roaming to SS7 networks are not supported.
● In the case of a video call to a user handset that does not support video, or that is outside the roaming area where such calls are possible, the call may simply disconnect. Enhanced features are needed to fall back to a simple voice call where video calls are not supported; these are now available in many networks.

17.7.3 Video Conferencing (H.323)

H.323 is an ITU standard for audiovisual conferencing. H.323 videoconferencing has long been in use over packet-switched and IP networks for both fixed and mobile applications; Internet applications such as NetMeeting use the H.323 protocol. Support for the videoconference function is required in mobile networks for Internet-to-mobile-phone interworking. The FOMA network in Japan, for example, uses a protocol conversion function (developed by NTT DoCoMo for protocol conversion between H.323 on the IP network and 3G-324M

Mobile TV and Multimedia Services Interoperability


on the mobile network). For video calls, communications pass through ISDN from the FOMA network, which provides a bearer-based video call service for FOMA subscribers using the 3GPP 3G-324M protocol. The protocol conversion function presents the video service as H.323 to IP-based video terminals, providing full interoperability.

Figure 17.10: H.323 functions in the FOMA network.

For messaging applications, the instant messenger developed by NTT DoCoMo supports dual protocols, i.e., SIP and H.323. PCs communicate using SIP, and the H.323 communications to FOMA are converted and delivered to the handsets, which use the downloaded i-appli™ software. The SIP/H.323 protocol conversion function ensures compatibility.

17.7.4 SIP

The initial implementation of video calls in 3GPP networks was based largely on 3G-324M technology, as the standard had been stable for some time and its codecs had been agreed upon in 3GPP and 3GPP2 as well as in the fixed-line circuit-switching domain. SIP (Session Initiation Protocol) technologies are now transforming the use of videophones. Extensions of SIP for instant messaging and presence (sometimes called SIMPLE) are being widely implemented in both mobile and IP networks. SIP, being based on IP technology, provides a medium for new services such as voice over IP (VoIP) and instant messaging, and SIP clients are available in mobile phones to implement SIP-based applications. The implementation of IP-based protocols through clients that use the underlying IP layer enables the provision of services such as:

● Push-to-talk
● Presence
● Instant messaging
● VoIP
● IP-based video calling
● Clip streaming and download

As an example, Verizon Wireless operates the 1xEV-DO network in the United States. It has embraced the use of the IMS system for the provision of SIP-based services, which it calls “advanced IMS” or A-IMS. The advanced IMS provides a number of innovative SIP-based services.

Figure 17.11: Verizon Wireless advanced IMS services.

Verizon's implementation of the A-IMS standards recognizes that SIP-based services for mobile networks need extensions to handle security, loss of signal while roaming, and variable-bit-rate environments. Correspondingly, applications need to be able to deal with these events.

17.8 Summary

Interoperability and roaming in multimedia services such as video calls, streaming services, and live TV have been accepted as very important criteria for widespread use of the services


that are being delivered today from advanced IMT-2000 networks. Efforts to increase interoperability of networks and services, roaming, and universal use of handsets are clear goals before the industry. Coordination work is proceeding in multiple forums, such as 3GPP, 3GPP2, IMTC, ITU, and OMA, to ensure better compatibility in multimedia services. Interoperability and roaming are now becoming a reality in 3GPP networks and, to an extent, in 3GPP2 CDMA networks. However, it will be some time before we reach the standards of interoperability and roaming now available for voice services.

Before We Close: Some FAQs

1. How does the Open Mobile Alliance's globally interoperable mobile TV standard relate to bmcoforum?
OMA's interoperability is based on OMA-BCAST Release 1.0, which specifies the ESG, content protection, and service interactivity. However, the release is still generic. bmcoforum recommendations narrow down the specific parameters that should be used in the bmco profiles. For example, for the service guide update, OMA has the OMA PUSH fragment; however, it is not supported by bmco.

2. Do products such as USB tuners or SDIO tuners work with any mobile phone?
Not completely, as these may be OS-specific. For example, Philips' SDIO TV1000/TV1100 DVB-H card works with any Windows Mobile 5.0 or Linux phone. Siano products such as the Abilis AS-102 support multiple interfaces, such as USB 2.0 and SDIO, and can interface to many devices. An SDIO mobile DTV receiver such as that from Innoxius™ Technologies works for DVB-H, DMB-T, DAB, and other standards, providing an MPEG-2 stream that works with any smartphone or PDA phone.

3. Does OMA-BCAST support simulcrypting by multiple operators, e.g., MVNOs?
Yes, simulcrypting of a transmission by multiple operators is possible.

4. Is the CMMB format interoperable with DTMB (both formats used in China)?
No. The formats, including the framing, error codes, and audio and video formats, are all different.

5. What is the role of handset manufacturers in mobile TV interoperability?
Mobile handset manufacturers have been collaborating to make their products interoperable in different markets by adhering to common specifications and providing multistandard devices. They also take part in "PlugFests" to test interoperability.

6. Is it possible to transmit the OMA-BCAST ESG and DVB-IPDC ESG simultaneously so that the systems are interoperable?
Yes, this is possible in most OMA-BCAST ESG generators through a feature called "unified bootstrapping."


PART IV

Content and Services on Mobile TV and Multimedia Networks

You see things; and you say "Why?" But I dream things that never were; and I say "Why not?"
George Bernard Shaw, Back to Methuselah (1921)

CHAPTER 18

Mobile TV and Multimedia Services Worldwide

The significant problems we face cannot be solved at the same level of thinking we were at when we created them.
Albert Einstein

18.1 Introduction

18.1.1 Mobile TV

The rollout of mobile TV has defied almost all the projections that predicted a certain degree of uniformity or predictable behavior, as has been witnessed in other broadcast or mobile services. Services in every country seem to have their own character, usage patterns, and content types. These patterns give an insight into mobile TV that no studies based on mathematical models can provide; this is also the reason for this chapter in the book. The launch of mobile TV and multimedia services across the globe has gathered momentum, with networks having migrated to 3G, new handsets with multimedia capabilities, and an increasing focus on mobile content. The growth, however, has been quite skewed, with Korea, Japan, and China the front-runners in subscriber growth.

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00018-7


To briefly recap the history of broadcast-mode mobile TV launches: the first services were launched in Japan in October 2004 using S-DMB technology. South Korea followed, with S-DMB launched by TU Media and SK Telecom in May 2005 and T-DMB by KBS and KTF in December 2005. ISDB-T services followed in Japan in April 2006, and DAB-IP services were launched by BT Movio in the United Kingdom in June 2006. In the meantime, a number of trials were conducted for DVB-H services in countries in Asia and Europe, which led to the launch of DVB-H in Italy in June 2006 by operator 3 and in October 2006 by Mediaset. In Germany, T-DMB services were launched by MFD in June 2006, and the DVB-H trials went live during the 2006 FIFA World Cup. China Multimedia Mobile Broadcasting (CMMB) debuted with the 2008 Summer Olympics. In the United States, services based on MediaFLO technology were launched by Verizon Wireless in March 2007, followed by AT&T in May 2008.

The momentum of new launches has been somewhat hampered by licensing processes and spectrum allocation, but the field has now been set for full-fledged growth of these services through 2010, when most of the networks will be realized. A number of networks are under active implementation in China, Europe, Asia, and the United States. Although Korea and Japan were early movers with terrestrial and satellite broadcast-based mobile TV services, the United States had a strong beginning with 3G-based mobile TV offerings such as MobiTV and V CAST. MediaFLO services, launched through AT&T and Verizon Wireless, were the other alternative for TV on mobile devices in the United States, even though these were not available in all markets after the initial launch in 2007. Services such as MobiTV are available not only in the United States but also in many countries in South America, such as Brazil and Mexico. MobiTV services are also available on WiMAX and Wi-Fi networks.
Since their launch, mobile TV services have made strong headway in Japan and Korea, which today constitute the earliest offerings and the biggest installed bases, over 80 million combined. China, a late starter, has made a strong beginning, with over 189 prefecture-level cities having a CMMB service by 3Q 2009. As per a report by Pacific Epoch (March 24, 2009), SARFT (China's State Administration of Radio, Film, and TV) has predicted a subscriber base of 500 million for CMMB-based mobile TV in China by 2011. Mobile TV is set to grow strongly in the markets of the United States, Europe, and Asia, based on both 3G technologies and terrestrial broadcasting. TV, media streaming, multimedia messaging, and video on demand now complement the features of smartphones, and the revenue models of mobile operators are strongly oriented toward providing multimedia services as a core activity. The Open Mobile Alliance, IMTC, 3GPP, and other industry organizations are helping deploy the services efficiently across many countries and regions of the world.


18.2 China

China presents an interesting case for many reasons. First, the number of mobile subscribers in China is the highest of any country in the world (over 650 million in mid-2009), so it presents the greatest market opportunity for any new technology. It is also the second-largest broadband market globally, with mobile handsets a major medium for accessing the Internet. Second, only a few companies (no more than three) have controlled the majority of the market since a consolidation of operators based on mobile technologies in 2009. Finally, China has been inclined toward the use of its own standards for terrestrial television as well as mobile TV, actively driven by the Chinese government through state organizations such as SARFT and through subsidies on handsets for new technologies. This makes the Chinese market ripe for the most rapid developments in the coming years.

First, a brief background. Licenses for 3G services were granted by the MIIT, China, in January 2009 to three companies:

China Mobile: 3G based on TD-SCDMA technology
China Telecom: 3G based on CDMA2000 technology
China Unicom: 3G based on WCDMA technology

China's main broadcasters include CCTV (the state broadcaster controlled by SARFT), Hunan TV, and Phoenix TV. CCTV operates 16 channels (CCTV-1 to CCTV-16) covering all genres. Hunan TV and Phoenix TV are primarily satellite broadcasters with many popular channels. SARFT has a project to provide free-to-air satellite DBS broadcasting of over 80 channels, now being implemented at the end of 2010 (the Cuncuntong rural direct broadcast satellite [DBS] project).

18.2.1 Terrestrial Broadcasting in China: DTMB

China has over 400 million TV households, with 150 million receiving TV programs via cable. Digital cable TV has been gaining rapidly, with over 30 million households, but the majority of households still depend on terrestrial transmissions. DTMB is the national terrestrial broadcasting standard for China. Transmissions under DTMB can be in a single-carrier mode or a multiple-carrier mode (similar to OFDM), and DTMB can be operated in MFN or SFN configurations. The initial launch of DTMB was at the time of the 2008 Summer Olympics, when the service went live in eight cities with one HDTV and six SDTV channels. DTMB can be used for all screen sizes, from mobiles to HDTV; in designated cities, however, mobile transmissions have used CMMB technologies. The Chinese government has stated a goal of having all broadcasting use digital transmission by 2010.


18.2.2 Mobile Broadcasting in China: CMMB

Early implementations of mobile broadcasting in China were based on DAB. SARFT then adopted the STiMi standard for multimedia broadcasting and released a series of standards providing a complete ecosystem for CMMB technologies (GY/T 214-2006). An implementation followed in Beijing, Guangdong, Dalian, and Zhengzhou. Subsequently, SARFT placed its seal of approval on CMMB, a satellite-terrestrial standard built on the STiMi technology. CMMB services in China are provided nationally by China Satellite Mobile Broadcasting Corp. (CSMB), a wholly owned subsidiary of SARFT.

China has seen runaway growth in mobile TV after the MIIT mandate that network access approvals for 3G (TD-SCDMA) handsets be granted only if they contain CMMB-based mobile TV reception capability. At present, all three mobile companies are striving to provide mobile TV. However, CSMB, the nationwide operator for mobile broadcasting services, has signed an exclusive agreement with China Mobile for the rollout of CMMB services. This agreement potentially enables China Mobile to roll out mobile TV services in 337 prefecture-level cities by the end of 2009; over 189 such cities were already receiving CMMB services in 3Q 2009. This places China Mobile, the incumbent operator, in the lead for mobile TV using CMMB technology. China Mobile has been subsidizing handsets for the growth of 3G, which is also spreading CMMB.

18.2.3 CMMB Coverage

CMMB transmissions were launched initially in time for the 2008 Summer Olympics and covered the group 1 cities (Olympic cities, provincial capital cities, and the four municipalities). Subsequently, coverage has been extended to other cities:

Group 1 (37 locations): Olympic cities (6), municipalities (4), provincial capital cities (27)
Group 2: 113 cities

As an example, in Beijing, with six SFN transmitters operating at DS-20 (530 MHz), a coverage radius of 22 km could be achieved with average signal levels of −57 dBm.
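As a rough plausibility check on such coverage figures, the free-space path loss over 22 km at DS-20's 530 MHz can be computed with the standard formula. The transmitter power below is a hypothetical value chosen for illustration only; real urban propagation adds considerable loss on top of the free-space figure.

```python
import math

def fspl_db(freq_mhz: float, dist_km: float) -> float:
    """Free-space path loss in dB: 32.45 + 20*log10(f_MHz) + 20*log10(d_km)."""
    return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

loss = fspl_db(530.0, 22.0)           # ~113.8 dB at the 22 km cell edge
erp_dbm = 10 * math.log10(5000) + 30  # hypothetical 5 kW ERP, converted to dBm
rx_dbm = erp_dbm - loss               # received level under free-space loss only
print(f"path loss {loss:.1f} dB, received {rx_dbm:.1f} dBm")
```

The free-space result is optimistic; the margin between it and the measured −57 dBm average is what terrain, clutter, and building losses consume in practice.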

18.2.4 Terminals for CMMB

CMMB services are provided by SARFT in cooperation with China Mobile. The products that receive CMMB transmissions include, apart from mobile phones, TD-SCDMA CMMB notebooks, USB modems, and PC cards. CSMB has also been procuring CMMB "terminals," which consist of TV receivers, PMPs, USB receivers, and SD cards, in addition


Figure 18.1: Initial coverage of CMMB transmitters in China. By 3Q 2009, the coverage extended to over 190 cities.

Table 18.1: CMMB Coverage Group 1.

Olympics Cities: Shenzhen, Ningbo, Qinhuangdao, Dalian, Qingdao, Xiamen
Capital Cities: Jinan, Haikou, Fuzhou, Nanchang, Shenyang, Chengdu, Yinchuan, Zhengzhou, Xining, Changchun, Changsha, Guangzhou, Hangzhou, Nanjing, Wuhan, Hefei, Kunming, Guiyang, Lhasa, Xi'an, Taiyuan, Shijiazhuang, Lanzhou, Nanning, Harbin, Hohhot, Urumqi
Municipal Cities: Beijing, Tianjin, Shanghai, Chongqing

to handsets. Over 10 million terminals are being deployed in 2009, and the number could exceed 50 million in 2010.

18.2.5 Satellite for CMMB Services

CMMB is designed to be a satellite-terrestrial service, with satellite transmissions and terrestrial repeaters in the S-band. In February 2007, China Satellite Mobile Broadcast Ltd.


(CSMB), a company under the Wireless Bureau of SARFT, announced that it had selected China Mobile Broadcasting Satellite, Ltd. (CMBSat), a Hong Kong–based affiliate of EchoStar Communications Corporation (DISH), as the primary provider of S-band satellite capacity for China's mobile video project. The technology to be used was STiMi, which was approved by SARFT, and the satellite was manufactured by Space Systems Loral. However, the satellite, which was to be launched by China Satellite Mobile Co., a joint venture with EchoStar, did not get the requisite clearances from the various regulatory arms of the Chinese government for operation over China using the S-band spectrum. Hence the services were launched using the UHF spectrum, over which SARFT had full control, in time for the 2008 Summer Olympics. Since then, the services have expanded terrestrially to cover most of coastal China. It is expected that the satellite will be launched in 2010 to harmonize the offerings nationally, which is not the case at present, with a patchwork of different UHF frequencies being used to provide CMMB coverage in China. However, the receiver devices used have S-band capability and will work in the new environment when a satellite is available (compliant with CMMB standards GY/T 220.1-2006 and GY/T 220.2-2006).

18.2.6 Content Security in CMMB Services in China

CMMB has the capability to provide encrypted channels by using a CAS (conditional access system). The CMMB standard provides the specifications of the CAS to be deployed. However, the growth of

Figure 18.2: CMMB handsets and PC cards using MicroSD content protection.

Table 18.2: CMMB in 189 Cities/Provinces in China.

Channel  Center Freq (MHz)    Channel  Center Freq (MHz)    Channel  Center Freq (MHz)
DS-13    474                  DS-25    610                  DS-37    706
DS-14    482                  DS-26    618                  DS-38    714
DS-15    490                  DS-27    626                  DS-39    722
DS-16    498                  DS-28    634                  DS-40    730
DS-17    506                  DS-29    642                  DS-41    738
DS-18    514                  DS-30    650                  DS-42    746
DS-19    522                  DS-31    658                  DS-43    754
DS-20    530                  DS-32    666                  DS-44    762
DS-21    538                  DS-33    674                  DS-45    770
DS-22    546                  DS-34    682                  DS-46    778
DS-23    554                  DS-35    690                  DS-47    786
DS-24    562                  DS-36    698                  DS-48    794

Cities/provinces covered: Sanya, Yuxi, Mianyang; Weihai, Jingzhou, Changde, Ningbo, Lhasa (Tibet); Dongying, Guangxi (5), Nanning, Shaanxi (5), Xi'an; Longyan, Yichang, Huizhou, Sichuan (9), Chengdu, Dujiangyan, Zigong; Jiuquan, Yunnan (4), Kunming, Chaozhou, Bijie; Fujian (6), Fuzhou, Chizhou; Liaocheng; Henan (15), Zhengzhou, Hebi, Weifang, Yichun, Ganzhou; Shanxi (4), Taiyuan, Yanji, Fuyang, Langfang; Linyi, Zaozhuang, Suzhou, Haikou, Hainan (4), Danzhou, Qionghai; Shiyan, Fuzhou, Deyang, Luzhou; Shuangyashan, Yabuli, Huangshi, Lishui, Zhangjiakou, Tianshui, Yibin; Inner Mongolia (2), Hohhot, Zhongwei, Shenghan; Anshan, Xiamen, Laiwu, Hunan (6), Changsha, Zhuzhou, Xiangtan, Xingtai, Shizuishan, Baoji; Shanghai, Binzhou, Xuancheng, Zhoushan, Wenzhou, Qingyang, Shangrao; Yingkou, Luoyang, Yiwu, Taizhou, Suzhou; Jiaozuo, Heilongjiang, Harbin, Suihua; Beijing, Chongqing, Rizhao, Zhejiang (12), Hangzhou, Baoding, Yancheng; Qingdao, Yantai, Gulin, Gansu, Lanzhou, Ji'an; Jinzhou, Yining, Taian, Wuzhong, Guyuan; Tianjin, Kaifeng, Tonghua, Heze, Jining, Songyuan, Shandong (17), Jinan, Lu'an, Lijiang, Lianyungang; Quanzhou, Siping, Anqing, Chuzhou, Wuzhou, Yueyang, Hengshui, Guangdong (5), Guangzhou; Zhumadian, Linfen, Jilin, Huangshan, Handan, Yan'an, Jiangxi (11), Nanchang, Huaian, Wuxi, Tieling, Nanyang, Suizhou, Huzhou, Cangzhou, Jiangsu (11), Nanjing, Yangzhou, Guizhou (2), Guiyang; Tieling, Dalian, Dandong, Pingdingshan, Shangri-la, Yingtan; Anyang, Dezhou, Yulin, Jiujiang, Leshan; Xinxiang, Chifeng, Ma'anshan; Sanmenxia, Yuncheng, Putian, Anhui (15), Hefei, Yiyang, Wuwei, Ningxia (5), Yinchuan, Suqian, Shaoguan; Xuchang, Xinyang, Urumqi, Zibo, Hubei (8), Wuhan, Beihai, Shaoxing, Hebei (11), Shijiazhuang, Qinhuangdao, Chengde, Tangshan; Jilin (6), Changchun, Jingdezhen, Xuzhou, Liaoning (7), Shenyang; Liuzhou, Qinghai, Xining; Shangqiu, Luohe, Zhangzhou, Jingmen, Jinhua, Hanzhong; Qiqihar, Tongling, Bengbu, Xiangfan, Quzhou, Jiaxing, Xinyu, Changzhou; Huainan, Huaibei, Pingxiang; Datong, Jixi, Wuhu.


CMMB subscriptions has been lagging since encrypted services were introduced at the end of 2008. Encryption per the CMMB standards can be provided by an integrated SoC such as the Siano SMS1186 (which provides a decoded CMMB output from an RF input) or by a smartcard-based implementation. PC cards for CMMB have MicroSD slots for decryption or an optional interface to a CAM. Handsets include HTC's Qulin (Windows Mobile 6.5, based on OMAP3), the Motorola A3300, and the Dopod 8388. An example of a CMMB USB receiver is the Siano YoTian® CMMB 2.0 receiver, which can enable any notebook or PC for CMMB reception. The USB device features a dual-chip implementation based on the Siano SMS8021 and SMS1185. The SMS1185 is an antenna-diversity receiver/decoder with an SDIO interface for secured broadcast and CA.

18.2.7 Interactivity in CMMB Services

Interactive services are now being launched on the CMMB network, including online games, stock information, an electronic wallet, a sports lottery, and others. The specifications for data transmission and interactivity are contained in CMMB standard GY/T 220.5 (2008): data broadcasting, covering data types and content package formats.

3G-based mobile TV in China is available for streaming from all three operators. China Telecom, a CDMA-based operator, has launched its mobile TV service in Hainan province based on technology provided by UTStarcom Inc. Its EV-DO network facilitates delivery of good-quality video streaming; the service uses a centralized 3G CDMA EV-DO video streaming platform provided by Nokia Siemens Networks. China Unicom, a WCDMA operator, now offers live streaming mobile TV as well as video-on-demand (clip view) services. The package of six CCTV channels was priced at RMB 38 a month in August 2009, when the services were launched; the higher basic subscription packages (RMB 586, 886, or 1686) all include streaming mobile TV as one of the services. China Unicom has also begun trials of CMMB terrestrial services.

As the technology used by China Mobile (TD-SCDMA) is the favored homegrown technology for 3G in China, handsets supporting the 3G standards used by China Telecom and China Unicom are comparatively few. Notwithstanding the three-year exclusive deal between CSMB and China Mobile, the other two operators have begun trials of their own in addition to their 3G services.

18.3 Japan

Japan has the distinction of having had the first 3G network-based (i-mode and FOMA) services, in place since 2001. The earliest mobile clip and animated picture


download service was introduced by NTT DoCoMo with the launch of i-motion in November 2001. The service featured 3GPP-encoded content (MPEG-4 video and AMR audio) in the 2001 implementation. Today the FOMA network is one of the leading networks, with over 50 million users.

Figure 18.3: NTT DoCoMo services. (Courtesy of NTT DoCoMo)

In 2003, Vodafone KK (now part of Softbank) introduced handsets with analog tuners that could receive NTSC-standard programs. The service was subsequently enhanced into a two-way interactive service with the launch of EZ-TV™ by KDDI. The EZ-TV service combined analog reception on the handset with interactive services such as downloading background music; programs on which such interactive services could be used were marked as "Chaku-Uta Full Content." Handsets that could be used with the service included the W32SA (Sanyo), W31CA (Casio), W31T (Toshiba), A5511T (Toshiba), and A5512CA (Casio). One handset version came with an FM transmitter, enabling downloaded songs to be played on FM radios (such as car radios). The EZ-TV service also included the capability, using the KDDI mobile network, to access the websites of programmers, download EPG information, and purchase media tracks online.

For broadcasting of TV programs, Japan has adopted the ISDB (Integrated Services Digital Broadcasting) standards: ISDB-S (for satellite transmissions), ISDB-T (for terrestrial transmissions), and ISDB-C (for transmissions on cable).


The ISDB standard provides for 13 segments in a single 6 MHz channel slot; each segment has approximately 430 kHz of usable bandwidth. The segmentation is done by selecting frequency blocks from the OFDM modulator. The first transmissions for mobile TV using ISDB-S began in October 2004 as the MobaHO consumer satellite broadcasting service. The service featured MPEG-4 version 1 simple profile video coding and AAC (LC) audio coding; the supported resolution was 320×240 (QVGA) at bit rates of 384 Kbps, with audio coded at 144 Kbps. The initial offering included 8 video and 30 audio channels (with the option of additional premium video channels). The system was protected by the MULTI2 cipher, and the coverage included all of Japan.

Mobile TV services using terrestrial transmission began in Japan using one of these segments (hence the name 1-Seg broadcasting) under the ISDB standard. ISDB-T services using 1-Seg broadcasting were launched in Japan in April 2006.
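The per-segment figure can be checked with a little arithmetic: ISDB-T divides the 6 MHz channel into 14 equal slices and transmits 13 of them as segments, keeping roughly one slice-width as a guard band, which yields the ~430 kHz quoted above. A minimal sketch:

```python
# ISDB-T segment arithmetic (Japanese 6 MHz channel raster).
CHANNEL_HZ = 6_000_000   # one terrestrial channel slot
DIVISIONS = 14           # channel split into 14 equal slices
SEGMENTS_TX = 13         # 13 slices carry segments; ~1 slice-width is guard

segment_hz = CHANNEL_HZ / DIVISIONS
occupied_hz = segment_hz * SEGMENTS_TX
print(f"segment width ~{segment_hz / 1e3:.1f} kHz")        # ~428.6 kHz
print(f"occupied bandwidth ~{occupied_hz / 1e6:.2f} MHz")  # ~5.57 MHz
```

A 1-Seg receiver tunes only the single center segment, which is why a handset front end needs to capture well under 500 kHz rather than the full channel.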

Figure 18.4: Mobile TV services in Japan.

The 1-Seg broadcasting services started in April 2006 as a free public broadcasting service in Tokyo, Osaka, and Nagoya, featuring 34 channels. As a pure broadcast service, it is independent of the mobile operators (all of whom can make arrangements with the broadcasters to add interactive features via their 3G networks). Over 30 million handsets with 1-Seg terrestrial reception capability had been sold by the end of 2009.


Data broadcasting is another feature of the service, supported by all broadcasters. The data broadcasts carry information such as weather, sports, anytime news, and electronic service guides, and use the Broadcast Markup Language (BML), which can be delivered over terrestrial broadcast or 3G networks. A number of handsets are available, such as the KDDI Sanyo W33SA and KDDI Hitachi W41A, that can receive data in BML format. The transmissions can also be received on PDAs and car-mountable receivers. The communications ministry in Japan has now approved four mobile operators (NTT DoCoMo, KDDI, Softbank Mobile, and eMobile) to provide LTE services using the 1.5 GHz spectrum.

Figure 18.5: 1-Seg broadcasting in Japan.

18.4 Germany

Germany has traditionally been a strong DAB market, with coverage of over 85% of the country and over 180 radio stations; it is now planning a nationwide relaunch of radio services with higher-power transmitters and new multiplexes. Mobile TV services were launched in Germany coinciding with the 2006 FIFA World Cup, based on two technologies: T-DMB and DVB-H.

The T-DMB service was launched by broadcast operator MFD in cooperation with mobile operator Debitel. The initial launch featured five cities (Berlin, Frankfurt, Munich, Cologne, and Stuttgart), the Samsung P900 DMB phone, and


four channels of TV. The subscription cost €10 per month, and the services were discontinued in 2008. The DVB-H license was won by operator Mobile 3.0, leaving out the other mobile carriers; as a result, those carriers started providing handsets with DVB-T capability rather than DVB-H, which was a subscription-based service and would have taken revenues off their networks. This finally led Mobile 3.0, which launched services in June 2008, to pull out within three months of launch. In the meantime, the 3G/HSDPA operators (O2 Germany, T-Mobile, E-Plus, and Vodafone Germany) have been offering mobile TV via their 3G networks.

18.5 Italy

3 Italia has the distinction of having launched the world's first commercial DVB-H network, on June 5, 2006. The service used a new DVB-H network created by the company in the UHF band across Italy, with coverage of 75% of the population (2500 towns and cities), and was branded Walk-TV™. The initial launch included nine channels. At launch, the mobile TV services were offered at €3 per day or €29 per month; alternatively, packaged voice calls (1 hour per day) plus mobile TV were offered at €49 per month. The initial channels included RAI 1, Canale 5, and Sky TG24. 3 Italia produces La3 Live, a channel specifically designed for mobile TV.

Figure 18.6: 3 Italia DVB-H network.


Table 18.3: 3 Italia DVB-H Service (Initial Launch).

Service Standard: Nonhierarchical DVB-H CBMS with DVB-IP Datacast specifications
Transmitter Network: The headend equipment for DVB-H was provided by Reti Radiotelevisive Digitali (RRD) and was composed of the service platform and DVB-H gap-fillers. The transmitter network involved 1000 transmitters of 2.5 kW to 5 kW. The entire network is DVB-H (nonhierarchical).
Transmission Parameters: H.264/AAC encoding followed by MPE-FEC ¾; QPSK modulation with ½ FEC; 8K carriers; timeslice period of 2 seconds
Handsets: LG U900, Samsung P910
Content Protection: Conditional access: Nagravision; digital rights management: Gemplus
Interactivity: FastESG from EXPWAY
Mobile Network: 3 Italia's 3G UMTS network, 6 million users in March 2006
Broadcasting Partner: Mediaset
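The transmission parameters in Table 18.3 fix the usable capacity of the multiplex. The sketch below applies the standard DVB-T bitrate calculation to the 8K, QPSK, rate-½ configuration and then applies the MPE-FEC ¾ link-layer overhead; the guard interval is not stated in the table, so ¼ is assumed here purely for illustration.

```python
# Hedged sketch: usable bitrate of an 8 MHz DVB-T/H channel in 8K mode with the
# Table 18.3 parameters. The guard interval (1/4) is an assumption, as it is
# not stated in the table.
DATA_CARRIERS = 6048     # payload carriers per OFDM symbol in 8K mode
TU_S = 896e-6            # useful symbol duration for an 8 MHz channel
GUARD = 1 / 4            # assumed guard-interval fraction
BITS_PER_CARRIER = 2     # QPSK
CODE_RATE = 1 / 2        # inner convolutional FEC (Table 18.3)
RS_RATE = 188 / 204      # outer Reed-Solomon overhead
MPE_FEC_RATE = 3 / 4     # DVB-H link-layer FEC (Table 18.3)

symbol_s = TU_S * (1 + GUARD)
ts_mbps = DATA_CARRIERS * BITS_PER_CARRIER * CODE_RATE * RS_RATE / symbol_s / 1e6
ip_mbps = ts_mbps * MPE_FEC_RATE
print(f"TS rate ~{ts_mbps:.2f} Mbps, IP payload ~{ip_mbps:.2f} Mbps")  # ~4.98, ~3.73
```

With the 2-second timeslice period, each service's bursts arrive at a multiple of the average channel rate, letting the receiver power down its tuner between bursts.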

Table 18.3 lists the main features of the service at the time of initial launch. The platform has undergone many changes since: it was upgraded in 2008 with a service delivery framework based on the OMA-BCAST DRM profile, and 3 Italia uses Harmonic's Rhozet® Carbon Coder video transcoding solution to make mobile TV available via the DVB-H network. The transmission was made hierarchical, with four terrestrial TV channels added by RRD (Reti Radiotelevisive Digitali), 3 Italia's partner in mobile TV services. A much broader range of handsets is now available, including the Nokia N96, Samsung SGH-P960, and ZTE N7100 HSDPA (a multistandard phone with DVB-H/T, DAB, and T-DMB). Many types of receivers, PMPs, and PNDs are used in addition to phones; an example is Quantum's PMP with OSF IPDC and OMA-BCAST personal mobile TV and GPS navigation. In June 2008, some channels were made free-to-air in order to garner greater subscriptions.

18.6 Netherlands

The Netherlands is one of the few European countries where analog TV transmissions were phased out by the end of 2006. DVB-T-based terrestrial transmissions are operated by KPN, even though it is primarily a telecom operator. 3G services in the Netherlands are offered by Vodafone, Orange, KPN, and T-Mobile. KPN introduced video telephony in October 2004 using its 3G UMTS network and Sony Ericsson's Z1010 phone. Mobile TV (i-mode) services are offered by KPN using its 3G network.


DVB-H trials were also carried out by KPN (along with Nokia, the broadcasting company Nozema Services, and Digitenne as partners). KPN's DVB-T platform was used for this purpose. The Nokia 7710 with a DVB-H receiver was used for the trials. The trial included an interactive channel called Portable Hollywood, which featured interactive online quizzes on Hollywood-based shows and celebrities. Commercial services were launched in June 2008 with 10 TV services at a subscription of €10 per month. The service had about 30,000 users by mid-2009. The application framework is OMA-BCAST (DRM profile).

18.7 The United States
The United States presents an interesting example for mobile TV, as it is the largest area offering services with MediaFLO technology as well as MobiTV, the streaming TV service. It is also notable for innovative launches of channels meant exclusively for mobiles, such as ROKtv™ and FreeBe TV™ (Free Television on Mobiles).

18.7.1 Mobile TV
Almost all the operators (except T-Mobile USA) now provide streaming of TV channels meant for mobiles (such as Verizon's V CAST, Sprint TV, and AT&T CV). In addition, MobiTV is a major aggregator of content and uses the networks of all major cellular operators to deliver MobiTV subscription and on-demand services. Alltel, AT&T, and Bell Mobility (Canada) use QuickPlay Media's OpenVideo™ platform for pay-per-use video downloads.

18.7.2 MediaFLO Terrestrial Mobile TV
Qualcomm owns six Economic Area Grouping (EAG) spectrum licenses in the lower 700 MHz frequency band (716–722 MHz, U.S. TV channel 55), which together constitute a nationwide footprint. It has been operating a network of transmitters using MediaFLO technology and UHF channel 55 to broadcast TV in over 60 markets since 2006. The services are distributed through two operators, AT&T and Verizon, which use the infrastructure provided by Qualcomm Inc. and carry the same channels, except for two channels that are operator-specific. The markets that were initially closed owing to the unavailability of spectrum during the digital transition have now been opened after the analog carriers were switched off in June 2009. Qualcomm has also won bids in the 700 MHz band (Block D), allowing it to go ahead with nationwide launches. This is a key development in 2009: making mobile TV based on MediaFLO technology available nationwide. DVB-H services have not been successful in the United States, with two operators (Modeo and HiWire) closing down their networks. This is because of the high price for independent spectrum and the cost of rolling out an entirely new transmitter network, as opposed to ATSC Mobile DTV, which can use the existing transmitters.

Figure 18.7: Mobile TV services in the United States.

18.7.3 Mobile TV Based on ATSC Mobile DTV Technologies
The technology that is most promising for nationwide availability of mobile TV on the go is ATSC Mobile DTV. It has the support of the Open Mobile Video Coalition (OMVC), with over 800 stations from the broadcasting community as members. Technical trials using ATSC Mobile DTV transmissions have validated the underlying technologies, and ATSC Mobile DTV was elevated to the status of "Candidate Standard" under the ATSC in July 2009 and to an approved standard in October 2009. This sets the ground for a nationwide rollout of networks using the standard. "Model stations" have been set up in Atlanta and Seattle (WATL and KONG) that can be used by vendors as well as users to test mobile reception products and validate them for mass-scale production. A seven-station trial has been set up in the Washington, D.C., area, with participating stations including ION Media's WPXW, WDCA (Fox), WUSA (CBS), and WRC (NBC), amongst others. Each station would initially transmit one or two mobile channels, along with an EPG and data. Individual stations in markets such as New York (ION Mobile DTV) and Raleigh, N.C., have begun transmissions of mobile DTV.

The business model of ATSC Mobile DTV is still open, with both free-to-air (advertisement-supported) and pay content. ATSC Mobile DTV uses OMA-BCAST for content protection and an application framework based on the OMA Rich Media Environment (OMA-RME). Although a very wide range of receiver devices, including mobile phones, standalone receivers, and USB devices, is expected to become available, the initial services have been demonstrated using LG Lotus and Voyager devices.

18.8 Hong Kong
Mobile TV services were launched by the operator CSL in Hong Kong in March 2006 on the 3G network. The service is based on Golden Dynamic's VOIR Portal and follows the 3GPP standard 3G-324M. This provides a circuit-switched connection delivering consistent-quality video without any delay in switching channels. CSL is a wholly owned subsidiary of Telstra and is being merged with New World PCS; the combined entity will be the largest mobile operator in Hong Kong. Subsequently, with the availability of 3G spectrum licenses, several operators started offering streaming mobile TV services in Hong Kong.
In the meantime, based on public consultations by the regulator OFTA, the government decided to auction three licenses for terrestrial mobile TV in early 2009, with a capacity of about 26 channels. This is based on one multiplex in the UHF band with 20 channels and two multiplexes of 1.5 MHz each in Band III with three channels each.
With the massive growth of CMMB in China, the broadcaster TVB started tests of CMMB technology in December 2008 with the intention of possibly deploying the technology later. The Phase I trials were conducted from the Temple Hill TV station with five channels, including one pay channel; the Beijing WiFlare company provided the CMMB equipment. Phase II trials were conducted in March 2009.

18.9 India
India is the second largest mobile market in the world, with a user base of 400 million in 2009. However, for various reasons the auctions of 3G spectrum were delayed to the end of 2009, with the result that the existing 2G spectrum is highly congested. Initially, licenses were issued to four operators per circle (equivalent to a state), with two GSM and two CDMA licensees each, in addition to the state-owned MTNL and BSNL (the third operator). Subsequently, operators with wireless local-loop services (e.g., Reliance and Tata Teleservices) were permitted to offer full mobility as CDMA operators, and in 2007 were also permitted to offer GSM services. Licenses were granted in 2007 on a first-come, first-served basis to additional operators, taking the number of operators up to 10 per circle. The state-owned operators, BSNL and MTNL, were permitted to offer 3G services ahead of the 3G spectrum auctions. All the major operators have now moved to EDGE or CDMA2000 and offer streaming TV channels; examples are Reliance Communications, BSNL, MTNL, Vodafone, Tata Teleservices, Idea Cellular, and Aircel.

Mobile TV and Multimedia Services Worldwide

18.9.1 3G Spectrum Auctions The Ministry of Communications and Information Technology is auctioning 3G spectrum with three slots of 5 MHz each being put on the block for launch of services in 2010. This is expected to be a turning point in India, as operators with 3G spectral resources will provide mobile TV services on a large scale.

18.9.2 Terrestrial Mobile TV in India
In the field of terrestrial broadcasting, the state-owned broadcaster Doordarshan is the sole operator, as digital terrestrial transmission (DTT) has not yet been opened up to private operators. Doordarshan has been operating a free-to-air DTT system in the four metros of Delhi, Mumbai, Kolkata, and Chennai with transmitters of 50 kW ERP, but there are hardly any customers for this service, because it is restricted to five Doordarshan channels. There is a plan to convert a large part of the terrestrial network to DVB-T by 2012. For mobile TV, the Telecommunications Regulatory Authority of India (TRAI) gave its recommendations on opening up the market to private operators by auctioning UHF spectrum on a technology-neutral basis in January 2008. However, the Ministry of Information and Broadcasting has not yet framed a licensing policy to enable the entry of private operators.

18.9.3 Doordarshan DVB-H Network
Doordarshan operates a DVB-H network in Delhi, with a single transmitter of 50 kW ERP operating from Akashvani Bhawan in central Delhi on a 100 m tower. The system was initially commissioned in 2007 with a capacity of 8 channels but has subsequently been upgraded to 16 channels. Table 18.4 provides the key parameters of the service. The single transmitter, which provides coverage up to 20 km (Figure 18.8), is being upgraded to an SFN with secondary transmitters. The National Capital Region (NCR) will require coverage using about 15 transmitter sites.

Table 18.4: Parameters of the DVB-H Service of Doordarshan, Delhi.
Transmitter Location: Akashwani Bhawan, Parliament St., New Delhi
Power: 50 kW EIRP, single transmitter, 100 m tower
Txn. Parameters: 8K, 1/8 GI, QPSK, 3/4 FEC, MPE-FEC 15%
Multiplex: Nonhierarchical, 16 TV services
Statistical Mux: 96–410 Kbps
Frequency: Channel 26 (510–518 MHz)
Head End: Nokia BSM 3.2
Video Encoding: H.264 AVC, Envivio 4Caster M2 encoders
Encryption: FTA (Free to Air)
ESG: CBMS/OMA-BCAST compliant
Handsets: Nokia N92, N96, N77, and other DVB-H handsets (IPv6)
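The capacity implied by the parameters in Table 18.4 can be cross-checked with the standard DVB-T useful-bitrate calculation (DVB-H shares the DVB-T physical layer). The sketch below (Python; function and parameter names are mine) reproduces the well-known figure of about 8.29 Mbit/s for 8K QPSK, rate-3/4 FEC, and a 1/8 guard interval in an 8 MHz channel — the pool that the statistical multiplexer then divides among the 16 services:

```python
# Net (useful) bitrate of a DVB-T/H multiplex from its transmission
# parameters, before MPE-FEC overhead. Defaults mirror Table 18.4:
# 8K carriers, QPSK, convolutional FEC 3/4, guard interval 1/8, 8 MHz.

def dvbt_net_bitrate(bits_per_carrier=2,  # QPSK=2, 16-QAM=4, 64-QAM=6
                     code_rate=3/4,       # inner (convolutional) code rate
                     guard=1/8,           # guard interval fraction
                     mode_8k=True,
                     bandwidth_mhz=8):
    T_us = (7 / 64) * (8 / bandwidth_mhz)       # elementary period (us)
    data_carriers = 6048 if mode_8k else 1512   # payload carriers per symbol
    fft_size = 8192 if mode_8k else 2048
    Tu_us = fft_size * T_us                     # useful symbol duration
    Ts_us = Tu_us * (1 + guard)                 # total symbol duration
    bits_per_symbol = data_carriers * bits_per_carrier
    # Inner convolutional code, then outer Reed-Solomon (188/204) code.
    useful_bits = bits_per_symbol * code_rate * (188 / 204)
    return useful_bits / Ts_us                  # Mbit/s

if __name__ == "__main__":
    rate = dvbt_net_bitrate()
    print(f"Multiplex net bitrate: {rate:.3f} Mbit/s")
    print(f"After 15% MPE-FEC overhead: {rate * 0.85:.3f} Mbit/s")
```

Treating the table's "MPE-FEC 15%" as a simple 15% overhead deduction leaves roughly 7 Mbit/s, or about 440 Kbps average per service across 16 services — consistent with the 96–410 Kbps statistical-multiplexing range quoted in the table.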

Figure 18.8: DVB-H transmissions in the National Capital Region, Delhi.

18.9.4 Expansion of DVB-H Networks in India
The TRAI has recommended the auction of UHF spectrum for licensing of mobile TV; the state broadcaster Doordarshan has made a plan for a countrywide rollout of mobile TV using DVB-H technology with the participation of private players. The plan envisages the use of Doordarshan's terrestrial transmitter infrastructure, with the encoders, IPE, and other headend equipment provided by private operators, to launch DVB-H and DVB-T multiplexes from over 700 sites in the country. The expansion is planned in five phases:
Phase I: Four metro cities (Delhi, Chennai, Kolkata, and Mumbai)
Phase II: Thirteen major cities with a population of 1 million or more
Phase III: 96 other major cities
Phase IV: 100 new sites, including installation of DTT transmitters and DVB-H
Phase V: 400 sites with DTT and low-power transmitters (LPTs)
Although this network will be primarily a Doordarshan network that includes content from private operators' channels, other mobile networks will be licensed based on auction of spectrum at a later date. It is envisaged that by the end of 2010, mobile TV based on DVB-H and 3G will be available in all major cities in India.

18.10 Summary
The industry has matured considerably through its experiences in the initial years of mobile TV since its launch in 2005–2006. Operators now see great advantage in building a large base of available handsets in the market through free-to-air services, with pay services to follow later. Terrestrial broadcasting is the key to a large base of mobile TV users, and the availability of terrestrially broadcast mobile TV services in different markets depends on local licensing and spectrum availability. Spectrum for mobile TV is a major issue for a service launch and has a major cost implication for the services delivered. Despite all the limitations, the availability of ATSC Mobile DTV, CMMB, and MediaFLO based services through 2010 is set to be a major turning point for the industry.

Before We Close: Some FAQs
1. Is the ISDB-T mobile TV service still free in Japan? Is there any pay mobile TV service in Japan?
Yes, 1-Seg services remain free-to-air, even though they are now allowed to carry content that is different from broadcast TV. Paid mobile content is available through 3G operators such as Softbank and Vodafone.
2. Which mobile TV service is available for automotive markets?
Automotive markets in Europe use both DVB-T and DVB-H services. In the United States, the FLO TV service is available for car receivers through Audiovox. FLO TV will also become available in additional markets such as Japan.
3. Are MMS services interoperable worldwide?
MMS services now operate across most platforms, and MMS switching platforms increasingly support interoperability. MMS services are interoperable in the Americas and Europe. However, true global interoperability does not exist today.
4. Does the FCC regulate mobile TV content?
The FCC's main plank of regulation is net neutrality, implying that carriers cannot block access to any type of content, including video or VoIP calls. In addition, mobile games are required to carry ratings from the Entertainment Software Rating Board (ESRB). The FCC is now working toward a common rating system for games, TV, and other content.

CHAPTER 19

Content and Revenue Models for Mobile TV

Mobile TV is not "TV on mobile" nor is it "mobile phone ... on TV." It is a new service, content, and programming format opportunity.
(Communities Dominate Brands blog, Dec 1, 2007: http://communities-dominate.blogs.com/brands/2007/12/what-is-mobile.html)

Do you know why a great majority of users, when visiting mobile social networking websites, visit http://m.itsmy.com? Or how its social browser game itsmy™ let Graffity.com tag more than a million websites? Or what type of mobile TV shows it runs (numbering over a thousand a day, towering above some of the giants of the traditional Internet)? The answers lie in seeing mobiles as a totally different device, in not trying to clone applications, and in viewing everything as a mobile user would. We will find these answers in this chapter.

Figure 19.1: m.itsmy.com: Presenting a new mobile experience. (Courtesy of itsmy.com)

© 2010 Elsevier, Inc. All rights reserved. DOI: 10.1016/B978-0-240-81287-8.00019-9

The initial years of mobile TV since its launch in 2005 have brought many surprises to the surface, to the chagrin of content producers and mobile TV operators. Many pay TV services launched with considerable fanfare did not do well, while their free-to-air counterparts were a roaring success. This was witnessed in virtually every market. In Japan, the success of free 1-Seg broadcasting led MBC Co. (which operates the satellite-based pay service Moba-Ho®) to reportedly consider closure. The same was the case in Korea, where subscriber growth for the satellite-delivered S-DMB pay service stalled at a dismal 1.8 million, while its counterpart T-DMB rose to well over 18 million users. The advertising model, where used, has been subject to regulatory restrictions. Mobile TV has proved to be a challenge for service providers as well as content owners. In this chapter, we discuss the challenges and revenue elements of mobile TV. It turns out that the type of content that can be delivered depends on the type of mobile TV platform that is implemented; broadcast TV is a completely different animal from unicast TV delivered via a 3G network. We will also review mobile TV platforms and how they can be optimized for the end-user experience. In this chapter, we first look at the types of content that, when accompanying live TV or VoD, can create a winning experience. Subsequently, we look at the platforms for its delivery, tools for creation, and revenue models.

19.1 Introduction
19.1.1 Why Is Mobile TV Important as a Medium?
The mobile TV space presents an unparalleled medium at the disposal of producers, advertisers, and operators. It is highly personal and omnipresent. It may sound like a cliché, but mobile TV is slated to be the killer application of the twenty-first century, and the pace of developments in the year leading up to 2010 seems to confirm this belief. Why?
First, the reach of mobile broadcast networks is set to increase dramatically. With ATSC Mobile DTV broadcasting commencing in a major way in 2010 and MediaFLO expanding to new markets, almost every mobile will be available with a tuner to receive over-the-air programming, as has been the case in Korea, Japan, and China. SARFT has projected 500 million users in China by 2011, and it is estimated that India will have over 200 million users once mobile broadcasting gets underway. With the advancement of networks and lowering of costs, more and more people are potential users of multimedia and mobile TV on their handsets.
Second, the applications are now catching up to user expectations. People in today's world are on the move for business or pleasure and need to use whatever time they have to stay on top of news, stock quotes, and weather information, and to enjoy sports, music, and videos. But the applications presented to them rarely fit the bill, except in the markets of Japan and Korea. Why are these markets so successful while others are not? The answer is the same as why http://www.m.itsmy.com presents a superior user experience.

19.1.2 Fitting Programs to the Mobile Screen
In the initial years of the mobile web, it was not only the applications that did not fit the bill; neither did the graphics, presentation tools, or TV content, all designed for larger screens where space is not at a premium. Mobile TV is characterized by small screens of 2–4 inches, limited viewing times (due to technology), and the short periods users can snatch for viewing. The content has to be of immediate interest or compelling. It has to be easy to use. It has to be created specially for the mobile environment. The essence of mobile content was captured quite aptly by the New York Times Magazine, which described MTV's approach to mobile TV in the headline of the article itself: "The Shorter, Faster, Cruder, Tinier TV Show."1 The opportunity of mobile TV is important for content providers as well as operators. For content providers, it offers a chance to capture audiences beyond prime time, to target individual- or group-specific advertising, and to generate orders using interactivity. The small screen of the mobile is all the space the application developers have; yet everything, such as widgets, scrolls, messages, and animations, must fit the screen and still be visible and usable. It is evident that content adaptation strategies are necessary, based on the requirement to support mobile phones characterized by:
● Small screens
● Mobile phone application clients
● Content transducers
● New content produced for mobile TV
● Need for short-form content

19.2 Mobile TV Content
The capability to deliver live TV, in itself a great achievement, is being overshadowed by the fact that mobile users want greater control over this content than simply watching TV, even though some of the shows may be specially produced for the mobile. Users go for programs uploaded by other users and broadcast on a channel for their amateur appeal, liveliness, and real-life sitcoms; one does not need to look beyond YouTube to understand this. In addition, they use downloads to store content in mobile handsets, network on websites, and generally use the medium for fun and gaming. VoD content is delivered particularly well in unicast networks such as those based on 3G (HSDPA/EV-DO) technologies. This is well reflected in the types of services offered by the 3G and mobile broadcast TV companies, but broadcast networks need not be disadvantaged, provided they use on-screen interactivity and a clickable link that uses a mobile network for the reverse path.

1. The New York Times Magazine, May 28, 2006, by Randy Kennedy. (http://www.nytimes.com/2006/05/28/magazine/28mtu.html)

19.2.1 What Type of Mobile TV–Specific Content Works for Users?
Although mobile TV is still in its early years, its usage patterns and the results of early trials suggest that certain specific types of content are best suited to the small screen, and that short-duration formats can hold users' attention.

Figure 19.2: Mobile content options.

How do we know what this content is? Well, after observing that mobile TV usage can jump over 300% during a baseball match, knowing that at least one-third of iPhone users have downloaded a game, or that over $5 billion of betting was done via mobile phones in 2009, a picture cannot fail to emerge. This picture is valid for today; a different picture may emerge tomorrow, as there is no field as dynamic as mobile media. Content for mobile TV can in general be of the following categories:
1. Real-time broadcast/multicast to the mobile terminal
• Live TV and mobile specialty channels
• Sports
• Events such as concerts, speeches, ceremonies, natural calamities
• Live music
• Information (news, traffic)
• Webcams
• Multiplayer gaming
• Emergency messages
2. Non-real-time content
• Video on demand (news, weather, cartoons, headlines, stock news, etc.)
• Music on demand
• Webcasting (news and events)
• Web browsing (information, shopping)
• Personalized content
• Video games
Typically, a mobile channel lineup would have a fair representation of all genres, for live channels as well as VoD. As an example, the $30 AT&T Mobile TV package includes CV mobile video along with CBS Mobile, CNN Mobile Live, Comedy Central, ESPN Mobile, Fox Mobile, MTV, NBC 2Go, NBC News 2Go, Nickelodeon, and Sony Crackle. These channels are specially produced for mobiles and contain time-shifted episodes as well as episodes developed especially for mobiles.

19.2.2 A Place Under the Sun for Content Aggregators
So how do operators, who are not in content production, deliver such content? Operators recognized soon after live TV carriage became a reality that content for their networks would need to be targeted, created, and sourced separately from the regular TV channels. For this reason, they operate in a complete ecosystem with content aggregators, who in turn have access to producers of specialized content.

Figure 19.3: Content model flow for mobile TV.

Content generation is now a specialized activity left to content producers. Mobile operators normally focus on aggregating the content and presenting it on well-tested handsets to provide an integrated experience. A lot of content, as well as mobile-specific applications and games, is also available in music stores and commerce platforms. Mobile operators focus on integrating the programs, games, animations, and downloads in a rich media environment. MobiTV is an example of a very successful content aggregator. However, getting the content right is only one part of the picture; it also needs to be presented with ease of use and intuitive access. The success of devices such as the iPhone 3G and services such as NTT DoCoMo's i-mode has been primarily due to user appeal and ease of access, in addition to the content.

19.2.3 Presentation of Mobile Content
The presentation of content uses the underlying software available on the phones, such as Java, Adobe Flash, or BREW, and applications are written to use these presentation capabilities to full advantage. This environment is similar to that of regular TVs, but there are major differences in the manner in which such content is created and displayed, owing to the limited resources and small screen sizes of mobile devices. The basic premise to be followed is that the mobile is now the center of online access, collaborative applications, and messaging activity, weaning users away from the pure Internet. A mobile is much more personal, available everywhere, and intuitive to use.

Figure 19.4: Mobile as center of content.

19.2.4 Production and Postproduction of Mobile Content
TV channels produced for standard definition or high definition are not ideally suited for delivery on small screens. HDTV channels, for example, cover the entire field in sports, but such content displayed directly on a mobile screen would be too minuscule to meaningfully watch any action. Moreover, the background in such a frame would cover sharply varying environments (such as cheering crowds), which would be a problem in a small-bandwidth environment of, say, 256 Kbps. The ideal way to produce for mobile delivery is to have a separate camera that focuses on the action and keeps the outfield, rather than the crowds, as the background. If this is not possible and a regular TV feed is to be repurposed for mobiles, the best option is to use a mobile content processor. Such a processor provides dynamic reframing, noise reduction, and image stabilization to ensure that the images are suitable for mobile networks. An example of such a processor is the Helios® mobile content processor from Snell & Wilcox.

19.2.5 Profiling and Transcoding Content for Mobile Devices
The sheer number of mobile devices requires that content be profiled (and possibly delivered) per the media profiles of individual devices. Storage of content also needs to include its usage rules, permissions, and copyrights. Content transcoding for delivery over multiple networks is also a requirement. An example of a platform that meets these needs is the Vantrix® Media Profiler from Vantrix Corporation.
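As a sketch of what the device-profiling step involves, the fragment below (Python; the device records, capability values, and helper names are hypothetical illustrations, not Vantrix's API) picks a codec, frame size, and bitrate cap for a target handset from a small capability table:

```python
# Illustrative device-profiling sketch: choose transcode settings so the
# output fits a handset's decoder and screen. All capability values here
# are made-up placeholders, not vendor-published specifications.

from dataclasses import dataclass

@dataclass
class DeviceProfile:
    model: str
    max_width: int
    max_height: int
    codecs: tuple          # codecs the handset can decode, in preference order
    max_bitrate_kbps: int

# Tiny in-memory capability database; real platforms draw on repositories
# such as UAProf or WURFL rather than hand-maintained tables.
DEVICES = {
    "handset-a": DeviceProfile("handset-a", 320, 240, ("h264", "h263"), 512),
    "handset-b": DeviceProfile("handset-b", 240, 180, ("h263",), 256),
}

def pick_transcode_settings(model: str, src_width: int, src_height: int):
    """Return (codec, width, height, bitrate_kbps) for a target handset."""
    dev = DEVICES[model]
    codec = dev.codecs[0]  # best codec the device supports
    # Downscale preserving aspect ratio so the frame fits the screen.
    scale = min(dev.max_width / src_width, dev.max_height / src_height, 1.0)
    width = int(src_width * scale) // 2 * 2    # keep dimensions even
    height = int(src_height * scale) // 2 * 2
    return codec, width, height, dev.max_bitrate_kbps

print(pick_transcode_settings("handset-a", 720, 576))  # PAL SD source
```

A production profiler would additionally carry the usage rules, permissions, and copyright metadata alongside each profile, as the text notes, so that the same lookup governs both how content is transcoded and whether it may be delivered at all.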

19.2.6 User-Created Content (UCC)
User-created content has a special appeal. We need to recognize that today 65% of American online teens create content and 40% have a personal web page or blog. YouTube represents the most visible user-created content; its content is now also being organized as "channels," although in the different context of each user having his or her content organized as a "channel." In addition, there are broadcasters who run a UCC channel. As an example, the BBC has launched the "Be on TV" channel, where viewers can upload their own video content. Similarly, the United Kingdom's mobile media company "3" supports user-uploaded content, and users get paid when their content is downloaded. There are other channels for 3G users, such as http://blip.tv and http://yamgo.tv, that support user-created content. 3G News Mobile Studio (a product from CREATECNA) is a software application that receives live video and audio content from a 3G handset, stores it on a server, and retransmits it in standard TV formats in real time. This enables customer-generated images to appear on TV in live shows, allows a news content feed to be delivered through a mobile 3G phone, and lets any person act as a journalist.

Figure 19.5: Mobile TV with user-created content (http://blip.tv).

19.2.7 Video on Demand
Video on demand in mobile networks has emerged as one of the most important classes of content. It is difficult to forget how a simple application such as the download of ringtones became a roaring success in many markets. Video on demand provides users with streamed or downloaded content of their choice that can be watched at will. In most implementations, video-on-demand services consist of a selection of content from favorite TV channels, sports, and premium content. As an example, AT&T offers the CV service in the United States. The service contains sports content including ESPN, Fox Sports, Fuel, and Speed; local weather forecasts in 100 cities; cartoons; content from Fox and HBO; as well as news from CNN, NBC, and Fox News on an on-demand basis. This is in addition to the regular FLO-based AT&T Mobile TV service. There are many other examples as well, such as Yamgo.tv.

Figure 19.6: Mobile video on demand (CV from AT&T).

19.2.8 Adult Services
As mobile TV is much more personal than home TV and can be used at any location, it is no surprise that adult content has been one of the successful content types in the initial days of the mobile TV and streaming-video industry. According to a report from Juniper Research, the market for adult content will reach $4.9 billion by 2013. As in the pay-TV industry, delivery of (or access to) adult content is subject to country-specific laws, ensuring that the content is not delivered to minors, among other requirements. The most important technology for an operator in this field is age verification. Bodies such as the Mobile Adult Congress (http://www.maccongress.com) have been reviewing the issues involved in delivering such content while complying with legal and regulatory guidelines. Voluntary codes of conduct for mobile operators have also been issued in various countries, for example in the United Kingdom by the Mobile Entertainment Forum (MEF) (http://www.m-e-f.org) and the Independent Mobile Classification Body (IMCB), and by mobile operators' associations in Italy, Germany, and elsewhere. Australia has banned X-rated content from being delivered to mobiles. The FCC has also asked CTIA USA to ensure that children are shielded from adult content. On the whole, adult content is seen as a strong driver for mobile TV and video in countries where it is permitted, and operators need to deploy age-verification and DRM technologies to ensure that the content is delivered only to the intended audience and cannot be proliferated by copying or forwarding.

19.2.9 Video Clips
Over 200 million clips were downloaded from a single network, Verizon V CAST, in 2009. Multicasting or on-demand transmission of video clips has been launched by many carriers to leverage the capabilities of their data networks. Verizon's V CAST service, for example, involved transmission of 3GPP2 video at QCIF resolution and 15 frames per second, offered on its EV-DO network. V CAST Music can be downloaded as WMA files. Each clip can be bought for $1 to $4, excluding the monthly data charge of $15. Similarly, AT&T's CV service provides on-demand streaming or downloadable content. Vodafone provides the Vodafone Live! Music service for clip downloads in its networks in Europe and other countries.

19.3 Interactive Services
Interactive services in a mobile environment provide a complete user experience, created with graphics, charting, and visual animation tools that take raw data and present it in an attractive manner. Interactive services permit users to do a variety of interesting things during a program:
● View headlines, weather, or traffic information through "widgets" that are generated in the mobile handset and direct users to online carousels (magazines) being transmitted.
● Place online orders by pressing a button when a message or a widget is present on the screen.
● Order ringtones, wallpapers, or other products associated with a program (e.g., a music video).





Letting the users surf the net and access weather websites Providing weather data in an interactive carousel that can be viewed by a menu option or Widget Providing a dedicated weather service application based on Java or Flashcast that is customized for a city or user Streaming data from dedicated sites on weather (such as rTV Realone® forecasts)

In general, a customized service would yield greater revenues than a monthly subscription channel, although a free channel may have a larger user base. Weather forecasts on mobile are now available from GoTV networks Yahoo, Google, and the Weather Channel, amongst others.
