IP and File Based Technology


IP (Internet Protocol & MPEG compliant transport streams) and File Based Technology

Welcome to the first steps in understanding broadcasting and its inner workings. Broadcast facilities are now undergoing an enormous transition from current digital (SDI) infrastructures to all “IP-based” (network) facilities.


This shifts the whole paradigm of media delivery to the end user, facility design and support practices.


Broadcasters have been moving to file-based workflows for more than 10 years. It is an end-to-end workflow, from ingest to playout, in which digital media is contained in files.


Digital videotape records media digitally, but not as a file. That is why the primary challenges in managing a true file-based infrastructure are people and process.


Using a file-based workflow allows broadcasters to increase productivity and gain flexibility in creating content for multichannel distribution with the same number of staff, or potentially fewer.


The main operational benefit of file-based workflows is the collaboration they enable between all users and the speed they give them in completing their assigned tasks.


Users range from journalists and graphics operators to producers and directors. This cross-functional access can be disconcerting, as the workflow now incorporates both production and business applications.


Security concerns about content access are addressed by access management rules and network security.


Shared storage, along with the availability of both high-resolution and proxy files, increases the operational flexibility of the applications.


Storage management helps determine the location of assets and their version control. As users access, edit, and create content, a defined approval process alerts those responsible for review and approval.


In the digital domain, broadcasters are now able to build content for linear and nonlinear channels simultaneously.


Thus, the content will be distributed across a variety of networks (broadcast, cable, Internet, and mobile) to a wider variety of devices (TV, desktop computer, tablet, and smartphone).


Incorporating distribution needs into the workflow allows broadcasters to produce content in multiple formats while reducing or eliminating the high cost of repurposing content.   


Broadcasters decide how they will manage their move to a file-based workflow depending on their business needs and goals.


The key is understanding current operations and assessing roles, responsibilities, and workflows to identify areas for improvement.

IP and the broadcast industry:

Today’s rapid shift from high definition to 4K UHD, like the trends to come, will demand that broadcasters find another way of putting together essential production applications, networks and systems.


The new infrastructure must be easy to expand and upgrade, as well as extremely cost-effective to deploy. This means it will need to be based on Internet Protocol (IP).


As a part of this updating process, dedicated application transport mechanisms will need to be replaced by a converged IP-based infrastructure. 


Thus, broadcast, media and entertainment production will shift from dedicated equipment to a virtual environment. In doing so, the broadcast industry will finally gain the benefits of Cloud-based computing, which increases productivity and agility while dramatically lessening capital expenditure.


Three key drivers have made this transition inevitable:


  • Ever-increasing video bit rate requirements. Ultra HD bit rates far exceed the capacity of current SDI connections and cannot be carried by the existing infrastructure (see the bit-rate sketch after this list). 
  • New standards that break down reliance on proprietary (and expensive) techniques and platforms. Compliance with industry standards and recommended practices allows all parts of a system developed by different vendors to interoperate. 
  • 10 Gigabit Ethernet maturity. An ecosystem of compatible equipment exists, and Ethernet is already integrated into a wide range of devices, widely available at commodity pricing and moving broadcast into the Cloud. Further, IP-based networking at 40 Gigabits per second (Gbps) and 100 Gbps is already available, ensuring a long future for this technology.
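
As a rough illustration of the first driver, the sketch below compares uncompressed video bit rates with SDI link capacity. The resolutions, bit depth and link rates are generic assumptions used purely to show the arithmetic, not figures taken from this material.

    # Rough, illustrative comparison of uncompressed video bit rates vs. SDI capacity (assumed figures).

    def video_bit_rate_gbps(width, height, fps, bits_per_sample=10, samples_per_pixel=2):
        # Approximate active-picture bit rate in Gbit/s for 10-bit 4:2:2 video (2 samples per pixel).
        return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

    hd_1080p60  = video_bit_rate_gbps(1920, 1080, 60)   # ~2.5 Gbit/s -> fits on a 3G-SDI link
    uhd_2160p60 = video_bit_rate_gbps(3840, 2160, 60)   # ~10 Gbit/s  -> far beyond 3G-SDI

    print(f"1080p60 ~ {hd_1080p60:.1f} Gbit/s, 2160p60 ~ {uhd_2160p60:.1f} Gbit/s")
    # 3G-SDI carries roughly 3 Gbit/s, so uncompressed UHD needs 12G-SDI, quad links or an IP network.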

Understanding IP networks


UNDERSTANDING IP NETWORKS

One of IP technology's main attractions is its ability to handle any video and audio technology.

 IP brings with it the so-called “IP LAN/WAN convergence,” which makes it easier to achieve savings and increased flexibility by sharing equipment, studios, control rooms and production staff across locations.   


Many broadcasters misguidedly turn to poorly informed IP switch vendors for advice on accommodating the specific needs of broadcasting.

The initial focus should instead be on getting the network architecture and the control right, via one of the three main architecture layouts:

  • centralized star network; 
  • spine-leaf architecture; 
  • dual star.

CENTRALIZED STAR NETWORK

The tendency for most broadcasters is to adopt what is known as a “centralized star network,” with all connections transiting through a large IP router that can be located in the master control room.


It is unsuitable for treating remote locations as extensions of the main location.  


The main disadvantage is that it requires expensive fiber connections to every single device, while the cost per port for low-bandwidth devices is high.


Scalability is another problem: capacity is often reached sooner than anticipated, requiring replacement of the central router.


Moreover, lack of aggregation means redundancy needs to be handled by edge devices. 

SPINE LEAF

The second model is “spine-leaf architecture,” involving two or more routers at the core (spine) and other smaller routers at the edge (leaf).   


This reduces the number of connections going directly to the main routers, leading to simplified fiber management. It requires fewer ports on the central router(s) and delivers a more effective cost per port, especially for low-bandwidth devices.


It provides optimal flexibility and scalability, since capacity can be added over time.
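
A back-of-the-envelope sketch of why spine-leaf eases the core port budget: the device counts and port figures below are invented assumptions, purely to show the arithmetic rather than to describe any real facility.

    # Illustrative port arithmetic: centralized star vs. spine-leaf (all numbers are assumptions).
    import math

    devices = 300          # assumed number of low-bandwidth edge devices
    leaf_ports = 48        # assumed edge ports per leaf switch
    uplinks_per_leaf = 4   # assumed uplinks from each leaf into the spine layer

    # Centralized star: every device needs its own port (and fibre run) on the core router.
    star_core_ports = devices

    # Spine-leaf: devices terminate on leaves; the spine only sees leaf uplinks.
    leaves = math.ceil(devices / leaf_ports)
    spine_core_ports = leaves * uplinks_per_leaf

    print(f"star core ports: {star_core_ports}, spine-leaf core ports: {spine_core_ports}")
    # With these assumptions the core needs ~28 ports instead of 300, and capacity grows
    # by adding leaf switches rather than replacing the central router.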

DUAL STAR

The “dual star” architecture involves the use of two spine routers, but each leaf in the network is only connected to one spine.   


This is not a flexible approach when it comes to load distribution and optimization of total network capacity.   


It places special requirements on end devices that need redundant connections as the network evolves.


The proponents of this architecture usually favor automatic protocol-based routing rather than software-defined networking.    


Only a true spine-leaf architecture enables broadcasters to get the most out of their IP infrastructure investment in their facilities. 

AUTOMATIC ROUTING

Automatic routing means leaving the decision about how to transport individual media flows to the network, rather than the operator.   


It is used widely in IP networks across the world, but may not be fast enough for live production and can run into trouble with network loops. 


This can only be fixed through higher operational complexity.   


On top of this, there are also concerns around protection and security, as streams to destinations are not explicitly controlled.   

SOFTWARE-DEFINED NETWORK (SDN) ROUTING

SDN puts routing control in the hands of a centralized control layer, creating truly flexible, scalable and high-performance IP media networks.


The management and orchestration software holds a complete view of the available equipment, the network infrastructure and the services.   


This enables it to make intelligent decisions on routing and controlling flows.   


This has many advantages: 

  1. It guarantees a higher level of performance compared with automatic routing, since the software is in control of every media flow. 
  2. It is beneficial from a protection and security perspective, fully controlling which destination is allowed to receive which multicast. 
  3. With the right software, it can easily handle any network architecture.


However, broadcasters should bear in mind that a successful IP infrastructure is built around the “ground-up design” of the infrastructure. Broadcasters should architect the most effective network using a true spine-leaf model, and control the elements within it using SDN routing.
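
To make the SDN idea concrete, here is a minimal, hypothetical sketch of centralized flow control: a controller that knows every sender and only programs a path when policy explicitly allows that destination to receive that multicast. The class, method and device names are invented for illustration and do not correspond to any real SDN controller API.

    # Minimal, hypothetical sketch of SDN-style media routing control (not a real controller API).

    class MediaSdnController:
        def __init__(self):
            self.flows = {}        # multicast group -> sending device
            self.allowed = set()   # (multicast group, destination) pairs permitted by policy

        def register_flow(self, group, sender):
            # The controller learns which sender owns each multicast group.
            self.flows[group] = sender

        def permit(self, group, destination):
            # Operator policy: this destination may receive this multicast.
            self.allowed.add((group, destination))

        def route(self, group, destination):
            # Only program the path if the flow exists and policy allows it.
            if group not in self.flows:
                return "rejected: unknown flow"
            if (group, destination) not in self.allowed:
                return "rejected: destination not authorised for this multicast"
            return f"path programmed: {self.flows[group]} -> {destination} via {group}"

    ctl = MediaSdnController()
    ctl.register_flow("239.1.1.1", "CAM-01")
    ctl.permit("239.1.1.1", "VISION-MIXER")
    print(ctl.route("239.1.1.1", "VISION-MIXER"))   # path programmed
    print(ctl.route("239.1.1.1", "EDIT-SUITE-3"))   # rejected by policy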

Understanding IP and File Based Technology


CURRENT IP MEDIA PRACTICES

Professional media environments provide multiple means for moving video (compressed-files or streaming media) across the campus, between cities, etc.   


The audio and video are transported in compressed formats of various kinds. Some are highly compressed for Internet delivery and others are mildly compressed.


However, there are few means to transport uncompressed, high-bit rate (HBR) content end-to-end over the network.  


We’ll focus on the latest applications for high-bandwidth, real-time, live broadcast production using IP over a media-centric network.

HIGH BIT RATE–UNCOMPRESSED VIDEO TRANSPORT

“Internet Protocol” (IP) network technologies are already in use at many media facilities for live, real-time production activities.


Due to recent SMPTE standards and initiatives, broadcast production is beginning to use new capabilities for transport and manipulation of high bit rate, uncompressed (UC) signals over an IP network topology.


The approaches are applicable to file-based workflows, data migration, storage and archive, automation and facility command-and-control. 

RULES OF ENGAGEMENT—IT CHANGES EVERYTHING

IP is a set of rules (“protocols”) that governs the format of data sent over the internet.


For broadcast or studio facilities, “internet” is more appropriately called the “network.”   


Designing and building an IP facility requires a renewed technological approach to IT networking compared with that of traditional SDI facilities.


One key target in this will be to keep audio and video (over IP networks) behaving precisely the way they do in an SDI world, without its burdens or constraints.

WHERE ARE THE DIFFERENCES?

The isochronous (time-bounded) nature of SDI, born out of 1980s standards, is straightforward for live, continuous video inside the studio and for long-distance transport.


Frame-accurate (undisturbed) transitions with compressed video are generally accomplished using peripheral equipment. It receives the compressed video, then decompresses it to a “baseband” (SDI) form, where the transitions from the A-source to the B-source are completed.


These processes take time, and it is impractical for most live applications to cleanly switch sources and maintain timing and synchronization.


Program videos on YouTube or Netflix leverage receiver buffering techniques or use adaptive bit rate (ABR) streaming to keep their “linear delivery” as seamless as possible to viewers.


For professional media IP systems, real-time live signals on a network are transported over isolated, secondary or virtual networks (VLANs).


For live, real-time HBR signal transport, new network topology and timing rules (protocols) must be adhered to. They are defined in the new standards SMPTE ST 2110 and/or ST 2022.

DIFFERING DATA TRAFFIC STRUCTURES

Packet structure and formatting differ for each data type's intended use.


Different data traffic types are not mixed on the same VLAN network.   


Real-time transport networks are conditioned to carry HBR traffic.   


Packets from senders (transmitters) are constructed based on IETF RFCs such as the Real-time Transport Protocol (RTP) and the Session Description Protocol (SDP), and on supporting IEEE and SMPTE standards.
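
For orientation, the sketch below packs the fixed 12-byte RTP header defined in IETF RFC 3550. The payload type, sequence number, timestamp and SSRC values are arbitrary example numbers; real ST 2110 senders layer further constraints (payload formats, timing) on top of this basic header.

    # Packing the fixed 12-byte RTP header from IETF RFC 3550 (example field values only).
    import struct

    def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
        version, padding, extension, csrc_count = 2, 0, 0, 0
        byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
        byte1 = (marker << 7) | (payload_type & 0x7F)
        # '!' = network byte order: two single bytes, 16-bit sequence, 32-bit timestamp, 32-bit SSRC.
        return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)

    header = rtp_header(payload_type=96, seq=1, timestamp=90000, ssrc=0x12345678)
    print(len(header), header.hex())   # 12 8060000100015f9012345678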


The other conditions identified in the SMPTE ST 2110 or ST 2022 standards are: timing, synchronization, latency and flow control.   


File-based and streaming media is intended to, or can, run at variable data rates, while real-time media networks, unlike previous IT-like network designs, must run continuously at unwavering data bit rates.


In file-based transport, data from senders need not arrive at receiver input(s) in an isochronous manner.


In streaming media delivery, occasional interruptions or “buffering” are expected, while live, real-time video must be synchronously time-aligned.


That is why system designs now include distinct considerations for real-time and non-real-time signal flows.
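
A rough sketch of how different the traffic patterns are: a file transfer can burst and pause, but an uncompressed live flow must emit packets at an essentially constant cadence. The bit rate and payload size below are assumed, illustrative figures (reusing the earlier rough 1080p60 number), not values taken from ST 2110-21.

    # Illustrative pacing arithmetic for a constant-rate live flow (assumed figures only).

    bit_rate_bps  = 2.5e9    # assumed uncompressed ~1080p60 10-bit 4:2:2 active video
    payload_bytes = 1200     # assumed RTP payload size per packet

    packets_per_second = bit_rate_bps / (payload_bytes * 8)
    gap_us = 1e6 / packets_per_second

    print(f"~{packets_per_second:,.0f} packets/s, one packet every ~{gap_us:.1f} microseconds")
    # A file transfer may arrive in bursts with pauses; a live flow must hold this cadence
    # continuously, which is why real-time and non-real-time flows are engineered separately.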

Understanding formats: codecs, bitrates, files and streams


Understanding formats: codecs, bitrates, files and streams

Terrestrial television systems are encoding or formatting standards for the transmission and reception of terrestrial television signals.  


In digital terrestrial television, there are four main systems in use around the world now: ATSC, DVB, ISDB and DTMB.   


The two principal digital broadcasting systems are the Advanced Television Systems Committee (ATSC) standard, adopted in most of North America, and the Digital Video Broadcast – Terrestrial (DVB-T) system used in most of the rest of the world.


DVB-T was designed for format compatibility with existing direct broadcast satellite services in Europe (which use the DVB-S standard), and there is also a DVB-C version for cable television. 

ATSC

The terrestrial ATSC system (unofficially ATSC-T) uses a proprietary, Zenith-developed modulation technique called 8-VSB (vestigial sideband).


After demodulation and error correction, the 8-VSB modulation supports a digital data stream sufficient for one high-definition video stream or several standard-definition services.


This system was chosen specifically to provide maximum spectral compatibility between existing analog TV stations and new digital stations in the United States.


It also deals better with impulse noise, which is especially present in the VHF bands.

DTMB

DTMB is the digital television broadcasting standard of the People's Republic of China, Hong Kong and Macau.


This is a fusion system, which incorporates elements from DMB-T, ADTB-T and TiMi-3. 

DVB

DVB-T uses coded orthogonal frequency division multiplexing (COFDM), which uses as many as 8000 independent carriers, each transmitting data at a comparatively low rate.  


This system was designed to provide superior immunity from multipath interference, and has a choice of system variants which allow data rates from 4 Mbit per second up to 24 Mbit per second.   


DVB-S is the original Digital Video Broadcasting forward error coding and modulation standard for satellite television and dates back to 1995.   


DVB-S is used for broadcast network feeds, as well as for direct broadcast satellite services.  


DVB-C stands for Digital Video Broadcasting - Cable and it is the DVB European consortium standard for the broadcast transmission of digital television over cable. 

ISDB

ISDB is very similar to DVB; however, it is broken into 13 subchannels.


Twelve are used for TV, while the last serves either as a guard band, or for the 1seg (ISDB-H) service.   


The ISDB system types differ mainly in the modulations used, owing to the requirements of different frequency bands.


Unlike other digital broadcast systems, ISDB includes digital rights management to restrict recording of programming.

Digital Asset Management (DAM) and Media Asset Management (MAM)

A Digital Asset is any audio or video file that was recorded in some digital format and is intended for redistribution to viewers with rights to watch it.


Media Asset Management (MAM) helps to accommodate complex realities (a minimal catalog sketch follows this list), including:

  • searching, storing, retrieving, and distributing large volumes of metadata,   
  • workflow of multiple geographically-dispersed users,  
  • distribution over a variety of platforms.  
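
A minimal, hypothetical sketch of what that metadata handling looks like in practice: a tiny in-memory catalog that stores assets with proxy and high-resolution paths and answers tag searches from a single place. The field and class names are invented for illustration, not a real MAM product's API.

    # Minimal, hypothetical MAM-style metadata catalog (invented names, not a real MAM API).
    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        asset_id: str
        title: str
        proxy_path: str            # low-resolution proxy for browsing
        hires_path: str            # high-resolution master
        tags: set = field(default_factory=set)

    class Catalog:
        def __init__(self):
            self._assets = {}

        def ingest(self, asset):
            self._assets[asset.asset_id] = asset

        def search(self, tag):
            # Return every asset carrying the requested tag, searched in a single place.
            return [a for a in self._assets.values() if tag in a.tags]

    cat = Catalog()
    cat.ingest(Asset("A001", "Evening news opener", "/proxy/A001.mp4", "/hires/A001.mxf", {"news", "opener"}))
    cat.ingest(Asset("A002", "Weather graphics loop", "/proxy/A002.mp4", "/hires/A002.mxf", {"weather", "graphics"}))
    print([a.title for a in cat.search("news")])   # ['Evening news opener']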


Benefits of DAM:  

  • Fast searching in a single place.  
  • Easily share and distribute found content and its up-to-date versions.  
  • Reduce costs.  
  • Collaborate across departments and users. 
  • Keep track of content use.  


Every business or organization can improve the impact of their digital content via DAM. 


Here are the top five projected trends for DAM:


  • Artificial Intelligence will replace manual workloads for employees by automating the manual tagging of visual assets, and will be able to predict a user’s needs based on their past behaviors. 
  • Automation will take over asset organization in marketing companies and departments. 
  • The DAM software will recommend specific assets for a specific task within a campaign or activity. 
  • Cloud-based DAM is better than traditional cloud file storage, as it offers file sharing regardless of file size and features like version control, rights management, file rendering, and workflow templates. Metadata management allows the DAM system to suggest metadata options automatically, opening endless opportunities for searching and cataloging digital asset libraries. 
  • Blockchain is a secure, decentralized online ledger system that protects the ownership of digital assets and keeps the information from being misused, by allowing digital assets to be distributed but not copied. 


Media Asset Management (MAM) is a subcategory of DAM.  MAM is a multi-format, multi-vendor, multi-workflow federated content repository.   


The first MAMs were on-premise installations that gave broadcasters the ability to properly manage huge amounts of media files.

Command and control, automation & orchestration

Command and control is the all-encompassing automated set of processes that control the acquisition, file movement, handling and delivery of media.    


Monitoring tools help the system to manage the handling and quality control of media and metadata in the entire facility.   


The automation controls all the program source devices; this translates into receiving multiple instruction sets and then issuing multiple commands to play out content.


Editing systems control source machines, switchers and mixers; this is one of the primary examples of command and control. The editing system issues a set of commands to the source device each time an element is selected, and a different set of commands when the finished program has to be rendered for each delivery format.


This shows the layered topology in the IP environment and how command and control is one layer within a single IP transport stream: 

• Media 

• Metadata 

• Communication 

• Command and Control 


In the file-based broadcast and production environment, each process needs to be controlled. Ingest devices need to know when to start recording, plus the format profiles and where to place the media and metadata once created.


The production and media management systems need to be notified that the file is ready for use, and a media handling process controls the movement of the files.


Every aspect of the file-based environment is managed by the command and control layer and each process and device is controlled by an automated process.  


Orchestration is the new term for an all-encompassing command and control management system (a conductor).   


The conductor controls the ingest processes and devices, handles media movement, interfaces with media management and controls the master control automation system for delivery.   


It is a unified dashboard that shows all the active processes, device status and where the files are in the system, ensuring continuity of the flow of media and metadata.   


Business Process Management (BPM) tools are a layer of middleware applications that sit between other applications to integrate them or facilitate file movement and access.


Application Program Interfaces (APIs) are sets of programming tools that enable developers to build the software interfaces needed to integrate with third-party or customer-developed systems.


While many vendors offer “end to end” solutions, there really are no “beginning to end” products or services.   Files move between production, edit and distribution systems.  


Orchestration tools streamline this and add the “single screen” interface to monitor all the processes. This is what the orchestration technology must monitor and manage (a minimal sketch follows this list):

• Make sure devices are set to the correct parameters  

• Manage the transfer or transport of the encoded file to each stage in the process 

• Assign resources to each process 

• Balance the distribution of processes across applications, etc.  
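
Here is a minimal, hypothetical sketch of that kind of orchestration loop: a single conductor drives each process in order, records the outcome of every step, and exposes the result as one dashboard view. The step names and statuses are invented for illustration, not a real orchestration product's interface.

    # Minimal, hypothetical orchestration sketch: one conductor drives each process and tracks status.

    def ingest(item):     return f"{item}.mxf recorded"
    def transcode(item):  return f"{item} rendered to delivery formats"
    def deliver(item):    return f"{item} handed to playout automation"

    class Conductor:
        def __init__(self, steps):
            self.steps = steps       # ordered command-and-control processes
            self.dashboard = {}      # the "single screen" view of every process

        def run(self, item):
            for name, step in self.steps:
                try:
                    self.dashboard[(item, name)] = "done: " + step(item)
                except Exception as err:   # a failed process is flagged, not silently lost
                    self.dashboard[(item, name)] = f"failed: {err}"
                    break
            return self.dashboard

    conductor = Conductor([("ingest", ingest), ("transcode", transcode), ("deliver", deliver)])
    for step, status in conductor.run("NEWS-PKG-042").items():
        print(step, status)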


Orchestration is critical as more and more applications become cloud-based.
