IP and File Based Technology


Broadcasting is the transmission of audio and video material to a dispersed audience by radio, television, or other means. The receiving parties may include the general public or a relatively large subset of it. Broadcasting can also serve private, non-commercial purposes such as recreation, communications exchange, self-training and emergency communication, as in amateur (ham) radio and amateur television. Economically, there are several ways in which stations can sustain ongoing transmission.
The ways in which stations are financed include:
· In-kind contributions of volunteers' time and skills (common with community radio stations)
· Direct government contributions or operation of public broadcasters
· Indirect government fees, such as radio and television licenses
· Subsidies from foundations or corporations
· Selling advertising or sponsorships
· Public subscription or membership.
Regular television broadcasts began in 1937. Broadcasts can be categorized as "recorded" or "live". Recording allows errors to be corrected, redundant or unwanted material to be removed, segments to be rearranged, and slow motion, repetition and other techniques to be applied to improve the program. Many live events, such as televised sports, nevertheless incorporate slow-motion replays of major goals, hits and the like into the live broadcast. Most events are advertised as live even though they are often "recorded live" (sometimes described as "live-to-tape"). This is especially true of musical artists' appearances on radio when they visit for an in-studio concert performance. Similar situations have occurred in television ("The Cosby Show is recorded in front of a live studio audience") and in news broadcasting.

A broadcast can be physically transmitted in several ways. If it comes directly from the studio of a single radio or television station, it is simply sent through the studio/transmitter link to the transmitter and then out to the world from the antenna on the tower. Programming may also come via satellite, played live or recorded for subsequent transmission. Networks of stations can carry the same programming at the same time, originally via microwave link, now usually via satellite. Programming can also be distributed to stations or networks on physical media such as analog or digital videotape, compact disc (CD), DVD, and sometimes other formats. Such material is usually incorporated into another broadcast, as when electronic news gathering (ENG) returns a report to the station for inclusion in a news program.

The final step in broadcast distribution is how the signal reaches the listener or viewer. It may come over the air to an antenna and receiver, as with a radio or television station, or it may come through the station or directly from a network via cable television or cable radio (or "wireless cable").
The Internet may also bring the recipient internet radio or streaming-media television, especially with multicasting, which allows the signal and bandwidth to be shared. The term "broadcast network" is often used to distinguish networks that broadcast an over-the-air television signal, receivable with a television antenna, from so-called cable or satellite television networks. The term "broadcast television" may refer to such networks' programming.
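The bandwidth advantage of multicasting can be sketched in code: a receiver joins a multicast group, and the network then forwards one shared copy of the stream instead of a separate unicast copy per viewer. Below is a minimal sketch using Python's standard socket API; the group address and port used in the usage note are hypothetical examples.

```python
import socket
import struct

def make_membership_request(group: str, interface: str = "0.0.0.0") -> bytes:
    """Pack the ip_mreq structure passed to IP_ADD_MEMBERSHIP (an IGMP join)."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(interface))

def open_multicast_receiver(group: str, port: int) -> socket.socket:
    """Open a UDP socket subscribed to a multicast group. After the join,
    the network delivers one shared copy of the stream to every joined
    receiver rather than a per-viewer unicast copy."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

On a private media network an administratively scoped address would typically be used, e.g. `open_multicast_receiver("239.1.1.1", 5004)`.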



IP (Internet Protocol & MPEG compliant transport streams) and File Based Technology

First of all, let's look specifically at how media is distributed to the end user.
Broadcasting facilities are now beginning what many believe will be a global transition from the existing serial digital interface (SDI) infrastructure to an all-IP (networked) facility. The industry has not undergone such a transition since the switch from analog to digital in the 1990s.

On the surface, the transition seems rational, planned, and perhaps even straightforward, given the level of IP/IT integration that already exists in many facilities. Under the hood, however, both IT and broadcast engineering professionals are engaged in a paradigm shift in philosophy, facility design and service operations.

File-based technology is an end-to-end process, from ingest to playout, in which files contain the digital media (as opposed to digital videotape, which records media digitally but not as files). Although broadcasters have been moving to file-based workflows for over 10 years, the complexities of managing the process are more than technical. People and process are the key barriers to building a true file-based system and reaping its benefits. Broadcasters employ hundreds of staff with specific skill sets for each step of the workflow. A file-based system allows broadcasters to increase productivity, gain flexibility in producing content for multichannel delivery, and focus resources on developing new revenue-generating business models with the same or potentially smaller staff.

The primary functional advantage of file-based workflows is the coordination they provide among all users and the speed with which those users can execute their assigned tasks. These users range from reporters and editors to producers and managers. For some, this cross-functional workflow access is disconcerting, since the system now encompasses both production and business applications. The accessibility of data to non-technical staff raises concerns about who can access which information and why. Nevertheless, the efficiencies gained by easy access to centralized resources used for production and other purposes outweigh these concerns.

An optimal workflow must implement rules for access management and network security. Access management can control read, write, copy and edit access to stored assets. High-resolution and proxy files are available to reduce the need to copy and move large media files. Storage management determines the location of assets in online, near-line or offline storage and the version control of those assets. Shared storage increases operational flexibility because it reduces the time needed to transfer assets between applications.
A defined approval process alerts those responsible for reviewing and approving content when users access, edit and create it. Once again, incremental efficiencies are achieved through easy access to content and the presence of all approvers within the defined workflow. In the digital domain, broadcasters can now simultaneously create content for both linear and non-linear channels. A safe assumption is that content will be delivered to a broader range of devices (e.g., TV, desktop computer, laptop, and smartphone) across a number of networks (e.g., radio, cable, Web, and mobile). Integrating distribution requirements into the workflow helps broadcasters generate multi-format content while reducing or eliminating the high cost of repurposing it.

How can broadcasters handle the digital transition and its file-based workflow requirements? They should concentrate on their specific needs and targets. Do they produce live news, live sports or entertainment? Who are their viewers, and where? How will content be created, handled and prepared for playout, distribution and consumption? The key is to understand current operations and to define roles, responsibilities and workflows in order to identify areas for improvement.
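The access-management rules described above (role-based read/write/copy/edit control, plus proxy files to avoid moving large media) can be sketched as follows. This is a minimal illustration rather than a real MAM API; the roles, actions and field names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical role/permission model: each role is granted a set of actions.
PERMISSIONS = {
    "reporter": {"read"},
    "editor":   {"read", "write", "edit"},
    "producer": {"read", "write", "edit", "copy", "approve"},
}

@dataclass
class Asset:
    asset_id: str
    tier: str = "online"      # online / near-line / offline storage tier
    has_proxy: bool = True    # a low-res proxy avoids moving the large file

def allowed(role: str, action: str) -> bool:
    """Gate read/write/copy/edit access to stored assets by role."""
    return action in PERMISSIONS.get(role, set())

def pick_rendition(asset: Asset, purpose: str) -> str:
    """Serve the proxy for browse/review work; full resolution only when
    the final conform actually needs it (or no proxy exists)."""
    if purpose == "conform" or not asset.has_proxy:
        return "high-res"
    return "proxy"
```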

IP and the broadcast industry

Technological transition is pushing telecommunications, media and entertainment companies toward a new approach to how technology is deployed. Past changes, from black-and-white to color and from standard to high definition, unfolded over years, giving broadcasters ample time to move from one form of infrastructure to another. By comparison, today's change from high definition to 4K/UHD, and the likely moves to 8K and HDR/wide color gamut, will be much quicker. Broadcasters need a new approach to building critical production technologies, networks and processing systems. The new infrastructure must be easy to expand and update, and extremely cost-effective to implement. In short, it will have to be Internet Protocol (IP)-based. Common sense suggests that older hierarchical systems should give way to a decentralized infrastructure that allows rapid, affordable expansion and the adoption of new technologies. This has already happened in many other sectors; now it is the turn of broadcasting and media production. As part of this modernization, dedicated per-application transport mechanisms must be replaced by a converged IP-based infrastructure.

In short, broadcast, media and entertainment production is moving from fixed equipment, in which each task takes place on a single piece of gear and content is transferred from device to device, to a virtual environment where processing can take place anywhere and content can be stored anywhere. The broadcasting industry will ultimately benefit from cloud computing, in which the availability of fast, standards-based network infrastructure and cloud-based processing and applications increases productivity and agility while significantly lowering capital costs.

This transition has been made inevitable by three key drivers:
– Ever-increasing video bit-rate requirements. A single uncompressed HD video stream already saturates a current SDI link. Ultra HD (4K, 8K) bit rates far surpass this capacity and cannot be delivered over the existing infrastructure.
– New standards that break the dependency on proprietary (and expensive) techniques and platforms. Compliance with industry standards, such as the Society of Motion Picture and Television Engineers (SMPTE) 2022 suite, and with other standards and recommended practices in the works at SMPTE, the Video Services Forum (VSF) and the European Broadcasting Union (EBU), allows all parts of the system to interoperate.
– 10 Gigabit Ethernet maturity. High-speed Ethernet technology is now widely available at commodity prices, allowing large amounts of video data to be transferred and retrieved instantly and easily, and in turn moved to the cloud, just as in retail, Big Data or any other business.
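The first driver is easy to quantify: the bit rate of an uncompressed stream is simply width × height × bits per pixel × frame rate. With 4:2:2 10-bit sampling (about 20 bits per pixel), 1080p60 needs roughly 2.5 Gb/s and UHD-1 2160p60 roughly 10 Gb/s, beyond any single SDI link but within reach of 10 Gigabit Ethernet. A quick sketch of the arithmetic:

```python
def uncompressed_bps(width: int, height: int, bits_per_pixel: float, fps: float) -> float:
    """Active-picture bit rate of an uncompressed video stream."""
    return width * height * bits_per_pixel * fps

# 4:2:2 10-bit sampling averages 20 bits per pixel
# (10 for luma plus 10 for the alternating Cb/Cr samples).
hd_1080p60  = uncompressed_bps(1920, 1080, 20, 60)   # ~2.49 Gb/s
uhd_2160p60 = uncompressed_bps(3840, 2160, 20, 60)   # ~9.95 Gb/s

# A 3G-SDI link carries roughly 3 Gb/s: enough for 1080p60 but far short
# of UHD, while a single 10 GbE port can carry the UHD stream.
assert hd_1080p60 < 3e9 < uhd_2160p60 <= 10e9
```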

Understanding IP networks


CURRENT IP MEDIA PRACTICES Professional media environments have various means of moving content from point A to point B (i.e., compressed files or streaming media). Those points might be distribution points across a campus, between cities, or anywhere else. In most applications, the audio/video signal is transported in one of the many available compressed formats. Some formats are highly compressed for Internet distribution; others are moderately compressed, contribution-quality formats used, for example, to bring feeds from sports venues to studios or to consolidate output before transmission.
Compressed-media products prepare data for the subsequent stages of the content production chain. Few (if any) provide a means of transporting uncompressed, high-bit-rate data end-to-end across the network.

With this preface, we will concentrate on the new high-bandwidth, real-time, live broadcasting technologies using IP over a media-centric network.

HIGH BIT RATE, UNCOMPRESSED VIDEO TRANSPORT Broadcast production is starting to use new capabilities for the delivery and control of high-bit-rate (HBR), uncompressed (UC) signals across an IP network topology. These applications are specifically for live, real-time production activities. Along with commercial platforms and products, recent SMPTE standards (ST 2022-6 and -7, and ST 2110, published at the end of 2017) are driving new technical practices that are reshaping the transmission facility.

In many media facilities, "Internet Protocol" (IP) network technologies are already in use. The approaches apply to file-based workflows, data migration, storage and archiving, automation and facility command-and-control. Historically, these were not usually referred to as "IP." As UC/HBR video transport capabilities emerged, the nomenclature evolved; today "everything" seems to be IP, regardless of which technology the jargon is applied to.

IP is a set of rules ("protocols") governing the format of information sent over the Internet. In television and studio facilities, the "Internet" is more commonly just the "network." In effect, implementing even a limited set of IP technologies addresses the system improvements associated with studio/live media production and its operations throughout the content chain.
The design and construction of an IP facility will require a renewed technological approach to IT networking that is accompanied by a new attitude compared to traditional SDI facilities. To understand what it takes to design, build and operate the IP-based professional media facility, it is necessary to understand what is “real-time” (RT) IP and how it is distinguished from conventional SDI implementations (including file-based workflows or data storage).

One key objective in this will be to keep “audio and video (over IP networks) acting exactly as it does in an SDI-world” without the burdens or constraints of traditional SDI infrastructures. Ultimately, facilities will harness the advantages of network-based IP / IT systems for efficiency, versatility, price and extensibility / expandability.

SDI, born in the 1980s, was designed to enable the transmission and synchronization of audio/video from source to destination without interference and to minimize the generational quality losses associated with analog video and audio. SDI's isochronous design is straightforward for in-studio recording, continuous video and long-distance transport.

Frame-accurate (undisturbed) transitions with compressed data, while somewhat feasible in streaming media, are usually performed using peripheral equipment that essentially receives the compressed video and decompresses it to a "baseband" (SDI) form, where seamless transitions from A-source to B-source are made.

Each of these processes takes time, adding delay and making the sequence non-real-time. For most live systems, this makes cleanly switching sources while maintaining pacing and synchronization impractical.

Program video on YouTube and Netflix uses receiver buffering methods or adaptive bit rate (ABR) streaming features to make the "linear delivery" to audiences as smooth as possible. However, due to GOP (group of pictures) boundaries and compression/decompression latency, the ability to provide live, glitch-free source-to-source switching is curtailed.
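The ABR mechanism itself is simple to sketch: the player measures recent throughput and picks the highest rung of a bitrate ladder that fits within a safety margin. The ladder values below are hypothetical:

```python
# Hypothetical bitrate ladder in kb/s, highest rung first.
LADDER_KBPS = [8000, 5000, 3000, 1500, 800]

def select_rendition(measured_kbps: float, headroom: float = 0.8) -> int:
    """Pick the highest rendition whose bitrate fits within a safety
    fraction of the throughput the player has measured."""
    budget = measured_kbps * headroom
    for bitrate in LADDER_KBPS:
        if bitrate <= budget:
            return bitrate
    return LADDER_KBPS[-1]   # fall back to the lowest rung
```

For example, `select_rendition(4500)` returns 3000 here: 80% of 4.5 Mb/s leaves a 3.6 Mb/s budget, into which the 3 Mb/s rung fits.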

For professional media IP systems, real-time/live signals are transported over isolated, secondary or virtual networks (VLANs). For live, real-time HBR signal transport, it is important to adhere to new network topology and timing rules. Such "rules" are specified in SMPTE ST 2110 and/or ST 2022, which incorporate the IETF RFC implementations referenced in the new standards.

CENTRALIZED STAR NETWORK In terms of architecture, many broadcasters tend to adopt what is known as a “centralized star network” with all communications transiting through a massive IP router in the master control center.

The main limitation is that everything needs to go to the central router, which requires expensive fiber connections to each device.

Another problem is scalability. Capacity is often reached earlier than expected, requiring the central router to be replaced. Because each connected device occupies one expensive high-bandwidth port on the central router, low-bandwidth devices have a high cost per port.

Moreover, the lack of aggregation means that edge devices must handle redundancy themselves, requiring two connections to the central router, or one to each of the main and backup switches. Finally, the presumption that all traffic passes through the central router makes star network topology ill-suited to handling remote broadcast locations as extensions of the main facility.

SPINE LEAF The second model is “spine leaf architecture,” which involves two or more main (spine) routers and other smaller edge (leaf) routers.

This reduces the number of connections to the main routers, thus simplifying fiber management. It demands fewer ports on the central router(s) and provides better cost-per-port economics, especially for low-bandwidth devices.

Spine-leaf architecture reduces the cost of built-in redundancy while providing optimum flexibility and scalability. Networks no longer need to be over-dimensioned from the outset, as capacity can be added over time.

While true spine-leaf architecture may be more complex than other methods, it is a flexible, robust and high-performance design that is well tailored to broadcasters' needs.
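The fiber-count argument for spine-leaf can be made concrete with a rough sketch (the port counts and uplink ratios below are illustrative assumptions, not a design rule):

```python
import math

def star_links(devices: int, redundant: bool = True) -> int:
    """Centralized star: every device needs its own (often long) fiber
    run to the central router, doubled for a redundant second router."""
    return devices * (2 if redundant else 1)

def spine_leaf_links(devices: int, ports_per_leaf: int = 48,
                     spines: int = 2, uplinks_per_spine: int = 2) -> int:
    """Spine-leaf: short device-to-leaf links plus a handful of
    leaf-to-spine uplinks; only the uplinks need long fiber runs."""
    leaves = math.ceil(devices / ports_per_leaf)
    return devices + leaves * spines * uplinks_per_spine
```

For 200 devices, a redundant star needs 400 home-run fibers to the core, while this spine-leaf layout needs 200 short device-to-leaf links plus only 20 long leaf-to-spine runs.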

DUAL STAR A third architecture is the "dual star" system, which still uses two spine routers, but with the distinction that each leaf in the network is attached to only one spine.
Unfortunately, in terms of load distribution and optimization of total network capacity, this is not a flexible approach. As a "pseudo" spine-leaf approach, it places special requirements on end devices that need redundant connections as the network evolves.
This architecture's proponents usually favor automatic protocol-based routing over software-defined networking (SDN). Nevertheless, although the dual star is better suited to automatic routing, it is ultimately not the preferable option. Only a true spine-leaf architecture helps broadcasters make the most of their investment in IP technology.
Beyond network architecture, broadcasters must also manage and monitor the IP media network, weighing the comparative advantages of automatic routing and SDN.

AUTOMATIC ROUTING Automatic routing leaves the decision on how to deliver specific content streams to the network rather than to the user.

Although automatic routing, based on the IGMP and PIM protocols, is commonly used in IP networks worldwide, it has reliability and bandwidth-control issues.

Automatic routing may not be fast enough for live production and may cause problems with network loops. These can only be solved with greater technical complexity because, unless care is taken in planning and managing the network, there is a risk that it will be oversubscribed, creating uncertainty and signal dropouts. In addition, there are questions regarding security and protection, as flows to destinations are not explicitly regulated.

SOFTWARE-DEFINED NETWORKING SDN puts control of the routing in the hands of a central control layer. The management and orchestration software has a complete overview of the available equipment, network infrastructure and services. This makes it possible to take smart decisions about routing and controlling flows, and it provides the explicit routing capability that broadcasters expect and need.

SDN has many benefits. First, it delivers a higher level of performance than automatic routing, since the software controls every media flow. It is also valuable from a safety and security standpoint: the orchestration and control software can easily create diverse paths to protect against failures, and it can minimize security risks by explicitly regulating which destination receives which multicast.

In contrast to automatic routing, SDN with the correct software can manage any network architecture. All these advantages ultimately make it the control option of choice for building fully versatile, scalable and high-performance IP media networks.
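That control model can be sketched as a toy controller: a central table records which destination is authorized to receive which multicast, and a path is programmed only for authorized flows. This is a deliberately simplified illustration, not a real SDN controller API:

```python
class SdnController:
    """Toy central control layer: routes a multicast flow only to
    destinations explicitly authorized to receive it."""

    def __init__(self):
        self.authorized: dict = {}    # multicast group -> allowed destinations
        self.flow_table: list = []    # programmed (group, destination) paths

    def authorize(self, multicast: str, destination: str) -> None:
        """Record that a destination may receive a given multicast."""
        self.authorized.setdefault(multicast, set()).add(destination)

    def route(self, multicast: str, destination: str) -> bool:
        """Program a path only if the destination is authorized;
        unauthorized requests are simply refused."""
        if destination in self.authorized.get(multicast, set()):
            self.flow_table.append((multicast, destination))
            return True
        return False
```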

Notwithstanding the obvious advantages of IP technology, broadcasters should keep in mind that a successful IP infrastructure depends on a ground-up design of that infrastructure. Success also depends to a large extent on how individual elements within the network are controlled.

Broadcasters should architect the network on a true spine-leaf model and use SDN routing to control the elements within it. This is the most effective way to get the most out of IP technology, helping to deliver optimal returns on investment and better operational odds of success.

Understanding IP and File Based Technology


Professional media environments provide multiple means for moving video (compressed-files or streaming media) across the campus, between cities, etc.

The audio/video is transported in compressed formats of various kinds. Some are highly compressed for Internet delivery and others are mildly compressed.

However, there are few means to transport uncompressed, high-bit rate (HBR) content end-to-end over the network.

We'll focus on the latest applications for high-bandwidth, real-time, live broadcast production using IP over a media-centric network.

"Internet Protocol" (IP) network technologies are already in use at many media facilities for live, real-time production activities.

Due to recent SMPTE standards and initiatives, broadcast production is beginning to use new capabilities for transporting and manipulating high-bit-rate, uncompressed (UC) signals over an IP network topology.

The approaches are applicable to file-based workflows, data migration, storage and archive, automation and facility command-and-control.

RULES OF ENGAGEMENT—IT CHANGES EVERYTHING IP is a set of rules (“protocols”) which govern the format of the data sent over the internet.

For broadcast or studio facilities, “internet” is more appropriately called the “network.”

Designing and building an IP facility requires a renewed technological approach to IT networking compared to that for traditional SDI facilities.

One key target in this will be to keep audio and video (over IP networks) acting precisely the way they do in an SDI world, without its burdens or constraints.

WHERE ARE THE DIFFERENCES? The isochronous (time-bounded) nature of SDI, born of 1980s standards, is straightforward for live, continuous video inside the studio and for long-distance transport.

Frame-accurate (undisturbed) transitions with compressed video are generally accomplished using peripheral equipment, which receives the compressed video and then decompresses it to a "baseband" (SDI) form, where the transitions from A-source to B-source are then completed.

These processes take time, and it is impractical for most live applications to cleanly switch sources while maintaining timing and synchronization.

Program videos on YouTube or Netflix leverage receiver buffering techniques or use adaptive bit-rate (ABR) streaming functions to keep their “linear delivery” as seamless as possible to viewers.
For professional media IP systems, real-time live signals are transported over isolated, secondary or virtual networks (VLANs). For live, real-time HBR signal transport, new network topology and timing rules (protocols) must be adhered to. They are defined in the new standards SMPTE ST 2110 and/or ST 2022. Packet structure and formatting differ according to each data type's intended use. Different data traffic types are not mixed on the same VLAN. Real-time transport networks are conditioned to carry HBR traffic. Packets from senders (transmitters) are constructed based on IETF RFCs, such as the real-time transport protocol (RTP) and session description protocol (SDP), and supporting IEEE and SMPTE standards. The other conditions identified in the SMPTE ST 2110 and ST 2022 standards cover timing, synchronization, latency and flow control. File-based and streaming media are intended to, or can, run at variable data rates, while these real-time media networks must run continuously at non-wavering bit rates.
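The RTP packet format these standards build on (IETF RFC 3550) has a fixed 12-byte header carrying the version, payload type, sequence number, timestamp and source identifier. A minimal sketch of packing that header:

```python
import struct

def rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int,
               marker: bool = False) -> bytes:
    """Pack the fixed 12-byte RTP header defined in IETF RFC 3550.
    Version 2, no padding, no header extension, no CSRC entries."""
    byte0 = 2 << 6                                  # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
```

In ST 2110-20 the marker bit flags the last packet of a video frame, and the timestamp ties each packet to the PTP-derived media clock.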
In file-based transport, data from senders need not arrive at the receiver's input(s) isochronously.

In streaming media delivery, occasional interruptions or "buffering" are expected, while live real-time video must be synchronously time-aligned. That is why system designs now include distinct considerations for real-time and non-real-time signal flows.

Understanding formats: codecs, bitrates, files and streams

Terrestrial television systems define the encoding and format specifications for sending and receiving terrestrial TV signals. Until the late 2010s, three major analog television systems were in use around the world: NTSC, PAL, and SECAM. Today four primary technologies are in use globally for digital terrestrial television (DTT): ATSC, DVB, ISDB and DTMB.

Most digital TV systems are based on the MPEG transport stream standard and use the H.262/MPEG-2 Part 2 video codec. They differ significantly in the details of how the transport stream is converted into a broadcast signal, in the video format prior to encoding (or alternatively after decoding), and in the audio format. This has not prevented the creation of an international standard that includes both major systems, even though they are incompatible in almost every respect.

The two main digital broadcasting systems are the ATSC standards, developed by the Advanced Television Systems Committee and adopted as a standard in most of North America, and DVB-T, the terrestrial Digital Video Broadcasting system used in most of the rest of the world. DVB-T was designed for format compatibility with existing direct-to-home satellite services in Europe (which use the DVB-S standard, also used by some direct-to-home satellite dish providers in North America). While the ATSC standard also includes support for satellite and cable television systems, operators of those systems have chosen other technologies (mainly DVB-S or proprietary systems for satellite, and 256QAM replacing VSB for cable). Japan uses a third system, closely related to DVB-T, called ISDB-T, which is compatible with Brazil's SBTVD. The People's Republic of China has developed a fourth system called DMB-T/H.

ATSC The terrestrial ATSC system (unofficially ATSC-T) uses a proprietary Zenith-developed modulation called 8-VSB; as the name implies, it is a vestigial sideband technique. Essentially, 8-VSB carries data by modulating the amplitude of a vestigial-sideband carrier across eight discrete levels. This scheme was chosen in particular to ensure maximum spectral compatibility between existing analog TV stations and new digital stations in the already crowded U.S. broadcast spectrum. Although it is inferior to other digital systems in dealing with multipath interference, it is better at handling the impulse noise that is particularly present on the VHF bands. After demodulation and error correction, 8-VSB modulation provides a digital data channel of approximately 19.39 Mbit/s, enough for one high-definition video stream or several standard-definition streams.
On November 17, 2017, the FCC voted 3-2 in favor of allowing voluntary deployments of ATSC 3.0, planned as the successor to the original ("1.0") ATSC, and issued a Report and Order to that effect. Full-power stations that choose to deploy an ATSC 3.0 service will be required to simulcast their channels on an ATSC 1.0-compatible signal.

Typically, ATSC uses 256QAM on cable, although some operators use 16VSB. Within the same 6 MHz bandwidth, both double the throughput to 38.78 Mbit/s. ATSC is also used over satellite. Although these variants are informally called ATSC-C and ATSC-S, they have never been formally defined.
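These channel rates translate directly into fixed-size 188-byte MPEG transport-stream packets, which is a quick way to sanity-check multiplexer capacity:

```python
TS_PACKET_BYTES = 188   # fixed MPEG transport-stream packet size

def ts_packets_per_second(channel_bps: float) -> float:
    """How many 188-byte TS packets a channel rate carries each second."""
    return channel_bps / (TS_PACKET_BYTES * 8)

# 8-VSB terrestrial payload vs. 256QAM cable payload:
terrestrial = ts_packets_per_second(19.39e6)   # ~12,892 packets/s
cable       = ts_packets_per_second(38.78e6)   # ~25,785 packets/s
```

At 19.39 Mbit/s the terrestrial channel carries roughly 12,892 TS packets per second; the 38.78 Mbit/s cable channel carries about twice as many.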

DTMB DTMB is the digital television broadcasting system of the People's Republic of China, Hong Kong and Macau. It is a fusion system: a combination of different competing standards from different Chinese universities, incorporating elements of DMB-T, ADTB-T and TiMi3.

DVB DVB-T uses coded orthogonal frequency-division multiplexing (COFDM), which employs up to 8,000 independent carriers, each transmitting data at a relatively low rate. The system was designed to provide superior multipath immunity and offers a choice of system variants allowing data rates from 4 Mbit/s up to 24 Mbit/s. One U.S. broadcaster, Sinclair Broadcasting, petitioned the Federal Communications Commission to allow COFDM to be used instead of 8-VSB, on the grounds that this would improve the prospects of digital television reception by households without outdoor antennas (the majority in the U.S.), but the request was denied. (However, one U.S. station, WNYE-DT in New York, was briefly converted to COFDM modulation on an emergency basis for transmitting data to emergency services workers in Lower Manhattan following the terrorist attacks of September 11.)

DVB-S is the original Digital Video Broadcasting forward-error-coding and modulation standard for satellite television and dates back to 1995. It is used via satellites serving every continent, including North America. DVB-S is used in both MCPC and SCPC modes for broadcast network feeds as well as for direct-to-home satellite services such as Sky and Freesat in the British Isles, Sky Deutschland and HD+ in Germany and Austria, TNT SAT/FRANSAT and CanalSat in France, Dish Network in the United States and Bell TV in Canada. The transport stream delivered by DVB-S is designated MPEG-2.

DVB-C stands for Digital Video Broadcasting - Cable, and is the DVB European consortium standard for digital cable television broadcasting. It transmits an MPEG-2 family digital audio/video stream using a channel-coded QAM modulation.

ISDB ISDB is very similar to DVB, but the channel is divided into 13 sub-channels (segments). Twelve are used for television, while the last serves either as a guard band or for the 1seg (ISDB-H) mobile service. Like the other DTV systems, the ISDB variants differ primarily in the modulations used, owing to the requirements of the different frequency bands. The 12 GHz band ISDB-S uses PSK modulation, 2.6 GHz digital sound broadcasting uses CDM, and ISDB-T (in the VHF and/or UHF bands) uses COFDM with PSK/QAM. It was developed in Japan with MPEG-2 and is now used with MPEG-4 in Brazil. Unlike other digital broadcasting systems, ISDB includes digital rights management to restrict the recording of programs.

More details can be found in the attached presentation: Video streaming: Concepts, algorithms, and systems.
File based broadcasting
o Video/ audio/subtitling
 – Types of video
  · Resolution (SD – HD, ultra HD, HDR)
  · Framerate
  · (PAL/NTSC) format
  · Scan order (upper/lower field/progressive)
  · Compression/containers
  · Size
 – Types of Audio
 – Types of subtitling
  · SRT, STL
  · CC (closed captions)
 – Graphics
  · Alpha channel
o Advantages and disadvantages of files/IP streaming
o Compressed audio streaming – MP3 / AAC
o Compressed video streaming – H.264/HEVC/VP9
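Of the subtitle formats listed, SRT is simple enough to parse in a few lines, which makes it a useful illustration of file-based subtitle handling. Below is a minimal sketch for a single well-formed cue (no error handling):

```python
import re

# SRT timestamps look like HH:MM:SS,mmm
TIME_RE = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def srt_time_to_ms(stamp: str) -> int:
    """Convert an SRT timestamp (HH:MM:SS,mmm) to milliseconds."""
    h, m, s, ms = map(int, TIME_RE.match(stamp).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def parse_cue(block: str) -> dict:
    """Parse one SRT cue: an index line, a timing line, then text lines."""
    lines = block.strip().splitlines()
    start, end = (t.strip() for t in lines[1].split("-->"))
    return {"index": int(lines[0]),
            "start_ms": srt_time_to_ms(start),
            "end_ms": srt_time_to_ms(end),
            "text": "\n".join(lines[2:])}
```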

Digital Asset Management (DAM) and Media Asset Management (MAM)
What is a Digital Asset?

A digital asset is any audio/video file that has been recorded in some digital format and is intended for redistribution to viewers who have the rights to view it.

Digital content archives are the currency of contemporary broadcasting systems, but maintaining large media file libraries can be a major challenge for any broadcaster. Searching, storing, retrieving and distributing video and audio files for broadcast, and managing the associated metadata, can be time-consuming even for the simplest operations, because modern broadcasters do not operate in a simple world.

Media Asset Management (MAM) requirements must deal with complex realities, including:
– Large volumes of digital-asset-related metadata
– Multiple user workflows working together to generate and broadcast content
– The demands of networked but geographically dispersed broadcasters operating on a common platform
– Accessible web services and system user control for running station operations away from the studio
– Distribution across a range of platforms, including radio, terrestrial TV, CATV and digital signage

Benefits of DAM
– Quick search across millions of items stored and handled in one location: “one-stop shopping” makes discovering content simple, and powerful DAM search tools make it quick.
– Use the tools that should be available in a robust DAM system to easily share and distribute found content to others internally, externally or to social networks.
– Reduce costs by not having to recreate missing, damaged or misplaced material
– Work with departments and users
– Ensure brand integrity by making the most up-to-date versions of your content available to users across your organization
– Keep track of where and how material was used
– Reduce costs by centralizing this function

Digital asset management (DAM) has grown from a niche tool for large companies into a feature of marketing departments worldwide. When DAM began, it focused primarily on the media library and the cataloging, storage and sharing of the assets involved. Any company or organization can enhance the effect of its digital content by using and leveraging new DAM tactics and tools.
When selecting a DAM platform there are decisions to be made. A major shift toward digital data has forced companies to evaluate cost-effective ways of storing, organizing, retrieving, locating and distributing digital assets while maintaining the highest level of security. DAM systems help organizations maximize the value of their existing digital assets by supplying relevant resources to other tasks and digital media. Here are the top five developments in digital asset management we would hope to see.

Artificial Intelligence
It should be no secret at this point that artificial intelligence (AI) is starting to take over all aspects of business. DAM trends show that AI will take over manual employee workloads. Because DAM can eliminate the need for manual labeling of visual resources, it uses AI technologies to analyze content and generate corresponding metadata tags automatically. This enables businesses to quickly find what they need while saving hours of organization.

Experts predict that in the coming years, AI will only become more reliable and efficient. Such “machine learning” would help DAM systems to know their own assets and users better. Visual recognition can predict the needs of a user based on past behaviors. DAM systems will continue to integrate AI for video based on their previous searches and overall user profiles to provide recommendations for using content.

Automation
It is safe to say that in marketing companies and divisions, technology will take over resource management. Machine learning algorithms will let the DAM software suggest appropriate resources for a specific task in a project or event. The marketer can then pick the asset that works best for them based on their own personal preference or on objective criteria. As the DAM software learns from data and usage patterns, it will be able to surface more suitable assets from the system over time. Automation is needed to support today’s and tomorrow’s online marketers. Customer service and job efficiency will suffer without quick access to related digital assets.

Cloud-Based DAM
Recent years have seen a substantial increase in the recognition and usage of “the cloud.” It lets a company license software without hosting it, offering a safe and efficient way to store and preserve business content and assets. A cloud-based DAM is better than traditional cloud file storage, hard drives and shared drives because it provides file sharing irrespective of file size.

The cloud will continue to be embraced by DAM systems in the coming years. This storage type includes features such as version control, rights management, file rendering, and workflow templates. Using a cloud DAM system requires little or no IT intervention, and the protection of these services is generally better than that of organisations managing their own data access through sites such as Dropbox and the like.

Metadata Management
One of the best reasons to use a DAM system is to make the most of an organization’s time. Most companies work to ensure that all digital assets carry metadata, but accomplishing this manually takes a while. Tools can now anticipate relations between digital assets, allowing the DAM program to automatically recommend metadata options. This means companies can be more deliberate about the amount of metadata they hold in their records, opening endless opportunities to search and archive digital asset libraries.
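The metadata-recommendation idea can be illustrated with a toy sketch: count which tags co-occur with the ones already applied in an existing catalogue, and suggest the most frequent. The catalogue and tags below are invented, and a real DAM would use far richer signals (visual recognition, user profiles):

```python
from collections import Counter

# Toy illustration only: suggest metadata tags by co-occurrence in an
# existing catalogue. The catalogue entries below are invented.
catalogue = [
    {"goal", "football", "slow-motion"},
    {"goal", "football", "highlights"},
    {"interview", "studio", "football"},
]

def suggest_tags(partial_tags, catalogue, top_n=2):
    """Count how often other tags co-occur with the ones already applied."""
    counts = Counter()
    for tags in catalogue:
        if partial_tags <= tags:   # the asset carries every tag entered so far
            counts.update(tags - partial_tags)
    return [tag for tag, _ in counts.most_common(top_n)]

print(suggest_tags({"football"}, catalogue))  # "goal" ranks first (appears twice)
```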

Blockchain is a secure online ledger system that supports the creation and maintenance of DAM strategies. Blockchain preserves digital asset ownership and protects against abuse of the data. This is achieved by verifying the identity and rights of users before they can use an asset.

For organizations that deal with high-security data, blockchain is important. This type of technology is said to have created the backbone for a new type of internet by allowing digital assets to be distributed but not copied. Most experts predict that blockchain will play a key role in the sharing of digital assets between marketers and their customers, letting users monitor and record the use of resources without performing that time-consuming process manually.

Businesses provide a seamless experience for consumers across all technology channels by committing to a digital asset management platform. It provides a platform for providing customers with a unique experience and brand awareness while keeping assets and valuable information in the hands of the right people. There are a few items you need to make the best use of a digital asset management program for your company, but by reflecting on what is happening in the DAM environment, companies can make sure they are able to integrate the latest technology into their business plan.

In the broadcast industry, we refer to the term MAM, or Media Asset Management, which is a subcategory of DAM.

The birth of MAM
MAM, as we know it today, was first used as a term around 15 years ago with vendors such as Dalet, Ardendo (now Vizrt), and Blueorder (now Avid). At that time, broadcasters made their first steps towards digitalization, creating more sophistication and effectiveness challenges when managing digital content.

The first MAMs were highly tailored and generally involved a substantial upfront investment. The first MAMs often included installations on-site. However, the value of being able to manage massive amounts of media files properly, something that was simply not possible before, and the subsequent efficiency leap for broadcasters meant that the investment in time, money and valuable space was well worth it. So what we saw was more content with less effort and some very happy broadcasters–and happy MAM suppliers.

A MAM system should manage not only the actual storage of assets but also their collection (ingest, input), categorization (annotation, tagging, ordering, metadata), retrieval (search, display, extraction) and finally distribution (transcode profile, user rights, physical delivery). A MAM system must also be capable of encoding, decoding and transcoding video and audio formats.
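The MAM functions just listed (collection, categorization, retrieval, distribution) can be sketched as a minimal in-memory registry. All class and field names here are illustrative, not any vendor's API:

```python
from datetime import datetime, timezone

# A minimal sketch of core MAM functions (ingest, categorize, search) as an
# in-memory registry. Names are illustrative, not any vendor's API.
class MiniMAM:
    def __init__(self):
        self.assets = {}
        self.next_id = 1

    def ingest(self, filename, **metadata):
        """Collection step: register a file together with its metadata tags."""
        asset_id = self.next_id
        self.next_id += 1
        self.assets[asset_id] = {
            "filename": filename,
            "ingested": datetime.now(timezone.utc).isoformat(),
            **metadata,
        }
        return asset_id

    def search(self, **criteria):
        """Retrieval step: return ids of assets matching every criterion."""
        return [aid for aid, meta in self.assets.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

mam = MiniMAM()
mam.ingest("news_0412.mxf", codec="AVC-I 100", rights="internal")
mam.ingest("promo_hd.mov", codec="DNxHD", rights="cleared")
print(mam.search(rights="cleared"))  # the promo asset's id
```

A real MAM would back this with a database, proxies and user rights, but the shape of the ingest/search cycle is the same.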

MAM is a system used in a distributed, collaborative environment to archive and manage time-based media and workflows. It is a multi-format, multi-site, multi-workflow federated content repository that promotes innovation, traction and operational efficiency throughout an asset’s entire life cycle. Vendors of asset management need to change as workflows shift into the cloud. One way they do this is to break part of their service package out into orchestration, but as with MAM, it is not entirely clear that everyone means the same thing when they use the word. Orchestration is an important part of a MAM process, but it is not a MAM system. It includes integration with other technologies (production, newsroom, playout automation, CMS, VOD, recording, video processing). Its implementation is best delivered through some form of enterprise service bus. This has been the ‘holy grail’ of news and advertising, but with so many different systems and formats to integrate, it has been and will remain a huge challenge.

Command and control, automation & orchestration

playout studio

Merriam-Webster defines orchestration as “the arrangement of a musical composition for performance by an orchestra”.

The computer industry defines it as “the automated arrangement, coordination, and management of complex computer systems, middleware, and services”.

They’re not actually very different, so what does any of this have to do with broadcasting? Quite a lot!

Automation has been an integral part of broadcasting systems since the 1950s, and it has progressed from broadcast playout into more complex and sophisticated production systems. Every aspect of production and broadcasting uses automation tools to create, manage, transport, distribute, protect and archive program content. File-based workflows rely on automated processes to handle and transfer data across the entire media architecture.

The transition to an IP architecture has brought significant improvements in command and control and in the analysis and maintenance of systems. Command and control is a comprehensive automated set of processes governing media acquisition, file movement, handling and delivery. Monitoring is more than just a set of scopes and metrics: dashboards and browsers offer process management tools to manage media and metadata processing and quality control throughout the facility.

Automation was originally the control system that fed the traffic schedule into the play-out system. The automation manages all the program source devices (tape machines, servers, routers and master control switchers) to start programming according to the traffic schedule’s instructions. With the sector switching to multi-channel distribution, the requirements on the automation system have become more challenging: it now receives multiple sets of instructions (traffic schedules and playlists) and issues various commands to the different devices it controls to play content. And while automation has grown to support this, it is now just one component in an IP broadcast center’s overall automation system.

Editing systems’ control of source machines, switchers and mixers is one of the primary examples of command and control. Every time an item is selected, the editing system issues a set of commands to the source device; when the completed program is rendered for each delivery format, it issues a different set of commands.

Command and control has evolved from serial RS-232/422/485 with GPI/O relay closures to an IP layer in a stream- and file-based architecture that manages media management and movement as well.

– Metadata
– Communication
– Command and Control

This layered topology of the IP world shows how command and control is one layer in a single IP transport stream that carries media, metadata and communication in addition to command and control.

Stream- and file-based technologies and workflows introduced new requirements for larger command and control systems, which now control the whole media lifecycle from concept to distribution and beyond, with all the steps along the way.

Every process in the file-based broadcast and production environment needs control. Ingest devices need to know when to start recording, which format profiles to use, and where to place the media and metadata once they have been created. The production and media management systems must be alerted that the file is available for use. A media management process manages the movement of data across the different business and distribution systems, and across different storage areas, for output, media management, archiving and delivery.

The command and control layer manages every element of the file-based environment, and typically every process and device is controlled by an automated process. Even if a process is manual, the controller is a dashboard or an independently integrated control system that manages the manual process and shows the status and progress of files and streams throughout the media management environment.

Orchestration is the new term for an all-encompassing command and control management system. Such systems are also called conductors. Conductors provide the command and control process with a single dashboard and host the rules and policies by which all systems and procedures are controlled.

The conductor is the command and control center’s dashboard. It displays all the active processes, the device status and the files in the system. The conductor controls the ingest processes and devices, handles media movement, interfaces with media management, and controls the delivery automation of the master control system. Under schedules, rules and policies, the conductor must run processes and track their progress, maintaining priorities and ensuring a consistent flow of media and metadata.

As cloud services were introduced and hosting application development in the cloud became more popular, the need emerged for a single application to monitor and handle all these growing and incompatible processes and file movements.

Orchestration is the next tier of automation and command and control. Instead of each process providing its own control, monitoring and management layer, it uses a single tool to unify them into one control application. This application manages and controls processes through a simple, easy-to-follow user interface, sending commands to each device and subsystem and reading and reporting the status of each process.

Since broadcast and production infrastructure has become IP- and file-based and relies on software applications and databases, several new application and middleware technologies are available on the market to manage it.

Many vendors that offer file-based solutions provide several services for recording, encoding, storing and sharing data. They all have tools within their own products to control each of these processes and handle the content. They also all offer tools or interfaces (APIs, Application Program Interfaces): collections of coding functions that allow their own developers or customers to create software interfaces to integrate their products with systems developed by third parties or customers. This could be database integration for asset management, or the transfer of media files from encoders and transcoders to editing systems.
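The API-based integration described here often amounts to hiding dissimilar vendor interfaces behind one common call. A sketch under that assumption, where both vendor classes are invented stand-ins for real SDKs:

```python
# Sketch: wrapping two dissimilar vendor control APIs behind one interface so
# a MAM or orchestration layer can drive either. Both vendor classes are
# invented stand-ins for real SDKs.
class VendorATranscoder:
    def submit_job(self, src, profile):
        return f"A:{src}->{profile}"

class VendorBEncoder:
    def encode(self, input_path, preset_name):
        return f"B:{input_path}:{preset_name}"

class TranscodeAdapter:
    """One transcode() call, whichever backend is installed."""
    def __init__(self, backend):
        self.backend = backend

    def transcode(self, src, profile):
        # Route to whichever method the installed backend exposes
        if hasattr(self.backend, "submit_job"):
            return self.backend.submit_job(src, profile)
        return self.backend.encode(src, profile)

for backend in (VendorATranscoder(), VendorBEncoder()):
    print(TranscodeAdapter(backend).transcode("master.mxf", "h264_web"))
```

This adapter pattern is one common way such third-party integrations are built; an enterprise service bus generalizes the same idea across many systems.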

As the broadcasting and production sectors adopt and adapt Enterprise IT tools, some of the most crucial tools are management and monitoring.

Orchestration is the next generation of automation and command and control. A stream- and file-based workflow is not a single-thread operation: at each stage of production and broadcast there are multiple pieces of equipment and applications. Several encoders process various formats and bitrates, with files moving down one path and streams down another. Asset management associates metadata with media and gives business applications access for protection and monetization. Files move between production, editing and distribution systems. While many vendors offer end-to-end solutions, there are really no “start-to-end” products or services.

Trying to manage all this through a different dashboard (user interface) for each program is too difficult; a “one window” interface that controls all the processes simplifies it.
The Master Control Room (MCR) is now more of a Network Operations Center (NOC), and orchestration is a key component of it. The list below outlines some of the tasks that the orchestration system must control and manage:

– Make sure that devices are set to the right parameters (i.e. encoders and transcoders)
– Configure and schedule data for each device based on traffic
– Manage the conversion and/or delivery of the encoded file at each stage of the process
– Assign resources to the individual processes
   – Fiber or Satellite circuit
   – Encode chain
   – Storage location
   – Play out chain
   – QC chain
– Track each system to overcome any conflicts, such as resource allocation
– Monitor the use of the network
– Balance the process distribution across applications to optimize performance
– Track the state of any alerts or alarms
– Automated control of QC systems
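Several of the tasks above (configuring devices, chaining encode, QC and delivery, tracking alarms) can be sketched as a toy conductor loop that drives each device and records status on a single dashboard. The device names and the simulated failure below are invented for illustration:

```python
# A toy "conductor": one loop that walks a scheduled chain of processes,
# sends each device its command, and records status in one place (the
# "one window" described above). Devices and the failure are invented.
def deliver(src, dest):
    raise IOError("link down")       # simulate a failed delivery leg

devices = {
    "encode":  lambda src, profile: None,
    "qc":      lambda src: None,
    "deliver": deliver,
}

def run_chain(chain, devices, dashboard):
    for step, params in chain:
        try:
            devices[step](**params)
            dashboard[step] = "done"
        except Exception as err:
            dashboard[step] = f"alarm: {err}"
            break                    # stop the chain and surface the alarm
    return dashboard

chain = [
    ("encode",  {"src": "master.mxf", "profile": "AVC-I 100"}),
    ("qc",      {"src": "master.mxf"}),
    ("deliver", {"src": "master.mxf", "dest": "playout"}),
]
print(run_chain(chain, devices, {}))
```

A production conductor would add scheduling, priorities and retries, but the essential shape (command out, status back, one dashboard) is the same.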

The need for orchestration has made it a standard next-generation automation technology. Once computers, software and systems all interact within the IP network through a command and control layer, they can communicate with each other and be orchestrated. Orchestration is also important as more and more programs are cloud-based: integrating cloud services into the production environment relies on orchestration software for streamlined process management and control.

Media and business operations workflows and integration

MCRplayout mastercontrolroom

What are Broadcast Media Process Flow Charts?

Amid digital-age change, most broadcast media operations run on legacy systems that do not play nicely with each other. Companies spend an excessive amount of time, effort and money keeping these systems up to date, and even then their complexity makes business operations less transparent. By showing the individual work steps or activities performed during a broadcast media business process, flow charts or workflows enable greater transparency. Media companies can then identify potential gaps and automation opportunities.

As you might imagine, broadcast media consists of several different business operations, although they all share the larger purpose of promoting the creation, editing and broadcasting of edited works. With every production, not all of these processes happen internally; raw media can, for example, be sent to third-party editing houses. Process flow charts in broadcast media can help make these hand-offs apparent and pay off in processes like:
Sourcing Content: Negotiations with local affiliates over syndication rights can be hampered by excessive, redundant communication between different stakeholders. A process analysis project can help identify potential bottlenecks and serve as a first step in manual process automation.
Program Planning: Selecting the optimal schedule of programming on a daily, weekly, monthly, quarterly, or season-long basis requires a precise schedule of review and approval to make sure broadcasts are put out on time.

As we have discussed, there are different sub-processes that make up broadcast media as a whole. To some extent, however, they all serve as functions of the overall process model for broadcast media: the production workflow.

Generally speaking, this workflow or flow chart has five phases, starting with media creation and finishing with a final product. The phases can differ in some parts of the process, but tend to follow the pattern below:
Acquisition: multimedia creation whether from audio or film cameras, recording system, graphics software, etc.
Staging, scanning, logging: the collected content is primed for editing. Employees typically log raw material, check for the right takes, quotations, riffs, etc., and make notes detailing which parts to keep.
Editing: where the magic takes place, and most of the work. In a final product, recorded raw material starts to be brought together. This is a repeat process, with rough cuts going on continuously until a final version is approved.
Management: Assets are managed and archived, marked, assigned rights and may be sent for use to affiliates or production houses.
Delivery, broadcasting, distribution: product for consumption is made available. This can take several forms depending on the product’s nature, i.e. airing during newscast, posting online, broadcasting via online video services, etc.
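The five phases above can be modeled as a simple ordered state machine. A minimal sketch (a real system would also allow the repeated editing loop described in the text):

```python
# The five production-workflow phases as a simple ordered state machine.
PHASES = ["acquisition", "staging", "editing", "management", "delivery"]

class ProductionItem:
    def __init__(self, name):
        self.name = name
        self.phase = PHASES[0]

    def advance(self):
        """Move to the next phase; stay put once delivery is reached."""
        i = PHASES.index(self.phase)
        if i + 1 < len(PHASES):
            self.phase = PHASES[i + 1]
        return self.phase

item = ProductionItem("evening_news_pkg")
while item.phase != "delivery":
    item.advance()
print(item.name, "reached", item.phase)
```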


– 2K, 4K and 8K Resolution formats larger than HD. HD has a horizontal resolution of 1920 pixels; 4K, for example, has a horizontal resolution of 4096 pixels.
– AAF Advanced Authoring Format. Avid Media Composer uses this type of file, which contains video and audio clip information, including metadata and sequence information.
– AVC-I 100 HD Codec specified for the finished programs in the DPP file delivery specification. It uses 100 Mbs bitrate. Some Panasonic cameras can record it and some NLEs support it as a native codec.
– Bitrate The amount of data captured per second. Typically, the higher the bitrate, the higher the quality of the captured video or audio, but also the larger the file size.
– CBR Constant Bitrate. As the name suggests, the bitrate remains the same throughout the length of the recording.
– CF Card CompactFlash card, a low-cost consumer camera card used in high-end DSLR cameras and in the Canon XF305 and C300.
– Codec A video or audio encoding format, each with its own separate codec. Because of the huge file sizes and the difficulties in handling them (storing, transferring, editing), it is not common to deal with uncompressed full-frame formats. Based on their settings, cameras capture and encode video and audio automatically into the required codecs. Many cameras can capture several codecs with different parameters (such as bitrates), and you can choose between them for different reasons. Similarly, NLE packages may not accept all codecs and may require a transcode to a compatible format.
– Cloud – The cloud refers to Internet-accessible remote services.
– Conform Re-linking the edit sequence to the high-resolution master media after editing with low resolution.
– Data Wrangler Someone on site who ensures that all tapeless media are properly copied and backed up. It is also a function that edit assistants perform when they ingest and transcode media before editing.
– DIT Digital Imaging (or Image) Technician. A relatively new position, created in response to the transition from traditional film to digital cameras using formats such as HD, 2K and 4K. Since digital video reacts differently from film, the DIT’s job is to work with the DOP/camera operator to help achieve the best results.
– DNxHD Avid Media Composer’s preferred codec. HD footage is typically recorded or transcoded at 185 Mbps.
– DSLR Digital Single Lens Reflex Camera. These were traditionally stills only cameras, but they have been able to shoot video since 2007. Small production companies are increasingly using them for a number of reasons: the quality of the lenses and images at low light levels are both high, and this makes them a natural choice for many companies when combined with the large sensor and the fact that they are cheaper than most traditional broadcast cameras. However, some broadcasters do not accept the footage as HD because it is shot at less than 50 Mbps, but later models can now shoot at 50 Mbps. Older models have also not registered a time code and have not been able to record synchronized audio, but later models are also starting to address these problems.
– EDL Edit Decision List. A file that describes an edit sequence in terms of file references and in/out timecodes. It can be used to share an edit sequence or to transfer edit decisions from offline to online (low-res to high-res). A modern equivalent is the AAF, which can also include a much richer collection of metadata such as effect parameters and caption text.
– FCP Apple’s Final Cut Pro NLE Suite.
– IP – Internet Protocol, which specifies a communication method between computers on a network. Each device has an IP address, a numerical label that lets other machines recognise it and know where to send information when communicating or downloading files.
– LTO – Linear Tape-Open, a digital tape format often used for archiving and file-based media backup.
– MAM Media Asset Management (system). A system that enables video and audio content to be searched and browsed.
– Master/Mastering – Creation of a TX Master. Traditionally, mastering has always been to a HDCAM SR tape, but in a file-based world this might be the file on a XDCAM disk, HDD or transferred over a network.
– Metadata Technical and contextual data on the recorded and edited audio and video content. This would have included written information on the tape or notes in the tape cartridge in the tape world. It relates to information labeled against the card or file in the file-based world.
– Physical Metadata: Card labelling.
– Technical metadata: facts contained in the wrapper of the folder, such as encoding, bitrate and file length.
– Descriptive Metadata: logged data identifying the video / audio content.
– Native editing Editing media in the original ‘native format’ it was created in (without prior transcoding or transwrapping).
– Original format – The format in which the camera encodes the media (also known as the reference format).
– NLE Nonlinear Edit[suite]; for example, Avid, Apple Final Cut Pro, Adobe Premiere and Sony Vegas.
– P2 – Refers to Panasonic P2, a camera type that records to Panasonic’s proprietary P2 card.
– Proxy A lower-resolution or lower-bitrate copy of the master media used for reviewing or editing. Proxies are sometimes generated in camera, on ingest or at the beginning of editing to facilitate media handling.
– SxS – A solid-state card format used by cameras such as the Arri Alexa and Sony XDCAM EX.
– Scanning How the image refreshes. Progressive and interlaced are the two main types.
– Progressive: a scanning mode in which the entire image refreshes at the same time. This gives recorded images a filmic quality at low frame rates such as 25 fps.
– Interlaced: a scanning mode in which the odd lines of pixels are scanned first, followed a fraction of a second later by the even lines. It produces two fields of pixels that give a perceived doubling of the image’s refresh rate, but where each field has half the resolution of the full frame. For example, at 25 frames per second the image is divided into 50 interlaced fields per second, giving smoother motion than the ‘film’ look of progressive scanning.
– PsF: Many imaging systems use a combination of the two scanning styles called Progressive Segmented Frame or PsF. Progressively captured images are reprocessed for interlacing in camera. A PsF image has the same picture quality as a progressive image, although it is interlaced.
– Schema: Definition of an optional or required field structure and permitted data entry values. Used often in relation to file or database metadata.
– Transcode Converting from one codec to another. This is sometimes required because the destination (such as an edit suite) may not be compatible with the source codec. You should aim to minimize transcoding: every transcode causes a small loss of quality, often referred to as a ‘generational loss’; converting between codecs can take a lot of time and computer processing power; and storing both the original and destination codecs (at least in the short term, as you may later decide to keep only one) requires more storage.
– Transwrap: Change the wrapper of the file.
– TX: Abbreviation for Transmission.
– VBR Variable Bitrate. As the name suggests, the bitrate changes over the duration of the recording. For example, a constant bitrate of 100 Mbps remains the same at every point, but a variable bitrate of 100 Mbps may be higher or lower at any point while averaging about 100 Mbps.
– Wrapper Also known as a file ‘container’. This is the file structure around the video/audio codec that contains technical metadata about the file. Video/audio players and editors use this information to understand how to play or edit the file (assuming they understand the format, because not all players can play all formats).
– XDCAM EX A Sony capture format used by cameras such as the PMW-350 and EX3. The data is recorded on SxS cards. The DPP broadcasters consider only the 50 Mbs version as acceptable-quality HD.
– XDCAM HD422 A recording format used by a range of Sony cameras such as the PDW-700. Content is recorded at 50 Mbs on removable optical disks.
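The Bitrate, CBR and VBR entries above imply a simple size calculation: file size is bitrate times duration, ignoring container overhead. For example:

```python
# Quick arithmetic behind the Bitrate/CBR/VBR entries: size follows directly
# from bitrate times duration (container overhead ignored).
def file_size_gb(bitrate_mbps, duration_s):
    bits = bitrate_mbps * 1_000_000 * duration_s
    return bits / 8 / 1_000_000_000   # bits -> bytes -> gigabytes

# One hour of AVC-I 100 (100 Mbps CBR):
print(round(file_size_gb(100, 3600), 1))  # 45.0 GB
```

For VBR material the same formula holds with the average bitrate substituted.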