SP106-DRL360: 360-degree Video Streaming with Deep Reinforcement Learning


360-degree videos have gained popularity in recent years, owing to advances in panoramic cameras and head-mounted devices. However, because 360-degree videos are usually in high resolution, transmitting the content requires extremely high bandwidth. To protect the Quality of Experience (QoE) of users, researchers have proposed tile-based 360-degree video streaming systems that allocate high/low bit rates to selected tiles of video frames for streaming over the limited bandwidth. It is challenging to determine which tiles should be allocated a high or low rate, because (1) the video playback involves many features that change dynamically over time as the rate allocation is made; (2) most state-of-the-art systems rely on a fixed set of heuristics to optimize a specific QoE objective, while users may have various QoE objectives that need to be optimized in different ways. This paper presents a Deep Reinforcement Learning (DRL) based framework for 360-degree video streaming, named DRL360. The DRL360 framework improves system performance by jointly optimizing multiple QoE objectives across a broad set of dynamic features. The DRL-based model adaptively allocates rates for the tiles of future video frames based on observations collected by client video players. We compare the proposed DRL360 to existing systems through trace-driven evaluations as well as a real-world experiment over a wide variety of network conditions. Evaluation results reveal that DRL360 adapts to all considered scenarios and outperforms the state-of-the-art approaches by 20%-30% on average across different QoE objectives.
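To make the tile-based rate allocation concrete, the sketch below shows the kind of per-frame decision the abstract describes: given a predicted viewport and an estimated bandwidth budget, assign each tile a high or low bitrate. This is a minimal illustrative stand-in (a greedy heuristic, not the paper's learned DRL policy); the rate levels, function names, and inputs are assumptions for illustration only.

```python
# Illustrative sketch of tile-based rate allocation (NOT the paper's
# actual DRL360 policy): tiles most likely to fall in the user's
# viewport are upgraded to a high bitrate until the bandwidth budget
# is spent. The rate levels below are assumed values, not from the paper.

RATE_LEVELS = [1.0, 20.0]  # Mbps: low and high tile bitrates (assumed)

def allocate_rates(viewport_probs, bandwidth_mbps):
    """Greedy stand-in for a learned policy: start every tile at the
    low rate, then upgrade tiles in order of viewing probability
    while the total stays within the bandwidth budget."""
    low, high = RATE_LEVELS
    rates = [low] * len(viewport_probs)        # default: lowest rate
    budget = bandwidth_mbps - sum(rates)       # remaining headroom
    # Consider tiles from most to least likely to be viewed.
    order = sorted(range(len(viewport_probs)),
                   key=lambda i: viewport_probs[i], reverse=True)
    for i in order:
        upgrade_cost = high - low
        if budget >= upgrade_cost:
            rates[i] = high
            budget -= upgrade_cost
    return rates

# Example: 8 tiles, two likely in the viewport, 50 Mbps available.
probs = [0.9, 0.8, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]
rates = allocate_rates(probs, bandwidth_mbps=50.0)
```

A DRL agent such as the one described in the paper would replace this hand-written rule with a policy learned from playback observations, so it can trade off multiple QoE objectives rather than a single heuristic.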

