
Simple Streaming Client Sample


The Simple Streaming Client sample implements a simple streaming client application which streams video and audio from a server, such as the Remote Desktop Server sample. It has the following functionality:

  • Receiving, decoding, post-processing and displaying a video stream from the server
  • Receiving, decoding, post-processing and playing an audio stream from the server
  • Capturing user input from a keyboard, a mouse and a game controller connected to the client machine and sending it to the server
  • Optionally encrypting all network traffic using AES encryption with a pre-shared password to generate the encryption key
  • Standard Dynamic Range (SDR) and High Dynamic Range (HDR) streaming when supported by the OS and codec (HEVC and AV1 on Windows only)
  • Server auto discovery or direct connection via a URL
  • Streaming over UDP and TCP
  • Extensive logging

Video post-processing includes the following:

  • Video scaling, including high-quality upscaling when the resolution of the video stream received is lower than the resolution of the client display
  • Compression artifact removal (denoising)

The following video codecs are supported (subject to hardware support):

  • h.264/AVC
  • h.265/HEVC
  • AV1

Audio post-processing includes the following:

  • Resampling
  • Remixing to the client's speaker layout

The following audio codecs are supported:

  • AAC
  • Opus

The following hardware configurations are supported:

  • An AMD or Intel CPU or APU

Microsoft Windows:

  • An AMD, NVIDIA or Intel GPU with a hardware-accelerated video decoder supporting at least one of the supported codecs. NOTE: additional steps are required to run the client sample on a non-AMD GPU. See the Running the Client Sample on a non-AMD GPU on Windows section for more details

Linux:

  • An AMD discrete GPU or APU supported by the AMDGPU Pro driver

Running the Simple Streaming Client Sample on Windows and Linux

Run the SimpleStreamingClient executable with the following command line parameters (command line parameters are not case-sensitive):

Connection Parameters:

  • -Server <url> - specifies the URL of the server to connect to. The URL is provided in the following format: [protocol://]ip address[:port], where the protocol can be either UDP or TCP. The protocol and the port are optional and default to UDP and 1235 respectively when not provided explicitly. Note that TCP must be enabled on the server in order for the client to be able to connect using TCP. UDP transport is available regardless of the server configuration. When the -Server command line argument is not provided, the client performs automatic server discovery by sending a UDP broadcast to the local IP subnet on the port specified by the -Port command line argument (see below). The client attempts to connect to the first responding server automatically using the preferred protocol provided by the server's response. NOTE: automatic server discovery can only detect servers connected to the same IP subnet. If your subnet spans multiple physical segments connected by routers, please make sure that the routers allow UDP broadcasts to travel across the segments.
  • -Port <port> - specifies the UDP port the UDP discovery broadcasts are sent to. This parameter also defines the port for direct UDP or TCP connections when the -Server command line argument is specified. Default: 1235.
  • -Encrypted [true|false] - enables optional AES encryption of all network traffic between the client and the server. You must provide the -Encrypted command line parameter with the same value on the server in order for the client to be able to connect. You must also specify the same passphrase on the client and the server using the -Pass command line argument (see below). NOTE: while the actual data streams between the server and the client, including video, audio, controller events and application-defined messages sent over the AMD Transport, are encrypted when encryption is enabled, discovery broadcasts are sent in plaintext regardless of the value of the -Encrypted argument.
  • -Pass <passphrase> - specifies the pre-shared passphrase used to generate the AES encryption key when -Encrypted is set to true. The same passphrase must be specified on both the client and the server.
  • -DatagramSize <size> - datagram size in bytes. Applicable only when streaming over UDP. For more information please refer to the Configuration Considerations section. Default: 65507.
  • -DeviceID <id> - specifies a client device ID. This ID is used to uniquely identify the client to the server. When not supplied, a random value will be generated. It is recommended that you generate a unique value in your application and persist it across multiple runs.
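For example, assuming the SimpleStreamingClient executable is in the current directory (the IP address, port and passphrase below are placeholders), an encrypted direct TCP connection and a plain auto-discovery session could be started as follows:

```sh
# Connect directly to a server over TCP with encryption enabled
SimpleStreamingClient -Server TCP://192.168.0.100:1235 -Encrypted true -Pass MySecretPhrase

# No -Server argument: discover servers on the local subnet via a UDP broadcast
SimpleStreamingClient
```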

Video Parameters:

  • -Uvd [0|1] - hardware-accelerated decoder instance. Certain AMD GPUs, such as the Radeon RX 6800/6900/7800/7900 series, have two instances of the video codec engine. You can specify which instance is to be used for video decoding when running the sample on GPUs that have more than one decoder instance. Default: 0.
  • -PreserveAspectRatio [true|false] - specifies whether the video stream's aspect ratio should be preserved when it is different from the aspect ratio of the display connected to the client. Default: true.
  • -VideoDenoiser [true|false] - specifies whether the video denoiser (compression artifact remover) should be enabled or disabled. Keep in mind that the video denoiser will increase the overall latency and might limit the frame rate at high resolutions when running on low-end or mobile hardware. Enable it when the hardware is capable of maintaining the desired frame rate at the desired stream resolution and when latency (responsiveness) is less important than image quality. Enabling the video denoiser is useful when streaming over a slower network connection at a lower bitrate. NOTE: not available on non-AMD GPUs. Performance cost is higher on HDR streams. Default: true.
  • -HQUpscale [true|false] - specifies whether the high-quality video upscaler is enabled. The high-quality upscaler produces a sharper, more detailed image when upscaling from a lower-resolution video stream to a higher-resolution display compared to simpler bilinear or bicubic scalers. Enable it when the hardware is capable of maintaining the desired frame rate at the desired stream resolution and when latency (responsiveness) is less important than image quality. NOTE: not available on non-AMD GPUs. Performance cost is higher on HDR streams. Default: true.

Mouse-related Parameters:

  • -RelativeMouse [true|false] - specifies whether mouse movements are sent to the server as relative "steps" or as absolute screen coordinates. In the absolute mode the mouse movements are bound by the client's window. This limitation isn't present in the relative mode; however, some temporary cursor desynchronization might occur when streaming over higher-latency networks, which can manifest itself as cursor lag or rubber-banding. The absolute mode is preferred in VDI applications, while for gaming the relative mode is a better choice: games often use the mouse to move the camera view (panning/dolly) with the cursor hidden, and in absolute mode the range of camera movements would be limited by the screen boundaries. Default: true (relative).
  • -ShowCursor [true|false] - specifies whether the mouse cursor needs to be rendered by the client (true) or is embedded in the video stream (false). Default: true.

Miscellaneous Parameters:

  • -LogFile <file path> - specifies the path to the log file. Default: RemoteDesktopServer.log located in the executable's directory.

Running the Client Sample on Linux

While the Simple Streaming Client sample can run on either Wayland or Xorg, security restrictions in Wayland impose a few hard-to-work-around limitations, which result in mouse cursor flicker while the mouse on the client or on the server is being moved. This is an unfortunate limitation for which we have not found a better solution that would not incur an additional round-trip latency penalty. This limitation, however, is not present in Xorg, therefore we recommend using the Xorg server instead of Wayland when mouse pointer flicker is objectionable.

NOTE: Advanced post-processing, such as compression artifact removal/denoising and high-quality upscaling, is not available on non-AMD platforms. Run the Simple Streaming Client sample with the -HQUpscale false and -VideoDenoiser false command line options.

Running the Client Sample on a non-AMD GPU on Windows

To run the client sample on a non-AMD GPU you will need to include a special lite version of AMF with your client installation. It consists of two DLLs, amfrt64.dll and amflte64.dll, which contain the implementation of a video decoder and a color space converter that can run without the AMD graphics driver. These DLLs are located in the prebuilt/Windows directory. Place them next to the client executable, and install them only on systems without an AMD GPU.

Running the Client Sample on a non-AMD GPU on Linux

Interactive Streaming SDK includes a 'Lite' version of AMF in the prebuilt/Linux/x86_64 directory in the form of a prebuilt shared library called libamfrtlt64.so.1.4.*. The last number in the file name might change as new versions of AMF are released. Include this shared library with your client application and place a symlink to this shared library named libamfrtlt64.so.1 next to your client executable.
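For example, assuming the library has already been copied next to the client executable, the symlink could be created like this (the exact version suffix depends on the SDK release you are using):

```sh
# Create the libamfrtlt64.so.1 symlink expected by the client executable
ln -s libamfrtlt64.so.1.4.* libamfrtlt64.so.1
```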

On Intel GPUs you also need to explicitly enable hardware-accelerated video decoding by setting the ANV_VIDEO_DECODE environment variable to 1. It is a good idea to add the export ANV_VIDEO_DECODE=1 line to your ~/.profile.
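For example, to make the setting persistent across logins:

```sh
# Enable hardware-accelerated video decoding on Intel GPUs (add to ~/.profile)
export ANV_VIDEO_DECODE=1
```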

Running the Client Sample on Android

Android imposes certain limitations on the Simple Streaming Client compared to the Windows and Linux versions. As a result, not all functionality described above is available or applicable on Android:

  • Video codec support depends on the Android device hardware. Not all devices provide hardware acceleration for AV1. While software decoding may be possible on some devices, it is not recommended for low-latency streaming, as it can introduce significant latency and may not sustain high frame rates at higher resolutions.
  • Advanced video post-processing features, such as the high-quality upscaler and compression artifact removal/denoising, are not available on Android.
  • Relative mouse mode is not supported due to Android API limitations. Mouse movement is therefore constrained by the Android device display boundaries.
  • Server-initiated mouse cursor movements are not reflected on the client, which can cause issues with games that reposition the mouse cursor automatically or when accessibility features (for example, snapping the mouse cursor to the default button in modal dialogs) are enabled on the server.
  • Mouse cursor flicker can occur on Android versions prior to Android 15 when the cursor changes on the server side. The default Android cursor may appear momentarily during cursor transitions. This is a known Android OS bug that has been fixed starting with Android 15 (Android V, API level 35).
  • Certain key combinations (for example, Alt+Tab) are intercepted by the Android OS and are not delivered to the application, so they cannot be forwarded to the server.
  • Xbox controllers over Bluetooth are supported and can be used as input devices on the Android client.
  • Touchscreen input is supported, allowing direct interaction with the streamed content via touch gestures.

Code Overview

This section provides a high-level overview of the Simple Streaming Client sample's code. The sample's source code is located in the samples/SimpleStreamingClient/ directory. This overview is not meant to be a reference; it is designed to provide guidance as you navigate the source code and to help you understand the most important and not-so-obvious aspects of the sample. We suggest that you follow the source code while reading this section.

Application Classes:

The client application is implemented in the SimpleStreamingClient class. This is a base class which contains all platform-agnostic functionality. All Windows-specific functionality is implemented in the SimpleStreamingClientWin class derived from SimpleStreamingClient. This approach allows for cleaner code that is not polluted with platform-specific #ifdef statements.

These classes are responsible for the application initialization and termination, command line parsing and starting the video and audio decoding pipelines and the network client.

The SimpleStreamingClient class implements the ssdk::transport_common::ClientTransport::ConnectionManagerCallback interface, which receives notifications when a server is discovered or when a connection with a server is established or terminated.
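A minimal sketch of how an application class might implement this callback is shown below. Only OnServerDiscovered() and its ServerDescriptor parameter are named in this document; the connection lifecycle method names and all signatures are assumptions for illustration:

```cpp
// Sketch only: apart from OnServerDiscovered(), the method names and
// signatures below are assumptions, not the actual SDK interface.
class SimpleStreamingClient :
    public ssdk::transport_common::ClientTransport::ConnectionManagerCallback
{
public:
    // Invoked once per discovered server/protocol pair (see the Network
    // Transport section below)
    void OnServerDiscovered(
        const ssdk::transport_common::ClientTransport::ServerDescriptor& desc)
    {
        // Inspect the server name, description, URL and stream list carried
        // by desc and decide whether to connect
    }

    void OnConnectionEstablished() { /* start the decoding pipelines */ }
    void OnConnectionTerminated()  { /* tear the pipelines down */ }
};
```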

Network Transport

The Simple Streaming Client sample uses the AMD Network Transport protocol to communicate with the server. The client component of the AMD Network Transport is implemented by the ssdk::transport_amd::ClientTransportImpl class, derived from the ssdk::transport_common::ClientTransport class. The implementation of the ssdk::transport_amd::ClientTransportImpl class is located in ssdk/transports/transport_amd/ClientTransportImpl.*. For more information about the AMD Network Transport please refer to the AMD Network Transport section.

If you wish to replace the AMD Transport protocol with another protocol, derive your own implementation class from ssdk::transport_common::ClientTransport and instantiate it instead of ssdk::transport_amd::ClientTransportImpl. Follow the instructions outlined in the Implementing Custom Protocols section.

The ClientTransport object must first be initialized by calling the ssdk::transport_common::ClientTransport::Start() method. The Start() method requires a transport-specific parameter block defined by the ssdk::transport_common::ClientTransport::ClientInitParameters class. This is a base class; each implementation of a transport must derive its own parameters class from it. The AMD Network Transport defines the ssdk::transport_amd::ClientTransportImpl::ClientInitParametersAMD class.
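A minimal initialization sketch, assuming the AMD Network Transport. SetDiscoveryPort() is documented below, but the exact Start() signature, return type and error handling are assumptions:

```cpp
// Sketch only: the exact Start() signature and return type are assumptions.
ssdk::transport_amd::ClientTransportImpl transport;
ssdk::transport_amd::ClientTransportImpl::ClientInitParametersAMD initParams;

initParams.SetDiscoveryPort(1235); // UDP port for discovery broadcasts

transport.Start(initParams);       // error handling omitted for brevity
```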

A client can connect to a server directly by passing a server URL to the ssdk::transport_common::ClientTransport::Connect() method. For the AMD Network Transport the URL must have the following format:

[protocol://]<server IP address>[:port]

where protocol can be either "UDP" or "TCP", with UDP being the default.

If you choose to implement your own transport, the URL format would be defined by your implementation.

When using the AMD Network Transport, the client can automatically discover the servers on the IP subnet by sending a UDP broadcast requesting a response from the servers. The discovery process is initiated by calling the ssdk::transport_common::ClientTransport::FindServers() method. The UDP port the broadcast is sent to is configurable by calling the ssdk::transport_amd::ClientTransportImpl::ClientInitParametersAMD::SetDiscoveryPort() method. When a server responds, the Client Transport object calls the ssdk::transport_common::ClientTransport::ConnectionManagerCallback::OnServerDiscovered() method which receives a ServerDescriptor as a parameter. The ssdk::transport_common::ClientTransport::ServerDescriptor class contains a server name, description, a URL it can be reached at and a list of supported video and audio streams. When a server supports multiple protocols, for example, a TCP- and a UDP-based AMD Transport, the OnServerDiscovered() callback will be called multiple times for the same server, once for each protocol.
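Put together, and continuing the initialization sketch above, the discovery flow might look like the following fragment; the class and method names come from this document, but the signatures are assumptions:

```cpp
// Sketch only: signatures are assumptions; the call sequence follows the
// description above.
transport.FindServers();   // send a UDP discovery broadcast to the subnet

// For each responding server/protocol pair, OnServerDiscovered() is called
// with a ServerDescriptor carrying the name, description, URL and stream
// list. A client can also bypass discovery and connect directly by URL:
transport.Connect("UDP://192.168.0.100:1235");
```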

NOTE: The server discovery process described above is specific to the AMD Network Transport. Other transport implementations may support other discovery mechanisms, such as, for example, a directory server. In such cases a call to FindServers() would result in a request being sent to the directory server, which would respond with a list of available servers. The server discovery mechanism is specific to the transport, however the ClientTransport API has been designed to be agnostic of the underlying implementation.

Video and Audio Streams

Each server can stream one or more video and/or audio streams. There are many reasons why multiple streams can be desirable. For example, a server could stream the same video content using various codecs, resolutions, bitrates or frame rates. Alternatively, a server might have multiple displays connected to it (physical or virtual) and capture different streams from different displays. To receive a video or an audio stream, a client must subscribe to it. This can be achieved by calling the ssdk::transport_common::ClientTransport::SubscribeToVideoStream() and ssdk::transport_common::ClientTransport::SubscribeToAudioStream() methods. Likewise, streams can be unsubscribed from by calling ssdk::transport_common::ClientTransport::UnsubscribeFromVideoStream() and ssdk::transport_common::ClientTransport::UnsubscribeFromAudioStream().
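For example (a sketch only: the method names are taken from the text above, but the stream-identifier parameters are assumptions):

```cpp
// Sketch only: the stream ID parameters are assumptions.
transport.SubscribeToVideoStream(videoStreamID);
transport.SubscribeToAudioStream(audioStreamID);

// ... receive the streams, then stop when they are no longer needed:
transport.UnsubscribeFromVideoStream(videoStreamID);
transport.UnsubscribeFromAudioStream(audioStreamID);
```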

Video and Audio Pipelines

Each video and audio stream is preceded by an initialization block, which contains information about various stream parameters: codec, resolution, frame rate and so on for video; codec, sampling rate and channel layout for audio. It can also contain an optional binary blob generated by the encoder on the server and used for decoder initialization on the client. Video is streamed as a sequence of frames and audio (compressed or uncompressed) is streamed as a sequence of buffers. Every time an initialization block, a video frame or an audio buffer is received by the client, a call to the corresponding method of the ssdk::transport_common::ClientTransport::VideoReceiverCallback or the ssdk::transport_common::ClientTransport::AudioReceiverCallback is triggered. These callbacks are implemented by the ssdk::video::VideoDispatcher and the ssdk::audio::AudioDispatcher classes respectively. Their purpose is to distribute initialization blocks, video frames and audio buffers belonging to different streams to their respective decoders and pipelines.

Once streams have been separated from each other by a Dispatcher, they are fed to an Input object. The purpose of an input is to unwrap video frames or audio buffers and convert them to an uncompressed stream. For compressed streams an Input object incorporates a decoder. A monoscopic video input used by the Simple Streaming Client sample is implemented in the ssdk::video::MonoscopicVideoInput class located in ssdk/video/MonoscopicVideoInput.*. An audio input is implemented in the ssdk::audio::AudioInput class located in ssdk/audio/AudioInput.*.

On Windows and Linux, post-processing pipelines have their input data (uncompressed video frames or audio buffers) pushed by their corresponding inputs. They implement the Consumer interface defined in the corresponding Input class. A post-processing pipeline consists of a sequence of amf::AMFComponent objects. A pipeline pushes a video frame or an audio buffer through each of these components. A commonly used video processing pipeline is implemented by the base class ssdk::video::VideoReceiverPipeline and its Windows/Linux-specific derived class ssdk::video::VideoReceiverPipelinePC (see Implementing Video Receiver Pipelines). Likewise, an audio processing pipeline is implemented by the ssdk::audio::AudioReceiverPipeline class.
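The push model can be illustrated with the following simplified sketch. SubmitInput() and QueryOutput() are standard amf::AMFComponent methods, but the surrounding loop and the m_Pipeline container (assumed to hold amf::AMFComponentPtr objects) are illustrative only; real code must also handle AMF_INPUT_FULL, AMF_REPEAT and asynchronous draining:

```cpp
// Simplified sketch of pushing one frame through a pipeline of components.
void PushThroughPipeline(amf::AMFDataPtr data)
{
    for (amf::AMFComponentPtr& component : m_Pipeline)
    {
        if (component->SubmitInput(data) != AMF_OK)
            return;          // real code would retry on AMF_INPUT_FULL
        amf::AMFDataPtr output;
        if (component->QueryOutput(&output) != AMF_OK || output == nullptr)
            return;          // the component needs more input
        data = output;       // feed the result to the next stage
    }
    // 'data' now carries the post-processed frame for the presenter
}
```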

On Android there is no separate video presenter. Instead, the video decoder outputs directly to a platform-specific view surface, so there is no traditional multi-stage video post-processing pipeline. The decoder component still implements the QueryOutput() method, but it only returns an amf::AMFPropertyStorage object containing frame metadata (for example, presentation timestamps) used primarily for audio/video synchronization. Android-specific receive-side video handling is implemented by ssdk::video::VideoReceiverAndroid located in the sdk/video/android directory.

On Windows and Linux, a receiver pipeline provides input to a presenter, which is used as the final sink. Streaming SDK uses sample video and audio presenters from public AMF samples.

Synchronizing Video and Audio

Since video and audio streams are sent across the network in separate messages without buffering to achieve low latency, they may experience delays while in transit and these delays may result in video and audio getting out of sync on the client. While video can be fast-forwarded after the network congestion has cleared by reducing the wait between frames, audio cannot as its playback speed is determined by the sampling rate. When audio has been delayed, one of the ways to accelerate it to catch up with the video is to skip one or more decoded buffers and not play them. This is often acceptable since without buffering a delay of an audio buffer would result in an audible gap anyway. The synchronization between video and audio is performed by the ssdk::util::AVSynchronizer object inserted between the outputs of the video and audio receiver pipelines and the video and audio presenters. The implementation of the ssdk::util::AVSynchronizer class is located in the sdk/util/AVSynchronizer.* files.
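The actual implementation lives in the files named above; the fragment below only illustrates the catch-up idea with hypothetical names and thresholds:

```cpp
// Illustration only: the names and the threshold are hypothetical, not the
// actual ssdk::util::AVSynchronizer code.
void OnAudioBuffer(amf::AMFAudioBufferPtr buffer)
{
    amf_pts audioPts = buffer->GetPts();
    // When audio falls too far behind the video clock, skip the buffer
    // instead of playing it: a delayed buffer would have caused an audible
    // gap anyway, and dropping it lets audio catch up with video.
    if (m_LastVideoPts - audioPts > m_MaxAudioLag)
        return;                         // drop the late buffer
    m_AudioPresenter->Submit(buffer);   // hypothetical presenter call
}
```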

Handling User Input

User input handling in the Simple Streaming Client is structured differently on Windows/Linux and Android, but follows the same overall architectural principles.

On Windows and Linux, input events are handled directly in the C++ Application class (for example, SimpleStreamingClientWin). The application receives input events from the OS and forwards them to the ssdk::ctls::ControllerManager class, which is responsible for constructing controller events and passing them to the Network Transport layer. All logic is implemented in C++, and the platform-specific windowing/input layer simply calls into the C++ application code.
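A sketch of this flow, with hypothetical method names (ControllerManager's actual interface is not shown in this document):

```cpp
// Sketch only: ProcessMouseEvent() is a hypothetical stand-in for the real
// ssdk::ctls::ControllerManager interface.
void SimpleStreamingClientWin::OnOSMouseMove(int dx, int dy)
{
    // Translate the OS input event into a controller event and hand it to
    // the ControllerManager, which forwards it to the network transport.
    m_pControllerManager->ProcessMouseEvent(dx, dy);
}
```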

On Android, input events are initially delivered in Java to the activity. Historically, Android only supported input handling in Java; native input APIs were added to the NDK starting with API level 31. To maximize reuse of the Streaming SDK C++ code and minimize platform-specific Java logic (which is considered mostly application-specific), the sample deliberately keeps the Java layer as thin as possible:

  • The Android activity implements the relevant input-handling interfaces and receives input events from the Android framework.
  • The activity immediately forwards these events to the C++ Application class via JNI calls.
  • The native side of these JNI methods is implemented in StreamingActivity.cpp, which contains the C++ implementations of the native methods of the StreamingActivity Java class. These methods simply adapt the Java-level events and forward them to the C++ Application class and then on to ssdk::ctls::ControllerManager (see the sketch after this list).
  • The ControllerManager and related controller classes on Android follow the same design strategy as their counterparts on Windows and Linux.
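A minimal sketch of what one such native binding might look like; the Java package, method name and parameters are hypothetical, and the real bindings live in StreamingActivity.cpp:

```cpp
// Sketch only: the package, method name and parameters are hypothetical.
#include <jni.h>

extern "C" JNIEXPORT void JNICALL
Java_com_example_streaming_StreamingActivity_onMouseEvent(
    JNIEnv* env, jobject thiz, jfloat x, jfloat y, jint buttons)
{
    // Adapt the Java-level event and forward it to the shared C++ application
    // code, which passes it on to ssdk::ctls::ControllerManager.
    GetApplication()->OnMouseEvent(x, y, buttons); // hypothetical accessor
}
```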

This design was chosen intentionally to keep the overall application architecture for Windows, Linux, and Android as similar as possible, with the majority of reusable logic implemented in C++, and the Java layer confined to receiving input events from the Android OS and passing them down to the shared C++ code.

Mouse Handling on Android: Limitations

Mouse support on Android differs significantly from Windows and Linux, and these differences directly affect how desktop-style applications and games behave when streamed:

  • Absolute mouse movements only: Android delivers mouse position as absolute screen coordinates. The Simple Streaming Client on Android therefore sends absolute coordinates to the server; there is no support for true relative mouse mode (i.e., "delta-only" movement as used by many games to rotate the camera).

    • Impact on desktop applications: Many desktop applications (including the OS shell) programmatically reposition the mouse cursor, for example when opening context menus, tooltips, or when accessibility features are enabled (such as snapping the cursor to default buttons in dialogs). Because the Android client cannot reliably reflect server-initiated cursor repositioning on the device, the cursor on the client can appear "stuck" or out of sync with where the server expects it to be. For desktop use, it is recommended to disable accessibility features that automatically reposition the cursor on the server to reduce such inconsistencies.
    • Impact on games using mouse for camera rotation: First-person and third-person games typically rely on relative mouse movement with the cursor hidden and unconstrained. With absolute-only mouse input on Android:
      • The mouse is confined to the device's screen bounds and cannot be freely recentered.
      • Continuous camera rotation can feel limited or jittery, as the cursor cannot move beyond the screen edges.
      • Games that rely heavily on relative input for camera control may therefore offer a degraded experience.

    For these games, a better user experience can often be achieved by using an Xbox controller or another joystick/gamepad connected to the client instead of relying on mouse input.

  • Cursor flicker on Android 12-14: On Android versions prior to Android 15 (Android V, API level 35), there is a platform bug where changing the cursor shape on the server side (for example, from an arrow to a resize cursor) can cause the default Android cursor to appear briefly, resulting in visible cursor flicker. This is a known Android OS issue that has been fixed starting with Android 15, where cursor changes are handled correctly without intermittent flicker.

Keyboard Handling on Android: Limitations

Keyboard handling on Android is also different from Windows and Linux. Certain key combinations are reserved by the Android OS and are intercepted at the framework level before they reach the application. As a result, they cannot be forwarded by the Simple Streaming Client to the remote server.

Examples of key combinations commonly reserved for system use include:

  • Alt+Tab - used by Android to switch between activities or recent apps. The framework consumes this combination to handle task switching, so the key events are not delivered to the client application.
  • Meta (Windows key) / Search combinations - keys such as the Meta/Windows key or Search key (often mapped to a dedicated hardware or soft key) are used to open the launcher, app search, or assistant experiences. When pressed alone or in certain combinations (for example, Meta + another key), these are handled by the system UI and not passed through to the app.
  • System navigation keys - keys or key combinations mapped to Home, Back, and Recent Apps (for example, hardware keys or dedicated keyboard keys on some devices) are interpreted as navigation commands. Android consumes these events to navigate between activities and the home screen.
  • Volume keys - Volume Up and Volume Down are typically routed to the system to control device volume and may not be reliably delivered to applications as standard key events, depending on device and OEM behavior.
  • Power / Screen lock - the Power key and combinations involving it (such as Power + Volume) are reserved for system actions like locking the screen, taking screenshots, or displaying power menus and are not available to applications.

Because these key combinations are handled at the OS level, the Simple Streaming Client running on Android cannot see them and therefore cannot forward them to the remote desktop or game running on the server. This behavior is by design in Android, to ensure that global navigation and system actions remain responsive and consistent across applications.
