06/11/2025 | Press release | Distributed by Public on 06/11/2025 02:19
Whether you're working behind the scenes of a live broadcast or crafting cinematic visuals, understanding the technology behind professional camera systems is essential. Klaus Weber, Director of Product Marketing, answers five of the most frequently asked questions about camera technology. These expert insights offer a deeper look into topics such as depth of field, imaging systems, lens compatibility, HDR/SDR workflows, and the evolving role of 5G in wireless production.
If you're navigating the intersection of broadcast and cinematography, this is your go-to guide for technical clarity and practical application.
What is Depth of Field and What Factors Influence It?
Depth of Field (DoF) in a camera system is primarily influenced by the lens iris and the focal length, where the focal length required for a given viewing angle depends on the size of the image sensor.
A wider (more open) iris decreases DoF, and a longer focal length also reduces DoF; however, these adjustments are often limited by fixed object distances and the narrow optimal aperture range, especially in UHD systems.
In practice, sensor size plays a key role: larger sensors (e.g., the S35 imager in the LDX 180) result in a shallower DoF compared to smaller sensors (e.g., 2/3" in the LDX 110/135/150) under the same shooting conditions.
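The interaction of iris, focal length, and sensor size can be illustrated with the standard thin-lens approximation, DoF ≈ 2·N·c·u²/f² for subject distance u much larger than focal length f. The sketch below is illustrative only: the sensor widths, circle-of-confusion rule (d/1500), and shooting parameters are assumptions, not LDX specifications. It compares a 2/3" and an S35 sensor at the same framing, where the S35 needs a proportionally longer focal length for the same viewing angle.

```python
# Sketch: total depth of field via the thin-lens approximation.
# DoF ~ 2 * N * c * u^2 / f^2  (valid when subject distance u >> focal length f).
# Sensor widths and circle-of-confusion scaling are illustrative assumptions.

def depth_of_field_m(f_number: float, coc_mm: float, distance_m: float, focal_mm: float) -> float:
    """Approximate total DoF in metres for subject distance >> focal length."""
    u_mm = distance_m * 1000.0
    dof_mm = 2.0 * f_number * coc_mm * u_mm ** 2 / focal_mm ** 2
    return dof_mm / 1000.0

# For equal framing, focal length scales with sensor width.
SENSOR_WIDTH_MM = {"2/3_inch": 9.6, "s35": 24.9}              # assumed active widths
COC_MM = {k: w / 1500.0 for k, w in SENSOR_WIDTH_MM.items()}  # common d/1500 rule

base_focal = 25.0  # mm on a 2/3" sensor, assumed
for name, width in SENSOR_WIDTH_MM.items():
    focal = base_focal * width / SENSOR_WIDTH_MM["2/3_inch"]
    dof = depth_of_field_m(2.8, COC_MM[name], 4.0, focal)
    print(f"{name}: focal {focal:.1f} mm, DoF {dof:.2f} m")
```

With these assumed numbers, the S35 sensor yields a markedly shallower DoF than the 2/3" sensor at identical f-number, distance, and framing, matching the behavior described above.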
How Do Broadcast and Cinematography Lenses Differ?
Broadcast and cinematography lenses differ primarily in image size compatibility, lens mounts, and control systems.
Broadcast lenses use the B4-mount for 2/3" imagers and have built-in controls, while cinematography lenses, typically PL-mount for larger S35 or full-frame sensors, often lack integrated controls.
Optical converters that allow B4 lenses to be used on S35 cameras reduce sensitivity by about 2.5 f-stops and compromise resolution, making them unsuitable for most applications.
For live use on S35 cameras, PL-mount lenses with built-in servo controls are the recommended solution.
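To put the converter penalty in perspective: each f-stop halves the light reaching the sensor, so an n-stop loss costs a factor of 2ⁿ. A minimal sketch of that arithmetic:

```python
# Sketch: light loss from an f-stop sensitivity penalty.
# Each f-stop halves the light, so n stops cost a factor of 2**n.

def light_loss_factor(stops: float) -> float:
    """Return the multiplicative light loss for a given number of f-stops."""
    return 2.0 ** stops

print(f"2.5 stops = {light_loss_factor(2.5):.1f}x less light")  # ~5.7x
```

A roughly 5.7x light loss must be compensated with wider apertures, more lighting, or gain, which is why such converters are impractical for demanding live work.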
What Are the Advantages of a Single UHD Imager Compared to 3x HD Imagers?
A single UHD imager provides superior resolution and image sharpness compared to a three-HD-imager setup. This is especially true in UHD operation, where the single imager delivers native resolution while the three-imager system relies on up-conversion.
In HD mode, the UHD imager also benefits from green channel oversampling, resulting in better sharpness and reduced need for digital enhancement.
However, in LED wall applications, single UHD imagers are more susceptible to colored moiré artifacts due to the uneven distribution of color pixels, requiring carefully designed optical low-pass filters, whereas three-HD-imager systems typically produce less disruptive, monochrome moiré.
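The oversampling benefit in HD mode can be pictured as forming each HD pixel from a 2x2 block of UHD samples, which averages out high-frequency detail before any digital enhancement is applied. The toy example below is a simplified sketch of that downsampling step (real camera processing is considerably more sophisticated); the pixel values are arbitrary.

```python
# Sketch: UHD-to-HD oversampling as 2x2 block averaging.
# Each HD pixel is the mean of a non-overlapping 2x2 block of UHD samples,
# which suppresses aliasing before digital detail enhancement.

def bin_2x2(img):
    """Average non-overlapping 2x2 blocks of a 2D list (UHD -> HD downsample)."""
    h, w = len(img), len(img[0])
    return [
        [(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
          + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
         for c in range(w // 2)]
        for r in range(h // 2)
    ]

# Toy 4x4 "UHD" patch reduced to a 2x2 "HD" patch.
uhd_patch = [
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(uhd_patch))  # [[13.0, 23.0], [33.0, 43.0]]
```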
How Can Simultaneous HDR/SDR Live Productions Be Optimized?
For simultaneous HDR/SDR live productions, the optimal workflow combines native HDR operation, high-bit-depth look control, and built-in LUT processing for accurate HDR-to-SDR conversion.
Native operation in BT.2100 formats (HLG and PQ) ensures uncompromised 10-bit image quality, avoiding degradation from format conversions.
Applying look control within the camera's high-bit-depth processing and using 33x33x33 LUTs matching those at the production output ensures consistent SDR signals for shading, ISO feeds, and delivery.
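A 33x33x33 cube LUT maps each RGB triplet by trilinearly interpolating between the eight nearest lattice points. The sketch below shows that interpolation mechanics with an identity LUT as a stand-in; in a real workflow the cube data would be loaded from the same LUT file used at the production output, and this is not the camera's internal implementation.

```python
# Sketch: applying a 33x33x33 RGB cube LUT with trilinear interpolation.
# The identity LUT here is a placeholder for a real HDR-to-SDR conversion LUT.

def make_identity_lut(n=33):
    """Build an n^3 identity cube LUT of normalized (r, g, b) triplets."""
    step = 1.0 / (n - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(n)] for g in range(n)] for r in range(n)]

def apply_lut(rgb, lut):
    """Trilinearly interpolate one normalized RGB triplet through a cube LUT."""
    n = len(lut)
    coords = []
    for v in rgb:
        x = min(max(v, 0.0), 1.0) * (n - 1)   # clamp, scale to lattice
        i = min(int(x), n - 2)                # lower lattice index
        coords.append((i, x - i))             # index and fractional part
    (ri, rf), (gi, gf), (bi, bf) = coords
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):                         # blend the 8 surrounding corners
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf)
                     * (gf if dg else 1 - gf)
                     * (bf if db else 1 - bf))
                corner = lut[ri + dr][gi + dg][bi + db]
                for ch in range(3):
                    out[ch] += w * corner[ch]
    return tuple(out)

lut = make_identity_lut()
print(apply_lut((0.25, 0.5, 0.75), lut))  # identity LUT returns the input
```

Running the same cube in-camera and at the production output, as described above, is what keeps the SDR shading, ISO feeds, and delivery signals consistent.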
What Are the Differences Between Wireless Cameras Using RF and 5G Technologies?
Broadcast RF solutions use dedicated point-to-point transmission over reserved frequency bands and remain reliable for high-end productions, but they are complex, costly, and limited by bandwidth and regional regulations.
In contrast, broadcast 5G solutions offer more flexible and scalable wireless camera integration, enabling direct streaming into software-based production platforms and supporting remote workflows.
5G can operate over public networks, local networks, or sliced public networks, making it a versatile option for a wider range of broadcast applications.
Visit our LDX Series page to see our full range of cameras.