V4L2 Brainstorming meeting notes (day 2)

For video device nodes, the VIDIOC_[GS]_CROP ioctls are too limited. We need separate ioctls for the crop and compose operations: VIDIOC_[GS]_EXTCROP is proposed for cropping, and VIDIOC_[GS]_COMPOSE for composing.
Sub-pixel resolution in the new crop ioctl is a workaround to support hardware that performs crop-scale-clip-compose operations.
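As a purely illustrative sketch of how sub-pixel coordinates could be carried (neither the struct nor the ioctl number below exist anywhere; 16.16 fixed point is just one plausible encoding):

#include <linux/videodev2.h>

/* Invented sketch: a crop rectangle with sub-pixel resolution,
 * expressed as 16.16 fixed-point values.  The struct layout and the
 * ioctl number are made up for illustration. */
struct v4l2_ext_crop {
	__u32 type;		/* enum v4l2_buf_type */
	__s32 left, top;	/* 16.16 fixed point */
	__u32 width, height;	/* 16.16 fixed point */
	__u32 reserved[4];
};

#define VIDIOC_S_EXTCROP _IOWR('V', 0xf0, struct v4l2_ext_crop)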

Action: Samsung Poland R&D will send an RFC.

For subdevs, several use cases need to be taken into account.
Adding a region of interest to the videobuf structure is not possible, as there is no spare field available.

TODO: We need to set horizontal/vertical blanking from userspace. S_FMT on the output pad could be used for that, but we need to check whether this will cause issues (as blanking is "applied" when the image is captured, not after binning/skipping/scaling for instance - to be verified).

Another factor affecting the framerate is the pixel clock (some sensors have PLLs). Clocking information is relatively easily available in the kernel using the clock API, but is more complex to handle in userspace. Userspace also doesn't know what capabilities sensors have to modify the frequency.

Moving the framerate calculation into userspace has the advantage of avoiding calculations in the driver, not implementing any policies in the kernel, and preserving complete flexibility. But userspace applications will have to take into account advanced issues, like the fact that on some sensors one of the blanking settings (horizontal?) also affects the exposure.
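A minimal sketch of the userspace side of that calculation, assuming the usual model where one frame takes (width + hblank) * (height + vblank) pixel clock cycles; all numbers and variable names below are made up:

#include <stdio.h>

int main(void)
{
	unsigned long pixel_clock = 38400000;	/* Hz, from the sensor PLL */
	unsigned int width = 1280, hblank = 320;	/* active width + horizontal blanking */
	unsigned int height = 720, vblank = 80;	/* active height + vertical blanking */

	double fps = (double)pixel_clock /
		     ((double)(width + hblank) * (height + vblank));

	printf("%.2f frames/s\n", fps);	/* 30.00 with these numbers */
	return 0;
}

Note that this simple model is exactly what breaks down on sensors where a blanking setting also affects the exposure.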

There's also an opinion that the G/S_PARM ioctl() has to die ;)

Those sensors have a pixel array that usually includes black borders, and a default active pixel array which is a subset of the whole pixel array. The S_CROP ioctl is then used to select the area of interest (which defaults to the default active pixel array). The S_FMT ioctl configures the format to be output by the sensor, but the width and height values are ignored and filled in by the driver with the size of the area of interest previously configured through S_CROP.

The sensor pixel array size, and the default active pixel array location and size, need to be reported to userspace. As sensors may need to provide more advanced information (such as shape information for cross-shape sensors), a CROPCAP ioctl would be too limited, and a new SENSORCAP (or similarly named) ioctl is needed.
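As a sketch of the kind of reply such an ioctl could return (the struct below is invented and only packages the two items named above, with room left for future shape information):

#include <linux/videodev2.h>

/* Invented sketch of a SENSORCAP-style reply: the full pixel array
 * size plus the default active pixel area.  Shape information for
 * cross-shape sensors would need additional fields. */
struct v4l2_sensorcap {
	__u32 pixel_array_width;
	__u32 pixel_array_height;
	struct v4l2_rect active;	/* default active pixel array */
	__u32 reserved[8];
};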

We use SMIA sensors as a sample use case. The SMIA sensors implement analog crop (cropping while reading out the pixel area), binning (combining several consecutive pixels horizontally/vertically by summing their values, resulting in a rough scaling), skipping (skipping n pixels out of every n+1 horizontally/vertically, also resulting in a rough scaling), digital crop (cropping in the processing pipeline, which keeps the line length identical but increases/decreases the active pixels/blanking ratio, and is doable while streaming) and digital scaling.
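As a back-of-the-envelope model of the rough scaling that binning and skipping provide (the helper and its parameters are illustrative, not part of any API):

/* Readout size after binning and skipping on an SMIA-style sensor:
 * binning combines 'binning' consecutive pixels into one, skipping
 * drops 'skip' pixels out of every 'skip + 1'. */
static unsigned int readout_dim(unsigned int active, unsigned int binning,
				unsigned int skip)
{
	return active / (binning * (skip + 1));
}

/* e.g. readout_dim(2592, 2, 1) == 648: 2x binning plus 1-of-2 skipping */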

SMIA sensors need to expose at least two subdevs (or rather sub-subdevs). The first subdev can implement active crop, binning and skipping, and the second subdev digital crop and digital scaling.

Binning and skipping need to be controlled independently, as they can be used to implement different use cases (for instance, if binning just sums the pixels, the exposure time can be lowered to achieve a better SNR, or to maintain an acceptable exposure time in low-light conditions).

Setting the width and height through s_fmt on the output requires the subdev to propagate the information backward up to the first block that performs size-related operations. That propagation would need to go through the crop settings on the output pad, which some developers find confusing. We may consider not setting width and height with s_fmt for output pads, but instead letting it return the width and height configured through other, more direct, forward-propagation means (controls or ioctls, such as V4L2_CID_BINNING and VIDIOC_SUBDEV_S_SCALING). Propagating the pixel code backward doesn't suffer from that cropping issue, although one can argue that it shouldn't be propagated either, for consistency reasons.
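A minimal sketch of the forward-propagation alternative, assuming a V4L2_CID_BINNING control existed; the control id below is a placeholder, only the extended-control ioctl itself is existing API:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* V4L2_CID_BINNING is only proposed in these notes; this id is a
 * placeholder, not a real control. */
#define V4L2_CID_BINNING (V4L2_CID_PRIVATE_BASE + 0)

static int set_binning(int subdev_fd, int factor)
{
	struct v4l2_ext_control ctrl;
	struct v4l2_ext_controls ctrls;

	memset(&ctrl, 0, sizeof(ctrl));
	memset(&ctrls, 0, sizeof(ctrls));
	ctrl.id = V4L2_CID_BINNING;
	ctrl.value = factor;		/* e.g. 2 for 2x2 binning */
	ctrls.count = 1;
	ctrls.controls = &ctrl;

	/* Configure binning directly on the subdev instead of
	 * deriving it from an S_FMT size on the output pad. */
	return ioctl(subdev_fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}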

Details of how to configure binning, skipping and scaling remain to be decided (this includes whether we decide to propagate the S_FMT size backward).

This remains to be discussed.

Perhaps make clear in the HDMI controls when a control represents a status.

DV_TX_DVI_HDMI_MODE: make it DV_TX_MODE to select between different transmit modes (HDMI, DVI, perhaps DisplayPort-specific modes as well).

DV_RX_5V: too specific. RX_TX_POWER? RX_TXSENSE?

API: should be on subdev level.

(Secondary problem: how to provide support for generic apps on a SoC? Use plugins in libv4l to provide support for those?)

Hook connectors to input pads, add VIDIOC_S_MUX or something similar to control internal mux. Note: connectors can have multiple output pads (useful for DVI-I).
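Purely as an illustration of the mux idea (nothing below exists; the struct layout and ioctl number are invented):

#include <linux/types.h>
#include <linux/ioctl.h>

/* Invented sketch: select which input pad (and thus which
 * connector) is routed to the mux output. */
struct v4l2_subdev_mux {
	__u32 pad;		/* input pad to route to the output */
	__u32 reserved[7];
};

#define VIDIOC_S_MUX _IOW('V', 0xf1, struct v4l2_subdev_mux)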

New control type for bitmasks.
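One plausible userspace view of such a type, assuming the semantics that 'maximum' holds the mask of valid bits and 'minimum' is 0 (an assumption; the type does not exist yet and the numeric value below is a placeholder):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#ifndef V4L2_CTRL_TYPE_BITMASK
#define V4L2_CTRL_TYPE_BITMASK 8	/* placeholder; type not merged yet */
#endif

/* Returns the assumed valid-bit mask of a bitmask control, or 0 on
 * error. */
static __u32 query_bitmask(int fd, __u32 ctrl_id)
{
	struct v4l2_queryctrl qc;

	memset(&qc, 0, sizeof(qc));
	qc.id = ctrl_id;
	if (ioctl(fd, VIDIOC_QUERYCTRL, &qc) < 0 ||
	    qc.type != V4L2_CTRL_TYPE_BITMASK)
		return 0;
	return (__u32)qc.maximum;
}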

Requirements:

Produced by the sensor, associated with a frame (SMIA++ metadata consists of register values). Variable size. CSI2: metadata goes over a separate channel; parallel: in the top lines. The format is usually hardware specific.

Histogram/statistics: you don't want to associate them with the buffer, since you want to get them as soon as possible; they are available before the plane transmission ends, allowing you to set up the settings for the next frame. Note: this works because it is out-of-band data (at least for the OMAP3).

The OMAP3 ISP driver provides statistics to userspace through custom ioctls on the statistics subdev nodes. It might make sense to add "video" nodes at the output of the statistics subdevs instead. A new type of node would then need to be created, and statistics-related ioctls and controls standardized.

We should be able to support metadata in a plane. This is hardware specific, so drivers can choose whatever mechanism works best (e.g. metadata in a plane, controls, read from a subdev node, etc.).


'viewer mode' vs 'snapshot mode'. Currently: stop streaming, start streaming with new HQ settings/buffer sizes. Time consuming.

Want to preallocate buffers of both sizes.

OMAP3: per-filehandle bufferqueue.

Sensors can have viewer-mode vs snapshot register sets: switching from one to the other just means switching to the other set. Switching could also be triggered by a trigger pin, and can also be hooked up to a flash strobe.

Flash can also be a separate device or part of the bridge.

Some sensors have even more modes (some can remember the last X snapshots and transmit those in 'monitor' mode), so don't limit the number of modes up front.


'Bracketing' in SMIA++: it is possible to set parameters for X frames before streaming. When streaming starts, the settings are applied frame by frame, and streaming stops after the data for the last frame has been provided. This would be a SMIA++-specific subdev ioctl, temporarily overriding e.g. the exposure control.

May need to assign configurations (e.g. exposure) to specific frames.
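One invented way such a per-frame assignment could be described (none of the names below exist; the layout only illustrates the idea):

#include <linux/types.h>

/* Invented sketch of per-frame bracketing parameters for the
 * SMIA++-specific subdev ioctl discussed above. */
struct smiapp_frame_params {
	__u32 frame;		/* frame index within the burst */
	__u32 exposure;		/* temporary override of the exposure control */
};

struct smiapp_bracketing {
	__u32 count;				/* X frames, then streaming stops */
	struct smiapp_frame_params *params;	/* one entry per frame */
};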

Flash controllers can be triggered by the sensor or vice versa. Very hardware dependent.

Flash controller: Flash, torch mode, privacy light. Detection of LED hardware errors (short circuit, over temperature, timeout, overvoltage).

Create basic flash controls. If the hardware has very unusual needs then HW-specific controls can be used in addition.