--- Log opened Mon Jan 09 12:37:05 2012
12:37 -!- sailus [~sailus@nblzone-211-213.nblnetworks.fi] has joined #v4l-meeting
12:37 -!- ServerMode/#v4l-meeting [+ns] by hubbard.freenode.net
12:37 [Users #v4l-meeting]
12:37 [@sailus]
12:37 -!- Irssi: #v4l-meeting: Total of 1 nicks [1 ops, 0 halfops, 0 voices, 0 normal]
12:37 -!- Channel #v4l-meeting created Mon Jan 9 12:37:06 2012
12:37 -!- Irssi: Join to #v4l-meeting was synced in 1 secs
12:37 -!- pmp [~pboettch@bny92-5-82-232-56-222.fbx.proxad.net] has joined #v4l-meeting
13:51 -!- hverkuil [~hverkuil@nat/cisco/x-qluffgwaurmfbvzl] has joined #v4l-meeting
14:01 -!- pinchartl [~pinchartl@perceval.ideasonboard.com] has joined #v4l-meeting
14:01 < pinchartl> hi
14:02 <@sailus> pinchartl: Hi!
14:02 -!- dacohen [~david@a91-153-6-202.elisa-laajakaista.fi] has joined #v4l-meeting
14:03 < pinchartl> hi David
14:03 < dacohen> hi laurent
14:03 <@sailus> dacohen: Bom dia!
14:03 < dacohen> huomenta!
14:04 <@sailus> dacohen: Do you know if Tuukka will join as well?
14:05 < dacohen> I'm trying to contact him
14:05 < dacohen> last week he said he's joining
14:06 < pinchartl> what about Marek, Sylwester, Tomasz and Kamil?
14:06 <@sailus> pinchartl: Sylwester at least is on #v4l.
14:06 -!- tuukkat [c0c69724@gateway/web/freenode/ip.192.198.151.36] has joined #v4l-meeting
14:07 < pinchartl> hi Tuukka
14:07 < tuukkat> hello everyone
14:07 < dacohen> hello Tuukka
14:07 <@sailus> tuukkat: Terve!
14:09 <@sailus> Ok, I think we can start.
14:09 < dacohen> yes
14:09 <@sailus> I called this meeting to have an online discussion on the new sensor control interface.
14:10 <@sailus> There is no specific target set for this meeting, but just to informally discuss the interface.
14:10 < pinchartl> could you please start with an agenda to list the items you want to discuss?
14:10 <@sailus> Of course if we can agree on something, all the better. :-)
14:11 <@sailus> pinchartl: Well, yes.
14:11 <@sailus> The patches are on the list now and I've gotten quite a few good comments on them as well.
14:12 <@sailus> But I'm not entirely happy with all the aspects of the interface.
14:12 <@sailus> - pixel rate --- should that really be part of v4l2_mbus_framefmt?
14:12 <@sailus> In the end we could also discuss this one:
14:13 <@sailus> - How will this work for the regular V4L2 applications? (I think the answer should be the user space library contained in libv4l.)
14:14 < pinchartl> ok
14:14 < pinchartl> if I'm not mistaken, the only pending item remaining from your patch set is the pixel rate. everything else has been reviewed and acked. is that correct?
14:15 <@sailus> pinchartl: I haven't gotten any acks yet.
14:15 <@sailus> But no-one has been complaining either. :-)
14:15 < pinchartl> ok, no formal ack, but an implied agreement
14:16 < pinchartl> at least from Sylwester and from me
14:16 < pinchartl> David, Hans, Tuukka, do you have comments on Sakari's patch set besides pixel clock handling?
14:17 < dacohen> I have one question
14:17 < tuukkat> I must admit I haven't reviewed them too much :( I read Sakari's RFC and at the moment I don't have comments
14:18 < pinchartl> ok
14:18 < pinchartl> dacohen: please go ahead
14:18 < dacohen> with this change, the sensor driver isn't ready to be used without a userspace dependency
14:18 < dacohen> so
14:19 < dacohen> is it going to be mandatory to release a v4l plugin as well for a sensor driver only?
14:19 < dacohen> it means the sensor driver is still independent of the host it was originally implemented with
14:19 <@sailus> dacohen: I'm also working on a generic pipeline setup plugin.
14:20 < pinchartl> the sensor driver is independent of the host driver, that's correct
14:20 <@sailus> Its intent is to use the V4L2 subdev/MC APIs to implement higher-level functionality suitable for regular V4L2 applications.
14:20 <@sailus> But it's nowhere near usable yet.
14:20 < pinchartl> dacohen: are you worried about this particular sensor driver requiring userspace support, or about all sensor drivers requiring userspace support?
14:20 <@sailus> There were initial suspicions about the feasibility of the approach but I haven't run into issues yet.
14:21 < dacohen> my worry is:
14:21 <@sailus> I can figure out possible paths and enumerate media bus formats.
14:21 < dacohen> if a sensor driver is implemented together with a host driver
14:21 < dacohen> then I release a plugin to configure the whole pipeline
14:22 < dacohen> if I want to use the same sensor with another host
14:22 <@sailus> dacohen: Do you now mean a user space library?
14:22 < dacohen> yes
14:22 < dacohen> how is the userspace implementation going to be defined?
14:23 <@sailus> dacohen: There are two alternative approaches to this.
14:23 < dacohen> is the plugin per device?
14:23 <@sailus> 1) Use libmediactl and libv4l2subdev to configure the pipeline based on the needed configuration.
14:24 <@sailus> 2) Use the generic pipeline configuration library. If you have a custom application using the device you probably don't want to use this one.
14:25 <@sailus> The first approach is quite usable these days; what we're missing is just a library which takes in a configuration in text form and uses libmediactl and libv4l2subdev to apply it.
14:25 <@sailus> The parsing functionality already exists.
14:26 <@sailus> I think specific plugins should most likely concentrate on offering 3A functionality and not configure the image pipe; that can already be done in a generic way.
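The text-form configuration Sakari refers to uses media-ctl's link and format syntax. As a rough illustration (the entity names and this tiny parser are invented here, not part of libmediactl), such a description splits cleanly into link and format entries:

```python
import re

# media-ctl-style pipeline description (hypothetical entity names).
# Links:   "source":pad -> "sink":pad [flags]   (1 = link enabled)
# Formats: "entity":pad [mbus-code WIDTHxHEIGHT]
CONFIG = '''
"sensor":0 -> "csi2 receiver":0 [1]
"sensor":0 [SGRBG10 2592x1944]
"csi2 receiver":1 [SGRBG10 2592x1944]
'''

LINK_RE = re.compile(r'"(.+?)":(\d+)\s*->\s*"(.+?)":(\d+)\s*\[(\d+)\]')
FMT_RE = re.compile(r'"(.+?)":(\d+)\s*\[(\S+)\s+(\d+)x(\d+)\]')

def parse(text):
    """Split a textual pipeline configuration into links and formats."""
    links, formats = [], []
    for line in text.strip().splitlines():
        m = LINK_RE.match(line)
        if m:
            src, spad, snk, kpad, flags = m.groups()
            links.append((src, int(spad), snk, int(kpad), int(flags)))
            continue
        m = FMT_RE.match(line)
        if m:
            ent, pad, code, w, h = m.groups()
            formats.append((ent, int(pad), code, int(w), int(h)))
    return links, formats

links, formats = parse(CONFIG)
```

A real library of this kind would hand the parsed entries to libmediactl (link setup) and libv4l2subdev (format setup) rather than keep them as tuples.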
14:27 < pinchartl> sailus: I think David's worry is as follows (David, please correct me if I'm wrong)
14:27 < pinchartl> ISP vendors are expected to release a kernel driver and a userspace plugin for the ISP
14:27 < pinchartl> those two components will be delivered to device vendors (either directly or through upstream, but that's irrelevant to this discussion)
14:28 -!- snawrocki [~snawrocki@217-67-201-162.itsa.net.pl] has joined #v4l-meeting
14:28 <@sailus> snawrocki: Hi!
14:28 < pinchartl> device vendors will then create devices based on the ISP
14:28 < pinchartl> and will select one or more sensors
14:29 < snawrocki> hi everybody
14:29 < pinchartl> what will then happen if the ISP userspace plugin doesn't work correctly with the sensor? who will be responsible for that? device vendors are known to screw things up, and ISP vendors don't want to support every sensor explicitly
14:29 < pinchartl> dacohen: is that your concern?
14:29 < dacohen> that's part of the problem, yes
14:30 < dacohen> the other part is:
14:30 < dacohen> with android, vendors are responsible for implementing the camera middleware
14:30 < dacohen> basically they can do whatever they want with kernel+middleware as long as it works for Android in the upper layer
14:30 < pinchartl> dacohen: please define "vendors" in this context. SoC vendors or device vendors?
14:31 < dacohen> both
14:31 < dacohen> depends on the agreement about who does the driver
14:31 < pinchartl> ok
14:31 < dacohen> SoC vendors, more likely
14:32 < tuukkat> what is the license of libmediactl/libv4l2subdev?
14:32 < dacohen> as android is a very common embedded environment to deliver new driver code to
14:32 <@sailus> tuukkat: It's LGPL 2.1+.
14:32 < dacohen> we need a well-defined way to deliver the userspace part as well
14:34 < dacohen> with android, usually when patches become public, vendors are not willing to change them anymore
14:34 < pinchartl> dacohen: ok, so your question then is what architecture we should use for android, and who should deliver which part?
14:34 < dacohen> (that's another problem which does not belong in this context)
14:34 <@sailus> dacohen: Do you wish to implement the user space library in a generic, sensor-independent way, or what was the question?
14:34 < dacohen> I'd like to have some solution we could propose to android as well
14:35 < tuukkat> sailus: a generic (sensor-independent) library would always be preferred, but it's likely not possible for all cases
14:35 < dacohen> if we can manage to bring android middleware closer to the official community's support (as the driver is not part of the middleware)
14:36 <@sailus> dacohen: Do you mean Android as it is provided by Google, or for your own use?
14:36 < dacohen> provided by google
14:36 < dacohen> basically google tells vendors the preferred way to implement things below the Android layer
14:36 < pinchartl> sailus: I think David would like to understand what a solution for android should look like, and make sure he can deliver one using the APIs and software we're creating
14:37 < dacohen> yes :)
14:37 < pinchartl> ok
14:37 <@sailus> Ack.
14:37 < pinchartl> the android camera API is in userspace. vendors (SoC + device) are responsible for implementing kernel + userspace support to offer that API
14:38 < dacohen> yes
14:38 < pinchartl> SoC vendors provide ISP support
14:38 < pinchartl> integration with a specific sensor can be performed by the ISP vendor or the device vendor
14:38 <@sailus> dacohen: Google's part of the stack uses a relatively high-level API for camera, doesn't it?
14:38 < pinchartl> ISP vendors don't want to spend time implementing support for many sensors, and device vendors are known to mess things up
14:38 < pinchartl> sailus: yes
14:38 < dacohen> yes
14:38 <@sailus> That part is provided by the vendor.
14:39 < dacohen> if SoC vendors can provide the driver + user space lib, it's the ideal situation
14:39 < pinchartl> so we need to ensure that there's a way for the SoC vendor to provide kernel + userspace support for its ISP in a way that is compatible with MC + V4L2 *and* android
14:40 <@sailus> pinchartl: Well said.
14:40 < dacohen> yes :)
14:40 < pinchartl> and at the same time minimize the amount of code that needs to be written by device vendors, because they do a bad job
14:40 -!- lyakh [~lyakh@dslb-088-076-016-103.pools.arcor-ip.net] has joined #v4l-meeting
14:40 <@sailus> lyakh: Hi!
14:40 < tuukkat> we should push for Android vendors to adopt the same user space lib as on the GNU/Linux side
14:40 < pinchartl> device vendors will prefer a solution where they don't have to implement anything. if we want SoC vendors to use MC + V4L2, we need to make it easy for them to deliver a (mostly) out-of-the-box experience to device vendors
14:40 < lyakh> hi, but it's only 12:40 GMT
14:41 < lyakh> according to Google at least, and the meeting was announced for 14:00 GMT?
14:41 <@sailus> lyakh: 14:00 GMT + 2.
14:41 < lyakh> ok
14:42 <@sailus> lyakh: Next time I'll specify it in GMT directly. :-)
14:42 < dacohen> tuukkat: I agree
14:42 < lyakh> :) np, should have read more carefully :)
14:42 <@sailus> pinchartl: I agree as well.
14:42 < dacohen> userspace is not part of the driver
14:43 < dacohen> we can't let vendors implement a new lib all the time
14:43 <@sailus> dacohen: Much of what's in those libraries is proprietary.
14:43 < dacohen> usually a malformed driver is corrected by malformed middleware, and the Android layer has a "working" driver
14:43 <@sailus> What I think we should do is provide building blocks; libraries which solve a single problem and do that well.
14:44 < tuukkat> dacohen: depends how you define the driver. If a userspace stub is required for using the driver and the stub is made specifically for the driver, it could be considered to be part of it
14:44 < dacohen> sailus: libv4l can be used, as a closed source plugin is allowed
14:44 < dacohen> tuukkat: with this new sensor interface, the sensor is not usable without the userspace part
14:45 < pinchartl> dacohen: if we take one step back, a userspace part is always needed
14:45 < pinchartl> kernel drivers offer an API
14:45 < pinchartl> and we need a userspace part to use it
14:45 < dacohen> let me correct myself
14:45 < pinchartl> with a pure V4L2 driver, the userspace part can (in theory) be generic, with no dependency on the device
14:45 < tuukkat> dacohen: true, but can a generic libv4l be implemented which could support most sensors in a generic way?
14:45 < dacohen> with this new sensor interface, the sensor is not usable with a pure V4L2 interface
14:45 < pinchartl> dacohen: that's not completely correct
14:46 < pinchartl> you *could* configure the sensor within the ISP driver
14:46 < pinchartl> making a pure V4L2 driver out of it
14:46 < pinchartl> but that's not something we want to do
14:46 < pinchartl> configuring the pipeline is use-case dependent, and use cases belong to userspace
14:46 < dacohen> but that's something you may see in android if we don't have well-defined rules for the userspace part
14:46 < pinchartl> that's why we have the MC + V4L2 subdevs API
14:47 <@sailus> I think we'll need to make the assumption that the Media controller and V4L2 subdev APIs are implemented by the drivers.
14:47 < pinchartl> we want android to use MC + V4L2 subdev
14:47 < pinchartl> and to make it happen, we need to define how android should use it
14:47 <@sailus> Otherwise the device is suitable for what we call regular V4L2 applications.
14:47 <@sailus> pinchartl: Exactly.
14:47 < dacohen> so, sensor drivers implementing the new sensor API can be used only by hosts implementing MC?
14:48 < pinchartl> dacohen: it's not really a completely new sensor API
14:48 < pinchartl> sensors have implemented the V4L2 subdev API for a long time
14:48 < pinchartl> and they can be controlled by the ISP driver
14:48 < dacohen> tuukkat: I think that's the intention (plugins should handle the device-specific code)
14:48 < pinchartl> in this case the SMIA++ driver implements several subdevs
14:48 < pinchartl> an ISP driver could still control it from inside the kernel
14:49 <@sailus> pinchartl: Exactly.
14:49 < pinchartl> so an ISP driver could control the SMIA++ driver without exposing the MC API to userspace
14:49 < pinchartl> but
14:49 <@sailus> But that's still not the way it was ever intended to be done.
14:49 < dacohen> that means policy inside the host driver
14:49 < pinchartl> regardless of whether the sensor is simple or more complex, as SMIA++, we don't want that
14:49 < pinchartl> we want kernel drivers to expose the MC + V4L2 subdev API to userspace
14:50 < pinchartl> because policies should be in userspace
14:50 < dacohen> then we need to explicitly state that rule
14:50 <@sailus> dacohen: That is the generic rule already.
14:50 < pinchartl> it is, but there's an exception to that rule
14:50 < dacohen> what?
14:50 < pinchartl> platforms where the ISP and sensors are simple
14:50 < pinchartl> in that case, a pure V4L2 driver can do the job
14:51 < pinchartl> if the hardware is so simple that there's no policy to be implemented
14:51 < dacohen> true
14:51 <@sailus> pinchartl: Then it's not really about policy.
14:51 < pinchartl> exactly
14:51 <@sailus> If there's no control you can't have policies either.
14:51 < pinchartl> my point :-)
14:51 < pinchartl> most of us work with complex ISPs
14:51 < pinchartl> but we need to remember that simple ISPs exist as well
14:51 < pinchartl> and that V4L2 is perfectly fine for those
14:52 <@sailus> The MC and V4L2 subdev interfaces were intended to control more complex devices.
14:52 < pinchartl> otherwise we will make people who develop drivers for those ISPs pretty angry
14:52 < dacohen> that's true
14:52 <@sailus> pinchartl: I agree.
14:52 <@sailus> The ISP in this case is essentially e.g. a CSI-2 receiver, I assume?
14:52 < pinchartl> for instance, yes
14:53 < pinchartl> many of those simple ISPs are supported by soc-camera
14:53 < pinchartl> and a pure V4L2 interface is perfectly fine there
14:53 <@sailus> I think that the simpler devices should also move towards the MC / V4L2 subdev interfaces.
14:53 < pinchartl> I'm all for encouraging that, but not for pushing hard
14:53 <@sailus> The issue is that the sensor driver has to provide a different kind of interface depending on what kind of host it's connected to.
14:54 < pinchartl> otherwise we'll alienate other developers
14:54 < pinchartl> sailus: why?
14:54 <@sailus> Of course there has to be a very easy way to use the simple devices, the way they used to work in the past.
14:54 < pinchartl> the sensor driver provides a V4L2 subdev API, regardless of what it is connected to
14:54 <@sailus> Let's say that you want to combine a simple VGA sensor (no binning or cropping) and a CSI-2 receiver.
14:55 <@sailus> The sensor provides no V4L2 subdev interface to user space, and the CSI-2 receiver driver commands the sensor directly, either by using pad or video ops.
14:56 <@sailus> Then you want to attach a SMIA++ sensor to the CSI-2 receiver.
14:56 <@sailus> In this case, the user also wishes to perform the pipeline configuration from user space.
14:56 < pinchartl> yes and no
14:56 < pinchartl> in that case you have two options
14:56 < snawrocki> for device tree support we will probably need a top-level device for the simple/complex ISP + sensor, so it might make sense to have simple ISPs register the media device as well
14:56 <@sailus> Also in such a case the CSI-2 receiver driver doesn't provide a V4L2 subdev node.
14:57 < pinchartl> in that case I think it will be the decision of the CSI-2 driver
14:57 < pinchartl> if the CSI-2 driver implements the MC + V4L2 subdevs API, the pipeline will be configured by userspace
14:57 <@sailus> What I was thinking was that we need a generic pipeline configuration library used by libv4l to configure such devices.
14:57 < pinchartl> if the CSI-2 driver implements a pure V4L2 API only, it will need to hardcode policies in the CSI-2 driver
14:57 < pinchartl> the first case is much better
14:58 <@sailus> pinchartl: But still the decision is made in the CSI-2 driver.
14:58 < pinchartl> sailus: yes. that's not optimal, but it can technically be done
14:58 < pinchartl> my point is that, in both cases, the sensor driver will offer the same interface
14:58 <@sailus> As there may well be users for both interfaces, we should resolve this in a way that doesn't require two different CSI-2 receiver drivers.
14:58 < pinchartl> it won't care whether it's connected to an MC ISP or a pure V4L2 ISP
14:59 < pinchartl> sure
14:59 < pinchartl> again, my point was that it doesn't make any difference on the sensor driver side
14:59 <@sailus> pinchartl: Except the V4L2 subdev device node, but that's a detail.
15:00 < pinchartl> sailus: whether subdev nodes are registered or not is a decision of the bridge driver
15:00 <@sailus> pinchartl: Ok.
15:00 < pinchartl> or rather a joint decision
15:00 < pinchartl> subdev drivers set a flag to tell whether they support subdev device nodes
15:00 < pinchartl> and host drivers register the nodes
15:01 < pinchartl> anyway
15:01 < pinchartl> we all work with complex ISPs :-)
15:02 < pinchartl> the situation on the kernel side is well understood, I believe
15:02 < pinchartl> worries come from the userspace side
15:02 < dacohen> pinchartl: correct
15:03 <@sailus> I think the problem can mostly be split in two: pipeline configuration and 3A.
15:04 <@sailus> The interface for pipeline configuration is generic, whereas the 3A solutions are typically very proprietary.
15:04 < dacohen> sailus: can we split pipeline configuration into path and resolution configuration?
15:05 <@sailus> dacohen: That's essentially how it is currently.
15:05 < pinchartl> 3A solutions involve both the ISP and the sensor. David, do you know how that's usually handled in android?
15:05 < dacohen> pinchartl: same for android
15:06 < pinchartl> dacohen: who delivers the 3A implementation in android?
15:06 < pinchartl> the ISP vendor or the device vendor?
15:06 <@sailus> My understanding is that the sensor is essentially involved only in that it's part of the 3A feedback loop.
15:06 < dacohen> pinchartl: I can't say for all cases, but it looks to me like ISP vendors
15:06 < pinchartl> I'd even talk more generically about quality tuning, not 3A
15:07 <@sailus> pinchartl, dacohen: I think so, too.
15:07 < pinchartl> quality tuning involves the sensor
15:07 < pinchartl> dacohen: so ISP vendors test their quality tuning implementation with several sensors, and device vendors are required to pick one of them?
15:08 < dacohen> pinchartl: I think that goes into internal things
15:08 < dacohen> but I could say it is a valid situation
15:08 <@sailus> pinchartl: I think it's mostly so.
15:09 < tuukkat> 3A is something that happens below google's definitions, so which vendor actually provides it could vary
15:09 < pinchartl> ok
15:10 < tuukkat> but 3A is quite tightly coupled with the ISP, since in most cases the ISP is providing the statistics for 3A
15:11 <@sailus> Would it be safe to say that vendors provide their own 3A frameworks, at least for now?
15:11 < tuukkat> yes
15:11 < dacohen> yes
15:11 < pinchartl> I expect that in most cases the ISP vendor will provide a userspace camera stack implementation including 3A
15:12 < pinchartl> and device vendors will sometimes customize that
15:12 < pinchartl> but not always
15:12 < dacohen> yes
15:12 < pinchartl> (and obviously depending on whether they have access to the source code or not)
15:13 < pinchartl> so, from a device vendor point of view, would it be a problem to use libv4l as part of the camera stack?
15:13 < pinchartl> we currently have two possible camera stacks on regular Linux platforms. one is based on libv4l, the other on gstreamer
15:13 < pinchartl> gstreamer isn't an option on android today
15:14 < dacohen> I'd like to push libv4l to google
15:14 < tuukkat> dacohen: I agree
15:14 < dacohen> or at least have it pointed out to android users
15:14 < tuukkat> pinchartl: for pipeline configuration, I think libv4l would be OK as long as we can get vendors to use MC + subdev
15:14 < dacohen> and then device/SoC vendors would decide whether to use it or not
15:15 < tuukkat> I'm a little bit concerned about the LGPL license, though. BSD would be a safer bet ;)
15:16 <@sailus> I'm not entirely certain the interface provided by libv4l is suitable as the interface for camera in those embedded applications.
15:16 < pinchartl> does android still include no (L)GPL software besides the kernel?
15:16 < tuukkat> for 3A, I'm less convinced that vendors want to adopt libv4l
15:16 <@sailus> That said, I haven't really thought about the prospect too much.
15:18 < dacohen> pinchartl: I don't know
15:18 < dacohen> tuukkat: agree
15:18 < dacohen> tuukkat: but quality tuning isn't crucial to make the driver work
15:18 < dacohen> I'd say pipeline (path + resolution) configuration is the most problematic
15:18 <@sailus> dacohen: Isn't the quality tuning only about the ISP?
15:18 <@sailus> The sensor isn't involved in that.
15:19 <@sailus> Well, at least if you have a raw bayer sensor. :-)
15:19 < pinchartl> sailus: you need to at least control the exposure time, and select blanking parameters based on that
15:19 < dacohen> the sensor may be indirectly involved
15:19 <@sailus> pinchartl: Those are sensor-independent.
15:19 < pinchartl> you might want to control the analog gain as well
15:19 <@sailus> And standardised, or at least they will be soon.
15:20 <@sailus> Yes.
15:20 <@sailus> Typically the interfaces related to image quality tuning in the ISPs are private ioctls.
15:20 <@sailus> Just look how many there are in the OMAP 3 ISP driver, for example.
15:20 < tuukkat> 3A is both ISP and sensor specific
15:20 <@sailus> There are none in any sensor driver I know of.
15:21 < pinchartl> there are some indirect dependencies. for instance on some sensors you might not want to push the analog gain as high as on other sensors, depending on the sensor noise
15:21 <@sailus> pinchartl: Good point.
15:21 < tuukkat> colour response depends on the sensor
15:21 <@sailus> But the bottom line is: the sensor interface is standardised.
15:21 < pinchartl> so 3A usually doesn't involve private sensor controls, but it still involves the sensor
15:21 < pinchartl> yes
15:22 <@sailus> It doesn't matter which sensor is there, from the interface point of view.
15:22 < pinchartl> but the 3A implementation will need to know which sensor is used, and compute 3A parameters depending on that
15:22 < pinchartl> which means that the 3A implementation isn't sensor-agnostic
15:22 <@sailus> pinchartl: That's true.
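The exposure/gain interplay touched on here can be made concrete with a small sketch. This is a hypothetical auto-exposure step, not anyone's actual 3A code: exposure is bounded by the blanking-defined frame length, and analog gain (with a per-sensor cap, as Laurent notes) only takes over once exposure is exhausted. The function name and parameters are invented for illustration:

```python
def ae_step(measured, target, exposure_lines, analog_gain,
            max_exposure_lines, max_gain, min_gain=1.0):
    """One hypothetical auto-exposure iteration.

    'measured' and 'target' are mean scene luminance values. The wanted
    exposure*gain product is met by raising exposure first (less noise);
    analog gain only takes over once exposure hits the maximum imposed
    by the frame length, i.e. by the vertical blanking setting.
    """
    if measured <= 0:
        return exposure_lines, analog_gain
    # Wanted exposure*gain product to move 'measured' towards 'target'.
    total = exposure_lines * analog_gain * (target / measured)
    exposure = min(max_exposure_lines, max(1, round(total / min_gain)))
    gain = min(max_gain, max(min_gain, total / exposure))
    return exposure, gain

# Scene twice too dark, exposure headroom available: gain stays at 1.0.
print(ae_step(50, 100, 400, 1.0, 1000, 8.0))
# Scene twice too dark, exposure capped by blanking: gain takes over.
print(ae_step(50, 100, 900, 1.0, 1000, 8.0))
```

In a real stack, `exposure` and `gain` would be written back through the standard V4L2 exposure and analogue gain controls, while the per-sensor `max_gain` policy is exactly the kind of sensor-specific knowledge a tuned 3A library carries.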
15:22 < pinchartl> it can't be written once by the ISP vendor and work optimally with any sensor
15:22 <@sailus> But the sensor driver itself is not involved in that.
15:23 < dacohen> sailus: that's not completely true
15:23 < pinchartl> so either the ISP vendor needs to support sensors explicitly, or it needs a way to let device vendors customize the 3A implementation
15:23 <@sailus> dacohen: Do you have examples of the opposite?
15:23 < tuukkat> sailus: agreed
15:23 < dacohen> sailus: yes
15:24 <@sailus> pinchartl: True.
15:24 < dacohen> besides the sensor, the module may change as well
15:24 < dacohen> I mean, it may depend on the module too
15:24 <@sailus> dacohen: True.
15:24 < dacohen> the sensor needs to be asked
15:24 <@sailus> The user space part responsible for the tuning must also be able to recognise the module.
15:24 < tuukkat> dacohen: true, but usually the API is still the same, at least for the actual sensor...
15:25 < dacohen> I still have one more example which requires the sensor driver
15:25 < dacohen> but I am not certain if I can share it
15:25 < tuukkat> the VCM (focus driver) might need to have private controls, at least now when those aren't standardized
15:26 < pinchartl> tuukkat: good point
15:27 < pinchartl> to summarize this, I think that once again we have no issue in the kernel-userspace API, but we need to ensure that our solution allows customization of the userspace code for a particular sensor model
15:27 < tuukkat> it's definitely possible that a sensor might require its own private controls, but I'd say it's not common
15:27 < dacohen> pinchartl: yes
15:27 < tuukkat> but it definitely is a situation that has to be taken into account
15:28 < pinchartl> so the problem boils down to whether (and how) we can provide an implementation based on libv4l that fulfils all those requirements
15:28 <@sailus> pinchartl: I think we rather need multiple libraries.
15:29 <@sailus> libmediactl and libv4l2subdev are some of them.
15:29 < pinchartl> sailus: please go ahead and explain what your idea is, and then let's see if it fulfils Intel's needs
15:29 <@sailus> pinchartl: Yes.
15:30 <@sailus> What I think is needed is a library which applies a pipeline configuration, both links and formats, from a text file to a device.
15:31 < pinchartl> (and please tell us when you're done with your explanation, so we won't interrupt you in the middle)
15:31 <@sailus> Device- and use-case-specific textual configurations are far easier to manage than the current configurations, which include sensor modes and user space programs.
15:32 <@sailus> The decisions related to the pipeline are policy decisions, so they should be explicitly specified.
15:32 <@sailus> It's not possible to come up with those configurations automatically.
15:33 <@sailus> (For regular applications in the general case, that can be done by the automatic pipeline configuration library, but the result cannot be expected to be optimal in all cases.)
15:33 <@sailus> The above configuration will include some controls on V4L2 subdevs, but that's mostly a minor addition to the pipeline configuration.
15:34 <@sailus> That should resolve the pipeline configuration, with the possible exception of digital zoom support.
15:35 <@sailus> If the digital zoom can be implemented using a single sensor resolution, then what is required by the rest of the user space is mostly the identity of the scaler subdev.
15:35 <@sailus> I'm done.
15:35 <@sailus> Any questions? :-)
15:35 < pinchartl> ok
15:36 < pinchartl> first of all
15:36 < pinchartl> what software component will use that?
15:36 <@sailus> If the GStreamer based stack was in question, I'd say this is the camera source component.
15:37 <@sailus> It can also be libv4l.
15:37 <@sailus> At least for regular V4L2 applications.
15:37 < pinchartl> and for android it would be the android camera implementation?
15:38 <@sailus> pinchartl: I think there would have to be a vendor library in between.
15:38 < pinchartl> what would that vendor library do?
15:38 <@sailus> At least act as a wrapper.
15:39 <@sailus> Well, I don't know the specifics of the Android camera interfaces, but I think it should provide a mapping between supported use cases and resolutions and pipeline configurations.
15:39 < pinchartl> applying a pipeline configuration from a text-based description is certainly useful as a utility
15:39 < pinchartl> but I don't think that's enough
15:39 < pinchartl> don't we want to support both android and Linux with the same code base?
15:39 < pinchartl> (for 3A for instance)
15:40 <@sailus> pinchartl: This is independent of the 3A as far as I see it.
15:40 < pinchartl> sailus: yes it is. that's why I don't think it's enough
15:40 < dacohen> pinchartl: as tuukka said, IQ may be a bit more complicated to provide
15:40 <@sailus> The 3A will need to know the identities of a few subdevs, too.
15:40 < dacohen> but pipeline configuration is the most critical
15:40 <@sailus> Let's tackle one problem at a time.
15:41 < dacohen> sailus: agreed
15:42 <@sailus> Do you think this would fulfil the needs for pipeline configuration?
15:42 < tuukkat> could that text-file-based pipeline configuration be part of libmediactl, implemented in a couple of functions?
15:43 <@sailus> tuukkat: I think it would be a separate library.
15:43 < dacohen> sailus: why?
15:43 <@sailus> libmediactl only knows about MC.
15:43 < tuukkat> I think the idea is good, as a helper function
15:44 <@sailus> I think it could be distributed in the same package.
15:44 <@sailus> Or rather it should.
15:44 < tuukkat> let's give it a name: libv4l2pipe?
15:44 <@sailus> Its functionality and purpose is so close to that.
15:44 <@sailus> tuukkat: For example.
15:44 <@sailus> The library I was working on is called libv4l2pipeauto. :-)
15:45 <@sailus> libv4l2pipetext? :-)
15:45 < tuukkat> libv4l2pipe, if not part of libmediactl, would then call functions of libmediactl
15:45 < pinchartl> sailus: libv4l2pipe sounds good
15:45 <@sailus> Yes, that library would be using both libmediactl and libv4l2subdev.
15:45 <@sailus> Just as libv4l2pipeauto does.
15:46 < tuukkat> a text config file would contain tags specifying different use cases and the pipeline setup for each, I suppose
15:46 <@sailus> tuukkat: Yes. Alternatively, different configurations could be put into different files.
15:47 <@sailus> Probably a single file is better.
15:47 <@sailus> I could imagine a pipeline configuration would be the same for multiple format configurations, for example.
15:47 < tuukkat> at least the possibility to specify different configurations in the same file, but maybe also the possibility to read several different files (as long as the tags don't conflict)
15:49 <@sailus> tuukkat: Sounds good to me.
15:49 < dacohen> maybe read all files from a specific directory
15:49 <@sailus> libmediactl creates in-memory data structures of the topology of the device.
15:49 <@sailus> Likely libv4l2pipe should do the same.
15:50 < lyakh> different files sound more flexible and probably easier to implement, so we could begin with that, and see if we really need single-file support too
15:50 <@sailus> And provide some kind of query interface to query data from those data structures.
15:50 < dacohen> lyakh: agreed
15:50 <@sailus> The user could e.g. tell libv4l2pipe which files to read.
15:51 <@sailus> The files could be, and typically would be, device dependent.
15:51 <@sailus> I'm not sure if it should be libv4l2pipe's responsibility to choose the correct one.
15:51 <@sailus> But if it isn't, then whose is it?
15:52 <@sailus> Perhaps it could include that functionality as well.
15:52 <@sailus> With an option to load a specified file.
15:52 < tuukkat> sailus: somehow I feel that libv4l2pipe would require such a close relationship with libmediactl that they would need to be merged.
15:52 <@sailus> I don't think merging is required.
15:53 < pinchartl> tuukkat: we'd like to keep V4L2 dependencies outside of libmediactl
15:53 < pinchartl> so that libmediactl can be used for other kinds of media (ALSA is an example)
15:53 <@sailus> I felt the same way when I was separating the format parsing from the media-ctl test program into libmediactl and libv4l2subdev. :-)
15:53 < tuukkat> pinchartl: ok
15:53 < tuukkat> I didn't think of anything outside video
15:54 <@sailus> tuukkat: I'm certain it can be done without adding that functionality to libmediactl.
15:55 < tuukkat> sailus: yes, it can be done, but I'm just a bit worried about code duplication; but if you say it's not a problem :)
15:55 <@sailus> tuukkat: I don't think there will be code duplication.
15:56 <@sailus> What there might be is that libmediactl may need additional functionality.
15:57 <@sailus> To summarise, I think libv4l2pipe should apply the following from a text file:
15:57 <@sailus> - link configuration
15:57 <@sailus> - format configuration
15:57 <@sailus> - setting control values
15:58 <@sailus> Link configuration should be separate from format configuration, since the same link configuration could be used in multiple format configurations.
15:58 < lyakh> sailus: then should it take proprietary plugins for hw-specific controls?
15:59 < tuukkat> hmm... something like XF86Config... in one section formats are defined, in a second section links, and a third section refers to the first two?
16:00 <@sailus> tuukkat: Sounds good.
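A sectioned file of the kind Tuukka suggests might look like the sketch below. The section and key names are invented for illustration, not an agreed format; the point is that a link section is defined once and referenced from several format/use-case sections:

```python
import configparser
import io

# Hypothetical libv4l2pipe configuration: links and formats are defined
# once, and use-case sections refer to them by name (XF86Config-style).
SAMPLE = """
[links:raw-capture]
pipeline = "sensor":0 -> "csi2":0 [1], "csi2":1 -> "dma":0 [1]

[formats:5mp]
"sensor":0 = SGRBG10 2592x1944
"csi2":1 = SGRBG10 2592x1944

[formats:vga]
"sensor":0 = SGRBG10 640x480

[usecase:still]
links = raw-capture
formats = 5mp

[usecase:viewfinder]
links = raw-capture
formats = vga
"""

# '=' only, so ':' can appear inside "entity":pad keys.
cfg = configparser.ConfigParser(delimiters=("=",))
cfg.read_file(io.StringIO(SAMPLE))

def resolve(usecase):
    """Return the (links, formats) section names a use case refers to."""
    sec = cfg["usecase:" + usecase]
    return sec["links"], sec["formats"]
```

Note how both use cases share the `raw-capture` link section while selecting different format sections, which is exactly the reuse argued for above.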
16:00 < dacohen> tuukkat: I'd prefer the second one referring to the first one 16:01 < dacohen> we can't have same resolution with different paths 16:01 < tuukkat> lyakh:in general, i think, that it could handle private hw-specific controls without private plugins 16:01 < dacohen> indeed 16:01 < dacohen> yes, we can 16:01 < dacohen> :) 16:02 < tuukkat> lyakh:a control setting is just ID_NAME= whether private or not 16:02 <@sailus> If private controls have a static value I don't see why they could not be specified in the configuration file, if they are somehow related to configuration of similar nature. 16:02 <@sailus> Albeit it might be that in that case it'd be just a numeric value instead of a human-readable name. 16:03 <@sailus> I guess it would be relatively easy to parse videodev2.h for control names and use them. 16:03 < lyakh> tuukkat: sure, I understand that, was just thinking, whether it would not make that configuration needlessly complicated, if you need several controls, and you depend on read back values? 16:03 < dacohen> sailus: strace would get the same values if private plugin is used 16:04 < tuukkat> sailus:or define the private control id names in the beginning of the text file 16:04 <@sailus> tuukkat: That's also possible. 16:04 <@sailus> My understanding is that private controls are actually deprecated and device specific controls should be added to videodev2.h. 16:05 <@sailus> That would add lots of device-specific definitions to videodev2.h, though. 16:05 < lyakh> so, what if you need G_CTRL and then perform calculations with that value? 16:05 <@sailus> pinchartl: Do you think that's the case? 16:05 < tuukkat> lyakh:in that case there would need to be device-specific code 16:05 <@sailus> lyakh: Then that must be done somewhere else. 16:06 < lyakh> sailus: where? :-) 16:06 <@sailus> Probably in a vendor-provided proprietary 3A library. 16:06 < pinchartl> sailus: do I think that what is the case ? 
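Parsing videodev2.h for control names, as sailus suggests, could indeed be a one-line regex over the #define lines. The header excerpt below is abbreviated, and a real parser would still have to resolve the V4L2_CID_BASE arithmetic to get numeric IDs; this sketch only collects the names.

```python
import re

# Abbreviated excerpt in the style of videodev2.h. Real control IDs are
# defined relative to bases such as V4L2_CID_BASE, which this sketch
# deliberately does not evaluate.
HEADER = """
#define V4L2_CID_BRIGHTNESS (V4L2_CID_BASE+0)
#define V4L2_CID_CONTRAST (V4L2_CID_BASE+1)
#define V4L2_CID_EXPOSURE (V4L2_CID_BASE+17)
"""

def control_names(header_text):
    """Collect every V4L2_CID_* name defined in the header text."""
    return re.findall(r'#define\s+(V4L2_CID_\w+)\s', header_text)

names = control_names(HEADER)
```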
16:07 < pinchartl> lyakh: controls in the pipeline configuration file will mostly be used to configure blanking 16:07 <@sailus> pinchartl: For device-specific controls. Where are they supposed to be defined? 16:07 < lyakh> pinchartl: yes, don't you think it can depend, e.g., on lighting conditions? 16:08 <@sailus> lyakh: Blanking does not. Exposure and gain do, but they are controlled by the 3A library. We'll discuss that next. 16:08 < tuukkat> blanking depends on exposure ... 16:09 < dacohen> exposure range depends on blanking 16:09 < dacohen> but a specific value 16:09 <@sailus> Maximum exposure time is limited by blanking. 16:09 < dacohen> sailus: yes :) 16:09 < tuukkat> that's another way to put it :-) in these cases libv4l2pipe should set only some initial values 16:09 < lyakh> sailus: right, so, to increase exposure you need larger blanking :-) 16:10 <@sailus> About blanking --- it might make more sense to specify frame rates at this level rather than blanking values. One might also want to specify whether to favour horizontal or vertical blanking. 16:10 < pinchartl> lyakh: blanking, along with clocks, define the frame rate. they also set the exposure maximum 16:10 < pinchartl> sailus: device specific controls can just be numeric values 16:10 < dacohen> lyakh: then you must know beforehand what exposure range you want to support 16:10 <@sailus> pinchartl: Which class should they belong to? 16:11 < pinchartl> sailus: I don't know 16:11 <@sailus> pinchartl: hverkuil would know. 16:11 < pinchartl> and I'm not sure we should support controls other than blanking and pixel rate in the pipeline configuration 16:12 <@sailus> pinchartl: I don't see a reason to do so, but I don't see a reason to prohibit that either. 16:12 <@sailus> We might also want to provide a tiny function to calculate the blanking values based on a desired frame rate. 16:13 <@sailus> A format configuration should probably include a range of possible frame rates as well. 
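The "tiny function" sailus mentions could be sketched as follows, using the standard relation frame rate = pixel_rate / ((width + hblank) * (height + vblank)). The policy of favouring vertical blanking (keeping horizontal blanking at its minimum) is one of the two options raised above; the function name and signature are invented here.

```python
def blanking_for_frame_rate(pixel_rate, width, height, fps, hblank_min):
    """Return (hblank, vblank) giving at least the requested frame rate,
    keeping horizontal blanking at its minimum, i.e. favouring vertical
    blanking. Uses:
        fps = pixel_rate / ((width + hblank) * (height + vblank))
    """
    line_length = width + hblank_min
    # Truncating the line count keeps the achieved rate >= the request.
    frame_lines = int(pixel_rate / (fps * line_length))
    vblank = frame_lines - height
    if vblank < 0:
        raise ValueError("requested frame rate too high for this pixel rate")
    return hblank_min, vblank
```

For example, a hypothetical 96 Mp/s sensor at 2592x1944 with a 160-pixel minimum horizontal blanking would need 381 lines of vertical blanking to run at 15 fps.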
16:13 <@sailus> Or they could be separate from the format configuration. 16:13 <@sailus> Then one could just say "give me 15 fps" or such. 16:14 <@sailus> Frame time or frame rate? 16:14 < pinchartl> doesn't this bring us to the next topic (pixel clock) ? 16:14 <@sailus> Not quite yet. 16:15 <@sailus> Or pixel rate? 16:15 <@sailus> I assumed it was already specified earlier. :-) 16:15 <@sailus> As part of the format configuration. 16:15 < dacohen> that's my assumption too 16:15 <@sailus> Or is your question on how that is provided to the user space? 16:16 < pinchartl> I just mean that we've been discussing for 2 hours and 15 minutes, and that we haven't tackled the first item on the agenda yet 16:16 <@sailus> :-) 16:16 < pinchartl> unless someone still wants to add something to the pipeline configuration topic, I'd like to move on 16:17 <@sailus> I'm done with it. 16:17 <@sailus> So, on pixel rate. 16:17 < pinchartl> so next topic please 16:18 <@sailus> There are two reasons to pass this information to the user space: 16:18 <@sailus> 1) To calculate detailed hardware timing information. 16:18 <@sailus> 2) To figure out whether streaming is possible, or to figure out why it failed in case it did. 16:18 <@sailus> And for kernel space: 16:18 <@sailus> 1) To configure devices. The OMAP 3 ISP CSI-2 receiver and CCDC blocks must be configured based on the pixel rate. 16:19 <@sailus> 2) Validate pipeline pixel rate for each subdev. Some subdevs require it to be within limits. A good example is the OMAP 3 ISP where most blocks have 100 Mp/s maximum whereas the CSI-2 receiver has 200 Mp/s maximum. 16:20 -!- hverkuil [~hverkuil@nat/cisco/x-qluffgwaurmfbvzl] has quit [Remote host closed the connection] 16:20 <@sailus> The current patchset puts pixel rate as part of the struct v4l2_mbus_framefmt, but there have been doubts (including myself) whether it really belongs there. 16:20 < pinchartl> what do you mean by "calculate detailed hardware timing information" ? 
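The per-subdev validation in sailus's kernel-space case 2) amounts to comparing the pipeline's pixel rate against each block's limit. A sketch, using the OMAP 3 ISP numbers quoted above (100 Mp/s for most blocks, 200 Mp/s for the CSI-2 receiver); the dict-based "pipeline" is an invented stand-in for walking the real media graph.

```python
# Per-block maximum pixel rates, in pixels per second, as given in the
# discussion for the OMAP 3 ISP.
MAX_PIXEL_RATE = {
    "OMAP3 ISP CSI2a": 200_000_000,
    "OMAP3 ISP CCDC": 100_000_000,
    "OMAP3 ISP preview": 100_000_000,
}

def validate_pipeline(pixel_rate, subdevs):
    """Return the names of the subdevs whose limit the rate exceeds;
    an empty list means the pipeline passes this (coarse) check."""
    return [name for name in subdevs if pixel_rate > MAX_PIXEL_RATE[name]]
```

So a 150 Mp/s stream would pass the CSI-2 receiver but fail the CCDC, which is exactly the kind of streamon-time diagnosis the pixel rate is meant to enable.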
16:20 <@sailus> With the pixel array size and blanking, the user has the rest of the information for this already. 16:21 <@sailus> ISPs are able to provide events at frame start and other points as well. 16:21 -!- hverkuil [~hverkuil@nat/cisco/x-icnyoqpfabfubjhb] has joined #v4l-meeting 16:21 < pinchartl> sailus: what do you mean by "calculate detailed hardware timing information" ? 16:22 <@sailus> When the sensor converts any part of the pixel array from analogue to digital form, and sends that out over the bus. 16:22 <@sailus> Also when exposure has begun on a particular area. 16:23 <@sailus> And last but not least, to calculate the frame rate. :-) 16:23 < pinchartl> how does that relate to passing the pixel rate to userspace ? 16:23 <@sailus> Uh, without this information it's not possible to calculate the above. 16:23 <@sailus> Including the frame rate. 16:24 < pinchartl> for the frame rate I agree 16:24 < pinchartl> for the rest I'm not aware of any real use case 16:24 <@sailus> (We could also implement VIDIOC_SUBDEV_G_FRAME_INTERVAL in drivers, too.) 16:25 <@sailus> If you're moving a lens, for example, you can get information on e.g. if the lens moved in your particular area of interest (the AF window). 16:25 <@sailus> This is needed by the 3A algorithm library. 16:26 <@sailus> In Harmattan the information is passed to user space using non-standard controls. 16:26 < pinchartl> can we *really* be that precise ? 16:26 <@sailus> Why not? 16:27 <@sailus> High resolution timers are available on most SoCs. 16:28 < pinchartl> we have busy loops in interrupt context in the OMAP3 ISP driver... I doubt we can achieve such a level of precision 16:28 < pinchartl> but anyway 16:28 < pinchartl> that doesn't matter 16:28 <@sailus> It's also not about precision but when you fail to accurately schedule something you have a possibility to figure out when something went wrong, instead of just having to live with the result. 
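The line-level timing sailus describes (when readout or exposure reaches a particular row, e.g. an AF window) falls out of the pixel rate directly, since consecutive rows are emitted one line period apart. A minimal sketch, assuming a simple rolling readout:

```python
def row_readout_time(row, width, hblank, pixel_rate):
    """Seconds from frame start until readout of the given row begins.
    One line period is (width + hblank) / pixel_rate; rows are read out
    one line period apart in a rolling readout."""
    return row * (width + hblank) / pixel_rate
```

With a hypothetical 80 Mp/s sensor, a 640-pixel line and 160 pixels of horizontal blanking, row 100 is read out 1 ms after frame start; the frame rate itself is just the same arithmetic carried over the whole frame including vertical blanking.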
16:28 < pinchartl> we need the pixel rate to compute h/v blanking and frame rates 16:28 <@sailus> True. 16:29 <@sailus> The alternative to adding it to v4l2_mbus_framefmt would be to provide it as a pad-specific control. 16:29 < pinchartl> do you need to set the pixel rate, or just get it ? 16:29 <@sailus> Typically this is only gettable. 16:29 <@sailus> The control in the sensor would be read-only, for example. 16:30 < pinchartl> how do we select between the different pixel rates that the sensor supports ? 16:30 <@sailus> Based on the specified link frequency, binning and scaling factors the sensor driver is able to calculate the pixel rate in the pixel array. 16:31 <@sailus> The driver knows the clock tree so it can come up with a "best" value for the pixel rate. 16:31 < pinchartl> ok, link frequency is settable 16:32 <@sailus> The requirement is that the pixel rate in the pixel array is such that after binning and scaling the pixel rate over the bus does not exceed what the bus can transfer at the chosen frequency. 16:32 <@sailus> Same goes for all bus types. 16:32 < tuukkat> i suppose pixel_rate would be the maximum instantaneous pixel rate 16:33 < tuukkat> for example, during frame blanking the rate would be really zero, but pixel_rate would be always the same 16:33 < tuukkat> and it would not be the *average* rate 16:34 < pinchartl> tuukkat: I think so 16:35 <@sailus> tuukkat, pinchartl: Exactly. 16:35 < tuukkat> i'm just not convinced that pixel_rate could be defined in a device-independent and sensible way 16:35 <@sailus> Uh, why not? 16:35 <@sailus> It's just a number. 16:35 < tuukkat> what if a block such as CCDC would contain a buffer so that the pixel rate limitation would be on the average rate, not maximum rate? 16:36 < tuukkat> then if pixel_rate would be the maximum instantaneous pixel rate, it could not be used to check whether CCDC can support that rate 16:36 <@sailus> That's a good point. 
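The constraint sailus states (after binning and scaling, the bus-side rate must not exceed what the link can carry) can be inverted to give the fastest rate the pixel array may run at. A sketch, treating binning and scaling as combined horizontal-times-vertical reduction factors (so 2x2 binning gives a factor of 4); the function name is invented:

```python
def max_array_pixel_rate(bus_rate_max, bin_factor, scale_factor):
    """Fastest pixel-array rate satisfying
        array_rate / (bin_factor * scale_factor) <= bus_rate_max
    bin_factor and scale_factor are combined (horizontal * vertical)
    reduction factors; bus_rate_max is what the bus can transfer at
    the chosen link frequency, in pixels per second."""
    return bus_rate_max * bin_factor * scale_factor
```

For example, with a bus limited to 100 Mp/s, 2x2 binning and no scaling would let the pixel array itself run at up to 400 Mp/s.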
16:37 < pinchartl> pixel rates are defined on links 16:37 <@sailus> This is getting into fine details, I think. 16:37 < pinchartl> to check whether a given pixel rate can be supported on its sink pad, a subdev needs the pixel rate value, as well as the horizontal and vertical blanking values 16:37 < tuukkat> pinchart:the only sensible way to define pixel_rate would be hardware-specific, i suppose 16:37 <@sailus> What we could do is to provide another control for the average pixel rate. 16:37 < pinchartl> no 16:38 < pinchartl> that's getting messy 16:38 <@sailus> Making any decisions based on that in the driver would be quite difficult still. 16:38 < pinchartl> pixel rate is the instant rate 16:38 <@sailus> pinchartl: I would also prefer to keep it as such. 16:39 <@sailus> Also, if the hardware fails to transfer frames, this is often caused by the fact that the pixel rate is too close to the maximum and buffer overflows happen often. 16:39 < pinchartl> if a driver needs to compute an average, it needs to do so internally based on the instant value, the format width and height, horizontal and vertical blanking 16:39 <@sailus> The result is that lots of faulty frames are produced. 16:39 < pinchartl> sailus: let's face it. in theory we can compute maximum bandwidths. in practice we can't 16:39 <@sailus> pinchartl: Agreed. 16:40 < pinchartl> even TI doesn't know what their ISP supports 16:40 < tuukkat> :D 16:40 < pinchartl> and limits are theoretical, they depend on system load 16:40 < dacohen> hehe 16:40 < lyakh> fwiw: I'm not sure, whether it has been tackled in the above discussion so far (had to dig through datasheets:-)), but there are also sensors, that take the frame rate as a parameter... 16:40 < pinchartl> I don't expect other vendors to do any better 16:40 < pinchartl> lyakh: do you have an example ? 
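pinchartl's point that a driver can derive the average internally from the instantaneous rate plus the frame geometry is a one-liner: the average rate is the instantaneous rate scaled by the ratio of active pixels to total (active plus blanking) pixels per frame.

```python
def average_pixel_rate(instant_rate, width, height, hblank, vblank):
    """Average rate over a whole frame, derived from the instantaneous
    pixel rate and the frame geometry: active pixels divided by the
    total frame time, which includes horizontal and vertical blanking."""
    active = width * height
    total = (width + hblank) * (height + vblank)
    return instant_rate * active / total
```

This keeps pixel_rate itself defined as the instantaneous rate, as agreed above, while still letting a buffered block like the CCDC check against an average.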
16:41 < tuukkat> smart sensors certainly could do that 16:41 < lyakh> pinchartl: mt9t111 / mt9t131 16:41 <@sailus> lyakh: that kind of sensors will not be controlled using this interface. 16:41 <@sailus> They will continue to implement VIDIOC_SUBDEV_S_FRAME_INTERVAL. 16:41 < lyakh> sailus: why pre-design limitations? 16:41 <@sailus> lyakh: It's a limitation of those sensors. 16:42 <@sailus> If the blanking on those sensors cannot be controlled directly there's nothing the interface can do about it. 16:42 < lyakh> sailus: it certainly can 16:43 <@sailus> I also think those sensors are very different from typical raw bayer sensors. 16:43 < pinchartl> lyakh: I've got the mt9t111 datasheet. what's the frame rate register ? 16:43 <@sailus> They include an ISP so the reason why we want to export this information to user space does not exist, nor may those sensors even be able to provide that information. 16:44 < lyakh> sailus: it uses that information additionally to optimally configure its PLL 16:44 <@sailus> s/ISP/ISP and 3A algorithms/ 16:44 < lyakh> pinchartl: look at page 20 16:44 < lyakh> "PLL and Clock Divider" 16:44 < lyakh> search for "frame rate" 16:46 < pinchartl> lyakh: I don't think we have the same version of the datasheet then 16:46 < lyakh> pinchartl: if you just search for "frame rate"? 16:46 < pinchartl> a couple of occurrences, but nothing relevant 16:47 < lyakh> pinchartl: mine is a "developer guide" 16:47 < lyakh> not just a data sheet 16:47 < pinchartl> MT9T111_DS_1.fm - Rev. D 10/07 EN 16:48 < pinchartl> but anyway that's a detail 16:48 < pinchartl> sailus: please go on 16:48 < pinchartl> I have 10 minutes left 16:48 <@sailus> Is there something else in configuring this sensor's PLL values than the actual divisors and multipliers? 16:48 < lyakh> MT9T111_DC - Rev. B 9/10 EN 16:48 <@sailus> I can't think of anything. 
16:49 <@sailus> I think the above four cases where the pixel rate is required can easily be fulfilled by implementing that as a pad-specific control. 16:49 < lyakh> sailus: unfortunately, my document refers to another one for mt9t0131 for framerate configuration details... 16:50 <@sailus> lyakh: The driver may also calculate the frame rate if it is needed by the sensor. 16:50 < pinchartl> sailus: we don't have pad-specific controls in V4L2 16:50 <@sailus> We do not. 16:50 <@sailus> We would need them. 16:50 <@sailus> I don't think this is the first occasion they have been discussed in. 16:51 <@sailus> It's just that we haven't strictly needed them previously. 16:51 < pinchartl> and h/v blanking would be pad-specific controls as well ? 16:51 <@sailus> No; just pixel rate and possibly also the link frequency. 16:52 < pinchartl> why not h/v blanking ? 16:52 <@sailus> The link frequency doesn't really have to be but the pixel rate does. 16:52 <@sailus> The pixel array subdev contains just one pixel array. 16:52 <@sailus> There's no separate pixel array for each pad. 16:53 < pinchartl> but subdevs can modify blanking, exactly as they can modify sizes 16:53 <@sailus> But regarding pixel rate, a scaler subdev's different pads can well have different pixel rates. 16:53 < pinchartl> cropping 10 pixels on the right adds 10 pixels of horizontal blanking 16:54 <@sailus> Do you think we should provide blanking information outside the pixel array? 16:54 <@sailus> My assumption was it only would be needed in the pixel array itself. 16:55 <@sailus> It might make sense, but I can't think of a use case right now. 16:55 < pinchartl> if you want to implement bandwidth management you need that information 16:55 <@sailus> However, the pixel array subdevs will only have one pad which will be 0. 16:55 <@sailus> So it would be easy to add that later on. 16:56 < pinchartl> but I don't think bandwidth management can be implemented 16:57 <@sailus> pinchartl: Perhaps not in a general case. 
You can do something, for example fail on streamon when the data rate from the sensor is simply too high for the ISP. 16:57 <@sailus> That makes the assumption you know the pixel rate limitations of your ISP. 16:57 < pinchartl> you won't catch all cases 16:57 < pinchartl> you don't know the pixel rate limitations of the ISP 16:57 < pinchartl> you might know some of them 16:57 < pinchartl> but not all of them 16:58 <@sailus> I don't think it's necessary to try to implement pixel rate checking at least for now. 16:59 <@sailus> But if we want to try later on I don't think there's anything in the interface proposal, with the discussed additions, that would prevent doing so. 16:59 < pinchartl> you can probably add a coarse check, and having the pixel rate at the sensor output would then be enough 16:59 < pinchartl> if you want real bandwidth checking you need the pixel rate, the frame size and the blanking information on all pads 17:00 <@sailus> That information likely should be distributed inside the kernel only without involving the user, I presume? 17:00 < pinchartl> that would be better yes 17:01 < pinchartl> regarding the pixel rate, we need it in the kernel to configure the ISP, and we also need it in userspace to compute frame rates 17:02 < pinchartl> so it could be a control for now 17:02 <@sailus> Agreed. 17:03 <@sailus> But we'll need pad specific controls. 17:03 <@sailus> That needs to be discussed further with Hans. 17:03 < pinchartl> not yet, as we only need to implement that control in the sensor subdev 17:04 <@sailus> Pixel array and in SMIA++'s case, scaler or binner. 17:04 <@sailus> But yes, it's enough to do that per-subdev for now. 17:04 <@sailus> I guess that's a conclusion then. 17:05 < pinchartl> does the SMIA++ scaler change the pixel rate ? 17:05 < dacohen> it's fine for me 17:05 <@sailus> pinchartl: It does, and so does the binner. 17:05 < pinchartl> ok 17:05 <@sailus> They are separate subdevs so there's no problem. 
17:05 <@sailus> I have one more question, actually related to libv4l2pipe. 17:05 <@sailus> Who is interested in working on it? 17:06 < pinchartl> who has time to work on it ? :-) 17:06 <@sailus> That's actually a better question. 17:06 < pinchartl> sailus: will you combine libv4l2pipeauto into libv4l2pipe ? 17:06 <@sailus> I would be interested but I have no time. 17:07 <@sailus> I wonder if these should be separate libraries or the same. 17:07 < dacohen> how about same? 17:07 < dacohen> easier for its maintenance 17:08 <@sailus> If it's a single library, it will provide a single API. 17:08 < pinchartl> dacohen: I agree 17:08 <@sailus> Two libraries would provide two APIs. 17:08 < pinchartl> sailus: a single API made of multiple functions :-) 17:09 <@sailus> The problem I have with putting this under a single API is that I don't see _right now_ any overlapping functionality. 17:09 <@sailus> I don't like the idea of combining things that have nothing to do with each other. 17:09 < pinchartl> sailus: they all deal with pipeline configuration 17:09 <@sailus> That said, I might change my mind once we start working on it. 17:09 < pinchartl> :-) 17:10 <@sailus> They do, but the way they do it is different. 17:10 <@sailus> One will take in a static configuration whereas the other will calculate everything automatically. 17:10 < pinchartl> sure, but I still think they're related 17:10 < dacohen> they might share code 17:11 <@sailus> I think we could start with separate files and see how that goes. 17:11 < pinchartl> it's a detail for now 17:11 < dacohen> yes 17:11 < dacohen> we can start that way IMO 17:11 <@sailus> Btw. I think we must make the format / link parsing code in libmediactl and libv4l2subdev use flex / bison. 17:12 <@sailus> Any volunteers? :-) 17:12 <@sailus> I mean, for libv4l2pipe? 17:13 <@sailus> Changing over to flex / bison is a different matter then. 
17:13 < pinchartl> I'm sorry but I can't 17:13 <@sailus> There still is a dependency: I wouldn't want to extend the existing parser too much. 17:13 < pinchartl> I mean I don't have the time to work on it 17:13 <@sailus> Anyone else? 17:13 <@sailus> pinchartl: Same for me. :-I 17:14 < pinchartl> has everybody else left ? :-) 17:14 < dacohen> I can't either 17:14 < dacohen> tuukka? 17:14 < tuukkat> umm, i can't promise anything as i'm a bit busy .. :( 17:14 < dacohen> tuukkat: your answer was the most positive so far 17:14 < dacohen> :D 17:15 < tuukkat> thank you very much :-> 17:15 <@sailus> :-D 17:15 < tuukkat> but really, don't count on me =) 17:15 <@sailus> Ok. 17:15 <@sailus> I think then it's so that the one who eventually starts working on this will inform others so there won't be duplicated efforts. 17:16 < pinchartl> ok 17:16 < dacohen> ok 17:16 < tuukkat> ok 17:16 <@sailus> It might be myself but I can't promise anything either. 17:16 <@sailus> Sounds good then. 17:17 <@sailus> I think we've covered all the topics for today, either implicitly or explicitly. :-) 17:17 < dacohen> agreed 17:18 < tuukkat> yes, because there wouldn't be time to discuss more topics :-> 17:18 <@sailus> Thanks for participating, everyone! :-) 17:18 < tuukkat> thank you for organizing this :) 17:18 < dacohen> thank you for proposing the interesting topics 17:19 < dacohen> I must go now 17:19 <@sailus> I'll write some notes on the meeting and send them to the list. 17:19 < dacohen> thanks :) 17:19 < dacohen> tchau 17:19 < lyakh> bye all 17:19 < tuukkat> bye :) 17:19 <@sailus> Heippa! 
17:19 -!- dacohen [~david@a91-153-6-202.elisa-laajakaista.fi] has quit [Quit: dacohen has no reason] 17:20 -!- hverkuil [~hverkuil@nat/cisco/x-icnyoqpfabfubjhb] has left #v4l-meeting [] 17:20 -!- lyakh [~lyakh@dslb-088-076-016-103.pools.arcor-ip.net] has left #v4l-meeting ["thanks, bye"] 17:21 -!- snawrocki [~snawrocki@217-67-201-162.itsa.net.pl] has left #v4l-meeting ["Leaving"] 17:21 -!- tuukkat [c0c69724@gateway/web/freenode/ip.192.198.151.36] has quit []