treitter (treitter) wrote,

LCA 2010 - Day 3

Day 1 | Day 2

Andrew Tridgell - FOSS and Patents

This informative talk went into a bit of detail on the current situation with (software) patents and some best practices for open source projects that want to avoid litigation. First, if you do get contacted by a company claiming your project violates one of their patents, contact a public-interest legal organization such as the Software Freedom Law Center or the Electronic Frontier Foundation. In general, it's useful to know how to read patents: read the abstract (but don't stop there, since it can often be misleading), then skip ahead to the individual claims (which are the core of the patent), and refer back to the diagrams, definitions, etc. as necessary.

An important point is that the "prior art" defense that so many people like to cite is actually very hard to win: it requires that the prior art cover every single claim in the patent. A "non-infringement" defense, by contrast, only requires showing that your work doesn't (exactly) match each independent claim; since dependent claims include all the limitations of the claim they depend on, not infringing an independent claim means you can't infringe its dependents either. There tend to be far more dependent claims than independent claims, so this defense tends to be easier to win.

An interesting point Andrew brought up was that he doesn't think it's to our advantage to avoid reading patents. His point is that even single damages (in the case where you unknowingly infringe a patent) are enough to end an open source project, so if you're forced to pay triple damages (because you were aware of the patent), that doesn't change the end result. I unfortunately didn't have time to ask him this: wouldn't it matter to the author themselves whether they have to pay $LARGE_SUM_OF_MONEY or 3*$LARGE_SUM_OF_MONEY out of their own pocket? In either case, it's an ugly situation when you can be punished for being more knowledgeable.

Finally, Andrew suggested this strategy for open source projects to make themselves worse targets for patent suits (which is really all that matters): find and widely publicize work-arounds to patents. Closed-source companies are unlikely to do so, since a work-around can be a business advantage over a competitor who might otherwise be forced to license the patent. That way, we'll appear to be (and effectively be) a lot more work for patent owners to troll, and we can build up a reputation for not being worth the hassle. A great thing to aspire to.

[photo: 20100123_007.jpg]

Paul Mackerras -- Perf Events (in the kernel)

A replacement for perf counters, perf events provide kernel probes and a simple API for fine-grained performance measurement on a Linux system. Whenever possible, perf events use hardware counters to get the most precise data possible, with a reasonable software-only fallback based on the high-resolution system clock (which most systems support at this point).

The API is essentially just one system call, which returns a file descriptor with well-defined read semantics. Everything else is plain read(), close(), etc.
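
To make that concrete, here's a minimal sketch of the call sequence, assuming the perf_event_open() syscall and the constants from <linux/perf_event.h>; the choice of event, the busy loop, and the minimal error handling are mine, not from the talk:

    #include <linux/perf_event.h>
    #include <asm/unistd.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* glibc has no wrapper for the new syscall, so invoke it directly. */
    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags)
    {
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
        struct perf_event_attr attr;
        uint64_t count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;            /* use the CPU's PMU if available */
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;  /* retired instructions */
        attr.disabled = 1;                         /* start disabled; enable explicitly */
        attr.exclude_kernel = 1;                   /* count user space only */

        /* The one system call: returns a file descriptor for a counter
         * attached to the calling task (pid 0) on any CPU (-1). */
        fd = perf_event_open(&attr, 0, -1, -1, 0);
        if (fd < 0) {
            perror("perf_event_open");
            return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        for (volatile int i = 0; i < 1000000; i++)  /* the code being measured */
            ;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        read(fd, &count, sizeof(count));            /* the rest is just read()/close() */
        printf("instructions: %llu\n", (unsigned long long)count);

        close(fd);
        return 0;
    }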

Events can be per task, per CPU, and (recently added) per task per CPU. Per-task tracking can be recursive: forked processes get a copy of the parent's counter struct, and their final values are added back to the parent's when they exit or explicitly synchronize.

Perf events can trace cache (including TLB) activity, page faults, context switches, CPU migrations, and data alignment and instruction emulation traps.
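
Tying the last two paragraphs together, a hypothetical variation on the attribute setup in the sketch above (again my own example, not Paul's) would select a software-backed event and ask for it to follow forked children:

    /* Variation on the attribute setup above: a software event (counted by
     * the kernel, no PMU needed) that also counts activity in forked children. */
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_SOFTWARE;            /* software-only counting path */
    attr.config = PERF_COUNT_SW_PAGE_FAULTS;   /* or ..._SW_CONTEXT_SWITCHES,
                                                  ..._SW_CPU_MIGRATIONS, etc. */
    attr.inherit = 1;                          /* children copy the counter; their
                                                  totals fold back into this fd */
    attr.disabled = 1;

Reading the parent's file descriptor after the children have exited should then give the aggregated total described above.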

Useful benchmarking always starts with good tools, and it sounds like we're finally getting a great tool with perf events!

Rusty Russell -- FOSS Fun with a Wiimote

Rusty detailed his geeky plans for making sure his daughter grows up to be a geek. These mainly involve writing some software to (ideally) translate her hand movements into actions on their TV. The idea was to help her grasp cause and effect at an early age, I think.

Anyway, I can't really do it justice in writing. You'll have to wait until the video is posted to see for yourself.

[photo: 20100118_107.jpg]

Carl Worth -- Making the GPU do its job

After giving a brief history of computer graphics and GPU development, Carl explained how hardware graphics support has oscillated between discrete and integrated (into the CPU) hardware. I wasn't aware that we'd already been through the integrated → discrete → integrated cycle once before our current (in some contexts) migration back to integrated graphics.

The main problem with graphics in Linux right now is that we often have bad performance (which also means bad power consumption) and we need two drivers per video card (family, at least) -- one for 2D and one for 3D.

In order to figure out where our bottlenecks are, Carl created cairo-trace to measure actual application behavior. This nifty program records the timing of the cairo API calls made by any program run under it. These traces can be played back through cairo at any time (at maximum speed) to continually improve performance for real-world uses of Cairo. I'm not sure whether it's already in place, but these could easily be added to the (from what I hear, very good) automated cairo test suite to avoid releasing regressions. If only more open source projects took testing and performance this seriously!

As it turns out, current performance in most cases was actually better in pure software than in cairo-xlib (the stable X backend for cairo).

An experimental GPU-accelerated backend (cairo-drm), in which Cairo bypasses X and renders directly through the kernel's GEM interface, improves performance dramatically (a 10x speedup for Firefox). The caveat is that it requires yet another driver per video card (bringing us to 3 total, for those of you keeping track at home).

Another approach, cairo-gl, has Cairo bypass X and render through OpenGL (via Mesa); it requires only 2 drivers total and should eventually have performance closer to that of cairo-drm. For the moment, though, its performance is much worse than cairo-xlib's.

Robert O'Callahan -- Open video on the web

Video on the web right now has two major players: Flash and Silverlight. (Maybe I just took ambiguous notes, but it's obvious that Silverlight is nowhere near Flash in terms of market share.)

Beyond the obvious problem of software freedom, Flash is notoriously unstable (apparently a huge percentage of crashes of OS X are directly Flash's fault). Mozilla has decided that it's time to do something to push open media formats, to ensure this important (and growing) chunk of the Web retains the openness that has made the rest of it so popular and useful.

Some questions about the open formats:
  • Should we just ignore the patents? That's only feasible while open formats are irrelevant, which isn't a great strategy.
  • Should we wait for the patents to expire? They'll just be replaced with the next closed format.
  • Should we just pay the licensing fees? Under MPEG LA's fee structure, using a codec means licensing it for your market size and per viewing, which is impossible for most websites serving content and for Mozilla to pay for distributing Firefox.

Another issue: video is not just about YouTube (passive watching). Flash adds interactivity (related videos, captioning, relevant ads, etc.), so we'll need an open counterpart to this as well.

Mozilla's solution for open video is Ogg Theora, which has had some nice advances lately. GStreamer developer David Schleef got decent Theora performance on the OMAP3's DSP (found in the N900 and other embedded devices). The Ogg Index project adds an index to the Ogg container format to fix seeking in streams over the web (which is frequently unusable without an index).

Mozilla has been shipping Theora support in Firefox 3.5+, since it doesn't want to, can't, and shouldn't pay for licensing codecs. If software patents were suddenly invalid, it'd be fine to just standardize on H.264, but that probably won't happen.

A big part of the chicken-and-egg push for open formats requires working with content providers and distribution networks. So Mozilla has gotten Dailymotion, the Internet Archive, and other websites to support Ogg Vorbis and Theora.

The other half is getting the browsers to support the formats. Firefox 3.5+ can handle Theora in a <video> tag; Chrome ships Theora (and H.264) support; Opera will ship Theora only. In Firefox 3.6, we'll get fullscreen Theora. Also in the pipeline are GPU-accelerated playback and mobile/Maemo optimizations.

Partial successes in this push to open video include Vimeo and YouTube planning to support HTML5's <video> tag on their sites (though they'll only be using the non-free H.264 codec).

So open media formats are slowly advancing on the web. We're basically at the point where most web standards are friendly to free software; if Gecko and/or WebKit can't implement it, people don't propose it as a standard.

[photo: 20100121_003.jpg]
Tags: apolitical, lca, lca2010, newzealand, tech