Let's Push Things Forward

Maximizing social utility for fun and (modest) profit

LCA 2010 - Day 3

Day 1 | Day 2

Andrew Tridgell -- FOSS and Patents

This informative talk went into a bit of detail on the current situation with (software) patents and some best practices for open source projects that want to avoid litigation. First, if you do get contacted by a company that claims your project violates one of their patents, contact a public-interest legal organization such as the Software Freedom Law Center or the Electronic Frontier Foundation. In general, it's useful to know how to read patents: read the abstract (but don't stop there, since abstracts can be misleading), then skip ahead to the individual claims (which are the core of the patent), referring back to the diagrams, definitions, etc. as necessary.

An important point is that the "prior art" defense so many people like to cite is actually very hard to win: it requires that the prior art cover every single claim in the patent. A "non-infringement" defense, by contrast, only requires showing that your work doesn't (exactly) match any independent claim in the patent (a dependent claim can't be infringed unless its parent independent claim is). There tend to be far more dependent claims than independent claims, so this defense tends to be easier to win.

An interesting point Andrew brought up was that he doesn't think it's to our advantage to avoid reading patents. His reasoning is that even single damages (in the case that you unknowingly infringe a patent) are enough to end an open source project, so being forced to pay triple damages (in the case that you knowingly infringe) doesn't change the end result. I unfortunately didn't have time to ask him this: wouldn't it matter to the author themselves whether they have to pay $LARGE_SUM_OF_MONEY or 3*$LARGE_SUM_OF_MONEY out of their own pocket? In either case, it's an ugly situation when you can be punished for being more knowledgeable.

Finally, Andrew suggested a strategy for open source projects to make themselves worse targets for patent suits (which is really all that matters): find and widely publicize work-arounds to patents. Closed-source companies are unlikely to do so, since a work-around can be a business advantage over a competitor who might otherwise be forced to license the patent. This way, we'll appear to be (and effectively will be) a lot more work for patent owners to troll, and we can build up a reputation for not being worth the hassle. A great thing to aspire to.


Paul Mackerras -- Perf Events (in the kernel)

A replacement for perf counters, perf events provide kernel probes and a simple API for fine-grained performance benchmarking on a Linux system. Whenever possible, perf events use hardware counters to get the most precise data possible; there's a reasonable software-only fallback based on the high-resolution system clock (which most systems support at this point).

The API is essentially just one system call, which returns a file descriptor with well-defined read semantics. Everything else is plain read(), close(), etc. on that descriptor.

Events can be per task, per CPU, or (recently added) per task per CPU. Per-task tracking can be recursive: forked processes get a copy of the parent's counter struct, and their final values are added back into the parent's when they exit or explicitly synchronize.

Perf events can trace cache (including TLB) activity, page faults, context switches, CPU migrations, and data alignment and instruction emulation traps.
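To make the shape of that API concrete, here's a minimal sketch of my own (not code from the talk): it counts retired instructions for the calling task via the perf_event_open() system call, and the inherit flag gives the recursive per-task behavior described above. Note that everything after the single system call is ordinary file I/O:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    /* glibc provides no wrapper for this system call, so invoke it directly. */
    static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                               int cpu, int group_fd, unsigned long flags)
    {
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
        struct perf_event_attr attr;
        long long count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.disabled = 1;         /* start disabled; enable around the workload */
        attr.exclude_kernel = 1;   /* only count user-space instructions */
        attr.inherit = 1;          /* children's counts fold back into ours */

        /* pid = 0, cpu = -1: count this task on whichever CPU it runs on. */
        fd = perf_event_open(&attr, 0, -1, -1, 0);
        if (fd < 0) {
            perror("perf_event_open");
            return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        for (volatile int i = 0; i < 1000000; i++)
            ;                      /* the workload being measured */

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        /* Everything past the one system call is plain file I/O. */
        if (read(fd, &count, sizeof(count)) == (ssize_t) sizeof(count))
            printf("instructions retired: %lld\n", count);

        close(fd);
        return 0;
    }

The perf tool that ships in the kernel tree is built on exactly this interface.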

Useful benchmarking always starts with good tools, and it sounds like we're finally getting a great tool with perf events!

Rusty Russell -- FOSS Fun with a Wiimote

Rusty detailed his geeky plans for making sure his daughter grows up to be a geek. These mainly involve writing software to (ideally) translate her hand movements, tracked by a Wiimote, into actions on their TV. The idea was to help her grasp cause and effect at an early age, I think.

Anyway, I can't really do it justice in writing. You'll have to wait until the video is posted to see for yourself.


Carl Worth -- Making the GPU do its job

After giving a brief history of computer graphics and GPU development, Carl explained how hardware graphics support has oscillated between discrete and integrated (into the CPU) hardware. (I wasn't aware that we'd already made the integrated → discrete → integrated cycle once before our current, in some contexts, migration back to integrated graphics.)

The main problems with graphics on Linux right now are that we often have bad performance (which also means bad power consumption) and that we need two drivers per video card (family, at least) -- one for 2D and one for 3D.

In order to figure out where the bottlenecks are, Carl created cairo-trace to measure actual performance. This nifty program records the timing of the cairo API calls made by any program run under it. These traces can then be played back through cairo at any time (at maximum speed), to continually improve performance for real-world uses of cairo. I'm not sure if it's already in place, but these could easily be added to the (from what I hear, very good) automated cairo test suite, to avoid releasing regressions. If only more open source projects took testing and performance this seriously!
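To give a sense of what such a trace captures, here's a trivial cairo program of my own (not Carl's); each drawing call below is the kind of operation cairo-trace records, along with its timing, so the same workload can later be replayed against any backend:

    #include <cairo.h>

    int main(void)
    {
        /* A 256x256 ARGB image surface, rendered entirely in software. */
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
        cairo_t *cr = cairo_create(surface);

        /* A few typical drawing calls -- the kind of operations a tracing
         * wrapper records so they can be replayed against other backends. */
        cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
        cairo_paint(cr);

        cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
        cairo_rectangle(cr, 32, 32, 192, 192);
        cairo_fill(cr);

        cairo_surface_write_to_png(surface, "out.png");

        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }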

As it turns out, performance in most cases was actually better in the pure-software image backend than in cairo-xlib (the standard X11 backend for cairo).

An experimental GPU-accelerated backend (cairo-drm), in which cairo bypasses X and submits work to the GPU directly through the kernel's GEM interface, improves performance dramatically (a 10x speedup for Firefox). The caveat is that it requires yet another driver per video card (bringing us to 3 total, for those of you keeping track at home).

Another approach, cairo-gl, has cairo bypass X and render through OpenGL (Mesa). It requires only 2 drivers total and should eventually have performance closer to that of cairo-drm, but for the moment its performance is much worse than cairo-xlib's.

Robert O'Callahan -- Open video on the web

Video on the web right now has two major players: Flash and Silverlight. (Maybe I just took ambiguous notes, but it's obvious that Silverlight is nowhere near Flash in terms of market share.)

Beyond the obvious problem of software freedom, Flash is notoriously unstable (apparently a huge percentage of application crashes on OS X are directly Flash's fault). Mozilla has decided that it's time to do something to push open media formats, to ensure this important (and growing) chunk of the Web retains the openness that has made the rest of it so popular and useful.

Some questions about how to handle patents around open formats:
  • Should we just ignore the patents? That's only feasible while open formats are irrelevant, which isn't a great strategy.
  • Should we wait for the patents to expire? They'll just be replaced with the next closed format.
  • Should we just pay the licensing fees? Using a codec means you need to license it for your market size and per viewing (according to MPEG LA's fee structure), which is impossible for most websites serving content and for Mozilla to pay for every copy of Firefox it distributes.

Another issue: video is not just about YouTube (passive watching). Flash adds interactivity (related videos, captioning, relevant ads, etc.), so we'll need an open counterpart to this as well.

Mozilla's solution for open video is Ogg Theora, which has seen nice advances lately. GStreamer developer David Schleef got decent Theora decoding performance on the OMAP3's DSP (found in the N900 and other embedded devices). The Ogg Index project adds an index to the Ogg container format to fix stream seeking over the web (which is frequently unusable without an index).

Mozilla has been shipping Theora support in Firefox 3.5+, since it doesn't want to, can't, and shouldn't pay for licensing codecs. If software patents were suddenly invalidated, it'd be fine to just standardize on H.264, but that probably won't happen.

A big part of the chicken-and-egg problem for open formats is getting content providers and distribution networks on board. So Mozilla has gotten Dailymotion, the Internet Archive, and other websites to support Ogg Vorbis and Theora.

The other half is getting browsers to support the formats. Firefox 3.5+ can handle Theora in a <video> tag; Chrome ships Theora (and H.264) support; Opera will ship Theora only. In Firefox 3.6, we'll get fullscreen Theora playback. Also in the pipeline are GPU-accelerated playback and mobile/Maemo optimizations.

Partial successes in this push to open video include Vimeo and YouTube planning to support HTML 5's <video> tag on their sites (though they'll only be using the non-free H.264 codec).

So open media formats are slowly advancing on the web. We're basically at the point where most web standards are friendly to free software; if Gecko and/or WebKit can't implement it, people don't propose it as a standard.



LCA 2010 - Day 2

Day 1

Glyn Moody -- Keynote on OSS-like transparency outside of software

This talk covered a handful of ways in which open source software has inspired collaborative efforts in non-software fields, even reminding science of its origins in "open notebook" experimentation, where all the raw data is provided (as it should be). Jim Kent, a hacker working largely on his own, just barely beat Craig Venter's company, Celera, to sequencing and publicly publishing the human genome. Without this effort, Celera might have patented chunks of the genome (which obviously would have been a Bad Thing™). Kent managed it thanks to a 100-machine Linux cluster, which was obviously a big win for both open source software and science.

Glyn then went on to point out that one of the biggest problems with the global financial system (and a large part of the crash) was due to its opacity. Even the industry experts don't really know much of the details, because all the data are locked up. Some governments are working to force this data to be more open -- Recovery.gov (in the US) is a good start, and the UK, Australia, and New Zealand are working on similar efforts.


Emmanuele Bassi -- A year of Clutter

Emmanuele delivered a few lightning talks in a row covering what got Clutter to 1.0, what has happened since its release, the plans for Clutter 1.2 (due in March), and the general plan for Clutter 2.x (development will begin before Clutter 1.x is end-of-lifed, and it will pick up any features that necessarily require API/ABI breaks; changes that don't require breaks will also find their way into Clutter 1.x).

I'm not terribly familiar with Clutter, but it sounds like they've been making a nice amount of progress (which is especially important if it may become a dependency of GTK 3.x).

Dave Airlie -- Graphics drivers in the kernel; now what?

At the time X was first designed, graphics drivers in the kernel were infeasible, since they'd have needed to be rewritten for all of the (many) different Unix kernels of the day. The environment is very different now, so it's worth the effort. Kernel mode-setting (KMS) is the solution Linux has adopted.

Tungsten Graphics tried doing something like KMS, but, being developed from the top and bottom inward, it ended up with an ugly API that didn't work very well. Keith Packard wrote the Graphics Execution Manager (GEM), and KMS ended up being a combination of Tungsten's code with something more like GEM's API.

A couple of immediate benefits of this new way of handling graphics are that we can now begin to handle video card power management and use the kernel debugger from within X.
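As a rough illustration of what KMS exposes to user space (my own sketch, not code from the talk), here's a minimal libdrm program that asks the kernel which display connectors exist and whether anything is plugged in -- information that used to live only inside the user-space X driver:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        /* The device node varies per system; card0 is just the common case. */
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) {
            fprintf(stderr, "drmModeGetResources failed (no KMS driver?)\n");
            close(fd);
            return 1;
        }

        /* With KMS, connector and mode information comes straight from the
         * kernel rather than from a user-space driver poking at hardware. */
        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            printf("connector %u: %s, %d modes\n",
                   conn->connector_id,
                   conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                          : "disconnected",
                   conn->count_modes);
            drmModeFreeConnector(conn);
        }

        drmModeFreeResources(res);
        close(fd);
        return 0;
    }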

Dave briefly discussed Wayland, the alternative display server. It seems to be desktop-environment-dependent and isn't very complete (e.g., keyboard input doesn't work yet), so it won't be replacing X.org any time soon, if ever.

The Intel KMS driver is the most complete (except for GMA500), and just needs a little more work. AMD/ATi support is pretty good, though there's a stunning number of different video cards to support. Nouveau (the open nVidia driver) just moved entirely to KMS, and it's making decent progress.


Jan Schmidt -- Toward GStreamer 1.0

Jan gave us an overview of the progress GStreamer has made since its initial version (a lot), its current state (pretty good), and the future. There have been discussions about finally promoting the version number to 1.0; the downsides include a risk of a development lull (as happened between 0.8 and 0.10), while the benefits include the ability to cut deprecated code and to make it clearer to new developers that the library is safe to use.

We also got a handful of nice demos (fully-functional DVD support, including menus and special "asides" subtitle support). But the crowd favorite seemed to be a demo app that tied playback rate to his laptop's orientation (from -$fast to +$fast). As Jan put it, "that's what accelerometers are for."
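I haven't seen Jan's demo code, but the heart of a trick like that is just a flushing seek with a new playback rate. Here's a rough sketch of my own against the GStreamer 1.0 API (the demo itself was written for the 0.10 series, whose calls differ slightly); an accelerometer callback would simply call set_playback_rate() with a rate derived from the tilt:

    #include <gst/gst.h>

    /* Change the playback rate of a running pipeline with a flushing seek,
     * keeping the current position. */
    static gboolean set_playback_rate(GstElement *pipeline, gdouble rate)
    {
        gint64 pos;

        if (rate == 0.0)
            return FALSE;   /* use the PAUSED state for "stopped" instead */

        if (!gst_element_query_position(pipeline, GST_FORMAT_TIME, &pos))
            return FALSE;

        if (rate > 0.0)
            return gst_element_seek(pipeline, rate, GST_FORMAT_TIME,
                                    GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
                                    GST_SEEK_TYPE_SET, pos,
                                    GST_SEEK_TYPE_NONE, 0);

        /* Negative rates play backwards, from the start up to where we are. */
        return gst_element_seek(pipeline, rate, GST_FORMAT_TIME,
                                GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
                                GST_SEEK_TYPE_SET, 0,
                                GST_SEEK_TYPE_SET, pos);
    }

    int main(int argc, char **argv)
    {
        gst_init(&argc, &argv);

        if (argc < 2) {
            g_printerr("usage: %s <uri>\n", argv[0]);
            return 1;
        }

        GstElement *play = gst_element_factory_make("playbin", "play");
        g_object_set(play, "uri", argv[1], NULL);
        gst_element_set_state(play, GST_STATE_PLAYING);

        /* Let it run for a bit, then double the playback speed. */
        g_usleep(5 * G_USEC_PER_SEC);
        set_playback_rate(play, 2.0);

        g_usleep(5 * G_USEC_PER_SEC);
        gst_element_set_state(play, GST_STATE_NULL);
        gst_object_unref(play);
        return 0;
    }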


Adam Jackson -- The Rebirth of Xinerama

In a follow-up to his talk at last year's LCA about Shatter (an effort to better support arbitrary multi-screen setups), this talk brought us up to date on the state of Xinerama for supporting similar configurations. Shatter eventually proved unworkable, but there has been headway in other areas. At this point, X and the graphics drivers can support displays with reasonably large combined dimensions, but multiplexing the work across multiple GPUs needs to play catch-up (since more and more systems are shipping with multi-GPU configurations). Performance currently scales linearly with the number of GPUs -- unfortunately, as 1/n, not n.

Adam discussed a number of potential solutions, as well as other issues looming on the horizon (which is worthwhile, since until recently most users weren't too concerned about supporting multiple GPUs in a single machine).


LCA 2010 - Day 1

This week, I'm in beautiful Wellington, New Zealand for LCA 2010. It's a very well-organized conference (as always), and the weather has been even better than predicted!

Benjamin Mako Hill -- Keynote on anti-features

Mako described his concept of anti-features (product "features" that intentionally reduce the product's functionality) and provided a number of amusing examples. I remember coming up with essentially the same idea several years ago (shortly before I switched full-time to Linux, coincidentally enough). It somewhat boggles the mind that so many engineers spend their time crippling their own work.

As Mako pointed out, most cell phones are surprisingly restrictive, but most of us just accept it. I gleefully raised my hand when he asked how many people had root access on their phone. Score another point for the N900 (which even avoids the G1 developer version's "freedom tax")!

Jonathan Corbet -- Kernel Report

Jonathan explained in great, yet digestible, detail what's happened in the Linux kernel in the last year. This shouldn't be too shocking to anyone who's familiar with his great reporting on Linux Weekly News. I'm fairly bad at keeping up with kernel news, so this was a great summary.


Chris Double -- Implementing HTML 5 Video in Firefox

A great update on the history and latest-and-greatest in video support in Firefox. They first built their own solution on some of the higher-level Xiph Ogg Vorbis and Theora libraries, but ran into a number of performance problems; then tried per-platform native implementations (including GStreamer on Linux), but the major platforms have no default codecs in common; next the tower caught on fire and then fell into the swamp; and most recently they've switched to an internal implementation that uses only the lowest-level Xiph libraries.

It was interesting to hear about some of the implementation limits. Apparently sound on Linux is still hard.

Chris's demos were fantastic - playback was very smooth, even when doing impressive green-screen, object insertion, and other fancy transformations. I can't wait to see websites take advantage of the video tag. Anything that minimizes our reliance upon Flash is exciting to me!

Denis Kenzior -- oFono: Open Source Telephony

oFono is a framework for handling mobile phone call/text/etc. features. It works on the idea of generalized modems for the various radio types (which, given its goal of exposing only the interesting details to applications, I assume are hidden from the API) and has a daemon to do the work of sending and receiving voice calls and SMSes. The framework is meant to support only mobile phone functionality and intentionally ignores voice-over-IP (which Telepathy already handles very well on the desktop and on embedded systems by way of its telepathy-sofiasip connection manager).

There's an in-progress Telepathy connection manager named yafono that wraps oFono, written by my Collabora colleague Andres Salomon.

Denis says the current functionality is enough that you could implement a 2G iPhone's cellular stack with it.
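I haven't written against oFono myself, but to give a flavor of its D-Bus API, here's a small GDBus sketch of my own that lists the available modems. It assumes the org.ofono.Manager.GetModems() method from oFono's current documentation, which may not exactly match what the API looked like at the time of this talk:

    #include <gio/gio.h>

    /* List the modems oFono knows about over the system D-Bus. */
    int main(void)
    {
        GError *error = NULL;
        GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
        if (bus == NULL) {
            g_printerr("D-Bus: %s\n", error->message);
            g_error_free(error);
            return 1;
        }

        /* org.ofono.Manager.GetModems() returns an array of
         * (object path, property dict) pairs, one per modem. */
        GVariant *reply = g_dbus_connection_call_sync(
            bus, "org.ofono", "/", "org.ofono.Manager", "GetModems",
            NULL, G_VARIANT_TYPE("(a(oa{sv}))"),
            G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
        if (reply == NULL) {
            g_printerr("GetModems: %s\n", error->message);
            g_error_free(error);
            g_object_unref(bus);
            return 1;
        }

        GVariantIter *modems;
        const gchar *path;
        GVariantIter *props;

        g_variant_get(reply, "(a(oa{sv}))", &modems);
        while (g_variant_iter_loop(modems, "(oa{sv})", &path, &props))
            g_print("modem: %s\n", path);

        g_variant_iter_free(modems);
        g_variant_unref(reply);
        g_object_unref(bus);
        return 0;
    }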

Matthew Garrett -- Social Success for (and in) the Linux Community

Matthew discussed some of the social shortcomings of the Linux community, including whom we consider members (mostly just developers, unfortunately). We were also graced with a number of relevant quotes and a touching story of his own transformation from occasional mailing-list flame-thrower to sympathetic human being.
