Recently I was working on finalizing a small command line application for Canonical Hardware Certification. The application itself is of no great importance, but I wanted to give it a better look than our applications traditionally have. I made sure to pick the right colors from the Canonical color palette. I used appropriate paragraph layout and spacing (this is all text-mode console, remember). It all looked great.
This is, in part, possible thanks to the RGB color support offered by Gnome Terminal. I tested it extensively and it was really quite gorgeous. Then I ran it in screen. Ugh. The application was barely readable! Screen didn't understand the escape codes I was sending and just happily ignored them. This was not what I was after, but I didn't want to scrap everything either. I wanted this to work as well as the surrounding software permits.
Fast forward to today. I spent a while iterating on the basic idea and experimenting with a few ways to expose it. The application I wrote was based on python3-guacamole, a simple framework for writing command line applications that sits one level above argparse and main(). Guacamole has support for ANSI color codes and exposes that neatly to the application via a high-level interface. Instead of calling print(), the application can call aprint(), which understands a few more keyword arguments. Among those arguments are fg and bg (for foreground and background color, respectively).
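For illustration, application code then looks something like this (a sketch based only on what is described here; the surrounding guacamole command boilerplate is omitted):

# aprint() takes print()-like arguments plus fg= and bg=
aprint("certification: PASS", fg="green")
aprint("certification: FAIL", fg="white", bg="red")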
Guacamole also gained a color controller, an ingredient acting as a middleware for all color handling inside the application. I created it to explore the impact of usability-enhancing color use on users with various types of color blindness. It is an accessibility tool, but making it required adding quite a few color transformation operations to Guacamole. If you haven't seen it yet, it's quite interesting: the rainbow demo, invoked with the right options, is beautiful in all its 24-bit terminal emulator glory.
Today I expanded on that idea and introduced two new concepts: a color mixer and a palette.
The mixer came first. It is a new property of the color controller (which applications have access to). Its goal is to act as an optional downmix of 24-bit RGB values to the special 6 × 6 × 6 palette supported by most terminal emulators on the planet. Implementing it was actually quite straightforward, though the general problem of computing an indexed image for a given true-color image is an interesting research topic. Here most of the "difficult" parts don't really apply: we have a fixed palette that we cannot change, we cannot really do dithering, and the "pixels" are huge.
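The mixer code itself isn't shown here, but the core of such a downmix is well known: quantize each 8-bit channel to the six cube levels and compute an index in the 16–231 range. A minimal sketch of that idea (this is the standard xterm approximation, not guacamole's actual implementation):

import sys

def rgb_to_cube_index(r, g, b):
    """Map a 24-bit RGB color to the nearest entry of the 6x6x6
    color cube (indices 16..231) of 256-color terminals."""
    def level(c):
        # The cube channel values are 0, 95, 135, 175, 215 and 255;
        # pick the level whose value is closest to c.
        return 0 if c < 48 else 1 if c < 115 else (c - 35) // 40
    return 16 + 36 * level(r) + 6 * level(g) + level(b)

# Canonical aubergine is approximately #772953:
index = rgb_to_cube_index(0x77, 0x29, 0x53)
sys.stdout.write("\x1b[38;5;{}mhello\x1b[0m\n".format(index))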
The mixer turned out to work great. I patched my application to enable the indexed 8-bit mixer mode when a given TERM or SSH_CONNECTION value is used, and it worked like a charm.
Looking at the code, though, I wanted to avoid the inefficiency of doing the conversion for each "image". At the same time I wanted to allow applications to offer optimized, hand-picked indexed equivalents of some RGB colors. This all materialized as a generalized, extensible named-color palette. Now the application has calls like aprint("something", fg="canonical.aubergine") instead of aprint("something", fg=(constant-for-canonical-aubergine)). The named color is resolved in exactly the same way as other constants like "black" or "red" would be. What is also cool is that the color mixer can now expose a preference (PREFER_RGB, PREFER_INDEXED_256 or PREFER_INDEXED_8) and the color lookups from the palette may fetch optimized values if the application provides any.
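A sketch of what such a palette could look like (the data shape and names are illustrative, not guacamole's real model; rgb_to_cube_index is the helper sketched earlier):

# Hypothetical palette entries: the RGB truth plus optional hand-picked
# indexed equivalents supplied by the application.
PALETTE = {
    "canonical.aubergine": {
        "rgb": (0x77, 0x29, 0x53),
        "indexed_256": 89,  # hand-picked cube entry
        "indexed_8": 5,     # magenta, the closest of the 8 basic colors
    },
}

def resolve(name, preference):
    entry = PALETTE[name]
    if preference == "PREFER_RGB":
        return entry["rgb"]
    if preference == "PREFER_INDEXED_256":
        # Fall back to the automatic downmix when no optimized value exists.
        return entry.get("indexed_256", rgb_to_cube_index(*entry["rgb"]))
    return entry["indexed_8"]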
All in all, this gives me a way to run my true-color terminal application via screen, PuTTY or OS X's terminal application. It behaves well in all of those environments now.
The only problem left is accurate terminal emulator detection. I want guacamole to handle this internally and let applications be blissfully unaware of the particular limitations of particular programs. This is an open problem, sadly. TERM is useless: everyone says "xterm" or something similar, people hack around it in their startup scripts since various programs hard-code detection of "xterm-256color" or the like, and many different programs use the same TERM value while offering widely different feature sets. TERMCAP is equally useless; it might be better, but nobody uses it anymore, with the notable exception of screen. All the stuff there is just old legacy crap from around the start of UNIX time. What I would love to have is something like what Apple's Terminal.app does. It sets two environment variables that contain the name and version of the terminal emulator. With that knowledge, libraries can map out features, known issues and all the other stuff with far greater ease, simplicity and speed than any new standardized definitions (à la TERMCAP) would allow. Now, if those also got sent across SSH, my life would be much easier.
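Concretely, the two variables Terminal.app sets are TERM_PROGRAM and TERM_PROGRAM_VERSION, so detection reduces to a simple lookup (the capability table below is invented for illustration):

import os

KNOWN_TERMINALS = {
    # hypothetical capability map keyed by TERM_PROGRAM
    "Apple_Terminal": {"rgb": False, "indexed_256": True},
    "iTerm.app": {"rgb": True, "indexed_256": True},
}

program = os.environ.get("TERM_PROGRAM")          # e.g. "Apple_Terminal"
version = os.environ.get("TERM_PROGRAM_VERSION")
features = KNOWN_TERMINALS.get(program)
# Forwarding these over SSH would take SendEnv (client side) and
# AcceptEnv (server side) entries in the ssh configuration.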
For now I will see what can be done to get this just right in the Ubuntu ecosystem. I can probably patch gnome-terminal, ssh (to forward those variables) and perhaps screen (to set them).
I'll send a follow-up post when this lands in python-guacamole. I plan on doing a release later today.
Sunday, August 9, 2015
Monday, July 20, 2015
Using Snappy Ubuntu Core to test-boot anything
Lab-As-A-Service Inception OS
This morning I realized that it is quite possible to use chain-loading to boot into a test OS from within a Snappy Ubuntu Core system. I decided to try to implement that, and you can give it a try now: LAAS inception 0.1 [snap]. Update: I had uploaded an older, broken version at first; please make sure you have this version installed:
a1bc56b7bc114daf2bfac3f8f5345b84 laas-inception_0.1_all.snap
The Inception OS is a small Ubuntu Snappy Core-based system with one extra snap for programmatic control over the boot process. Using the Inception OS, any x86 or amd64 system (in both UEFI and legacy BIOS mode) can be converted into a remotely controlled web API that lets anyone reflash the OS and reboot into the fresh image.
The system always reboots into the Inception OS so this can be used to run unreliable software as long as the machine can be power-cycled remotely (which seems to be a solved problem with off-the-shelf networked power strips).
Installing the Inception OS
- Get a laptop, desktop or server that you can run Snappy Ubuntu Core on, at least well enough to have working networking (including wifi, if that is what you wish to use) and working storage (so that you can see the primary disk). In general, the pre-built Snappy Ubuntu Core image for amd64 can be used on many machines without any modification.
- Copy the image to a separate boot device. This can be a USB hard drive or flash memory of some sort. In my case I just dd'ed the uncompressed image to an 8GB flash drive (the image was 4GB but that's okay).
- Plug the USB device into your test device.
- In the boot loader, set the device to always boot from the USB device you just plugged in. Save the setting and reboot to test this.
- Use snappy-remote --url=ssh://1.2.3.4/ install-remote laas-inception*.snap to install the Inception snap.
- SSH to the test system and ensure that laas-inception.laas-inception command exists. Run it with the install argument. This will modify the boot configuration to make sure that the inception features are available.
- At this stage, the device can be power-cycled, rebooted, etc. Thanks to Snappy Ubuntu Core it should be resilient to many types of damage that can be caused by rebooting at the wrong moment.
Installing Test OSes
To install any new OS image for testing, follow these steps.
- Get a pre-installed system image. This is perfect for Snappy (snappy-device-install core can be used to create one) and many traditional operating systems can be converted to pre-installed images that one can just copy to the hard drive directly.
- Use ssh to connect to the Inception OS. From there you can download and copy the OS image onto the primary storage (hard drive or SSD) of the machine. Currently this is not implemented but later versions of the Inception OS will simply offer this as a web API with simple tools for remote administration from any platform.
- Use the laas-inception.laas-inception boot command to reboot into the test OS. This will restart the machine and boot from the primary storage exactly once. As soon as the machine restarts or is power cycled you will regain control as inception OS will boot again.
How it works
The Inception OS is pretty simple. It uses GRUB chain-loading to boot anything that is installed on the primary storage. It uses a few tricks to set everything in motion, but the general idea is simple enough that it should work on any machine that can be booted with GRUB. The target OS can be non-Linux (inception can boot Windows, for example, though reboots will kick back into the Inception OS).
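The post doesn't include the actual configuration, but the BIOS-mode idea can be sketched with stock GRUB features (the real snap may implement it differently):

# hypothetical grub.cfg fragment on the inception boot device
menuentry "laas-inception: boot primary disk" {
    set root=(hd0)    # the machine's primary storage, not the USB stick
    chainloader +1    # hand over to whatever boot sector is installed there
}

The boot-exactly-once behavior matches what grub-reboot provides: it marks an entry as the default for the next boot only, so any later restart falls back to the Inception OS.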
Monday, July 6, 2015
Backporting python-launchpadlib3 to Ubuntu 14.04
tl;dr: You can now use the Python 3 version of launchpadlib from ppa:zyga/launchpadlib-backports. Read on if you want to find out how I did it.
Today a colleague of mine asked for a small hint on how to do something with launchpadlib. I asked for the code sample and immediately stopped, seeing that it was Python 2 code. As Python 2 is really not the way to start anything new in Ubuntu nowadays, I looked at what was stopping us from using Python 3.
It turns out my colleague was using the LTS release (14.04), and the Python 3 version of launchpadlib just wasn't available there. Bummer.
Having a quick look at the dependencies, I decided to help everyone out and create a PPA with the backported packages. Since this is a common process, I thought I would share my approach, both to let others know how and to give more knowledgeable Debian developers a chance to correct me if I'm wrong.
The whole process starts with getting the source of the package you want to build. I wanted the latest and greatest packages, so I grabbed the source package from Ubuntu 15.10 (wily). To do that I just went to packages.ubuntu.com and searched for the right package. Here I wanted the python3-launchpadlib package. On the right-hand side you can see the link to the .dsc file. You want that link, so copy it now.
The right way to download a Debian source package is to use dget. Using a temporary directory as a workspace, execute this command (if you are reading this later, the source package may no longer be available; you'd have to adjust the version numbers to reproduce the process):
dget http://archive.ubuntu.com/ubuntu/pool/main/p/python-launchpadlib/python-launchpadlib_1.10.3-2.dsc
With that package unpacked, you want to change into the sub-directory with the unpacked code. At this stage you need an sbuild chroot for Ubuntu 14.04. If you don't have one, it's time to make one now. You want to follow the excellent article on the Debian wiki for this. Many parts are just copy-paste, but the final command you need to run is this:
sudo sbuild-createchroot --make-sbuild-tarball=/var/lib/sbuild/trusty-amd64.tar.gz trusty `mktemp -d` http://archive.ubuntu.com/ubuntu
Note that you cannot _just_ run it as there are some new groups you have to add and have available. Go read the article for that.
So with the sbuild ready to build our trusty packages, let's see what we get. Note that, in general, the process involves just these four steps:
- sbuild -d trusty
- dch # edit the changelog
- dpkg-buildpackage -S -sa
- dput ppa:zyga/launchpadlib-backports ../*_source.changes
At this point the process continues recursively: you grab a .dsc file with dget and try to sbuild it right there. Luckily, you will find that nothing here has further missing dependencies and that each of the four packages builds cleanly.
At this point you want to create a new PPA. Just go to your launchpad profile page and look for the link to create it. The PPA will serve as a buffer for all the packages so that we can finally build the package we are after. Without the PPA we'd have to build a local apt repository, which is just mildly more difficult and not needed, since we want to share our packages anyway.
With the PPA in place you can now start preparing each package for upload. As a habit I bump the version number and change the target distribution from wily / unstable to trusty. I also add a changelog entry that explains this is a backport and mentions the name of the PPA (a sketch of such an entry follows the list below). The version number is a bit more tricky: you want your packages to differ from any packages in the Ubuntu archive so that eventual upgrades work okay. The way I do that is to append a suffix that always sorts below the next possible Ubuntu revision. Let's see how this works for each of the packages we have here.
- lazr.restfulclient has the Debian version 0.13.4-5 which I change to 0.13.4-5ubuntu0.1. This way both Ubuntu can upload 0.13.4-5ubuntu1 and Debian can upload 0.13.4-6 and users will get the correct update, nice.
- python-keyring has the Ubuntu version 4.0-1ubuntu1 which I change to 4.0-1ubuntu1.1 so that the subsequent 4.0-1ubuntu2 can be uploaded to Ubuntu without any conflicts.
- python-oauth has the Debian version 1.0.1-4 which I change to 1.0.1-4ubuntu0.1 to let Ubuntu update to -ubuntu1 eventually, if needed.
- python-wadllib has the Debian version 1.3.2-3 which I change to 1.3.2-3ubuntu0.1 in exactly the same way.
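For reference, the kind of changelog entry I add looks roughly like this (the stanza below is illustrative, not the exact text I uploaded):

lazr.restfulclient (0.13.4-5ubuntu0.1) trusty; urgency=medium

  * Backport to trusty with no source changes
    (ppa:zyga/launchpadlib-backports).

 -- Zygmunt Krynicki <zygmunt.krynicki@canonical.com>  Mon, 06 Jul 2015 12:00:00 +0200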
Now all the dependencies are ready and I can do the final test build of launchpadlib itself. Since I always test-build everything, I now need to expose access to my PPA so that my sbuild (which otherwise knows nothing about it) can fetch the missing packages and build everything. This is the magic line that does it:
sbuild -d trusty --chroot-setup-commands='apt-key adv --keyserver keyserver.ubuntu.com --recv-key E62E6AAB' --extra-repository "deb http://ppa.launchpad.net/zyga/launchpadlib-backports/ubuntu trusty main"
Here we have two new arguments to sbuild. First, we use --chroot-setup-commands to import the public key that launchpad uses to sign packages in my archive. Note that the key identifier is unique to my PPA (and probably to my launchpad account); you want to check the key listed on your own PPA page. The second argument, --extra-repository, just makes our PPA visible to the apt installation inside the chroot so that we can resolve all the dependencies. On more recent versions of Ubuntu you can also use the [trusted=yes] suffix, but that doesn't work on Ubuntu 14.04.
After all the uploads are done, you should wait and check that all the packages are built and published. This is clearly visible in the "package details" link on the PPA. If you see a spinning stopwatch, the package is building. If you see a green cogwheel, the package has built but is not yet published into the PPA (those are separate steps, like make and make install, kind of). When all the packages were ready I copied all the binary packages (without rebuilding) from trusty to utopic, vivid and wily so that anyone can use the PPA. The wily copy is a bit redundant, but it should let users follow the same instructions even if they don't need anything backported, without getting weird errors they might not understand.
So there you have it. The whole process took around an hour and a half (including writing this blog post). Most of the time was spent waiting on particular test builds and on the launchpad package publisher.
If you have any comments, hints or suggestions, please leave them in the comments section below. Thanks.
Friday, June 5, 2015
Tarmac-for-git status update
Hi
I sent out a quick status update for the tarmac-for-git project I'm currently working on. You can find my update in the public archive below.
https://lists.ubuntu.com/archives/checkbox-devel/2015-June/000002.html
(If you don't know what tarmac is, it's a thing that runs somewhere and merges approved bzr branches from launchpad.net)
I don't want to reiterate everything here, but here are some quick facts:
- a working non-tarmac prototype exists
- some important launchpad API is not yet in production
- some hard choices have to be made if tarmac is to evolve; it would help to know:
- if you use any tarmac hooks
- if you run post-merge tests and if so, how
- how you deploy tarmac today
PS: you can get the prototype here
Friday, April 3, 2015
Lantern Brightness Sensor
This is a quick update (Easter eats a good fraction of my time) on the Lantern project.
First of all, I've created a Trello board for tracking and planning Lantern development. I will update it with separate tasks for software, hardware and testing efforts.
I've been busy hacking on some new low-cost open source hardware that could be used for testing and debugging brightness controls in an automated way. Check out the board for details. If you are an experienced hardware designer and would like to help me out (either by mentoring me or by JFDIing it), then by all means do; I will welcome all help :-)
I'm a novice at hardware, but I'm slowly getting through the basics of taking measurements with the TSL2561 sensor (see the sketch after the list below). Once the prototype is working well enough I plan on writing a few tests that can:
- check that software control does result in a change of panel back-light brightness
- measure the effective LUX value seen for each available software setting
- automatically determine if brightness zero turns the back-light off
- maybe measure response time (difficult to do right)
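For the curious, a rough sketch of a raw reading over I²C (assuming the sensor's default address 0x39 on bus 1 and the python smbus module; the proper lux conversion from the datasheet is omitted):

import time
import smbus

TSL2561_ADDR = 0x39   # default I2C address of the sensor
bus = smbus.SMBus(1)  # I2C bus 1, typical on small ARM boards
# Power up: write 0x03 to the CONTROL register (0x0) via the command bit (0x80).
bus.write_byte_data(TSL2561_ADDR, 0x80, 0x03)
time.sleep(0.5)  # wait for at least one integration cycle (~400 ms by default)
# Read both 16-bit channels: command (0x80) | word (0x20) | data register.
broadband = bus.read_word_data(TSL2561_ADDR, 0xAC)  # visible + infrared
infrared = bus.read_word_data(TSL2561_ADDR, 0xAE)   # infrared only
print("ch0={} ch1={}".format(broadband, infrared))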
Oh, and I got http://pid.codes/1209/2000/ (see here for the backstory)
Tuesday, March 17, 2015
Analyzing Lantern Submissions
I wasn't working on Lantern much today (there are no new features yet) but I want to say thank you to everyone who has sent in the second version of the test. I realize that it's more complicated to prepare and that it includes manual components, but it is invaluable.
You can see all of the submissions in the linked repository. They are all JSON files so it should be simple to process in any language. There is no schema yet (now that I realized this I will probably write one) but it's rather simple to follow and understand what's inside.
The only part that is non-obvious is the treatment of attachments. JSON is defined as all-unicode, so to be able to preserve any binary blob we may have to store (a screenshot, some random binary or mostly-text-with-GARBAGE-in-the-middle) we just base64-encode it.
Each JSON file (example) has a few top-level elements: results, resources and attachments. Plainbox, which generates the file, can do a bit more but that's all that we need in Lantern. Looking at the result map we can see that keys are just test identifiers and values are little structures with a few bits of data. The most interesting part there is the outcome which encodes if the test passed, failed or was skipped.
There are two kinds of tests that are interesting in the v2 lantern submissions. The first looks like 2015.pl.zygoon.lantern::test/intel_backlight/software-control; note that the intel_backlight part is variable and can be acpi_video0, nv_backlight or anything else really. This test checks if software control is working through that device. The second test, which is only executed if software control works, is 2015.pl.zygoon.lantern::test/intel_backlight/brightness-0-is-visible. This test checks if setting brightness to zero actually turns the panel off.
Now I wrote this second batch of Lantern tests to check a theory:
A firmware-based brightness control device, one whose /sys/class/backlight/*/type is equal to firmware, keeps the panel dim but lit when brightness zero is requested, while a raw driver will happily turn the panel backlight off.
We now have the first essential piece of the puzzle: we know if the panel was turned off or not. The only missing bit is knowing what kind of backlight control device we had: raw, firmware or platform. This data is saved in two places. The most natural way to access it is to look at a resource. Resources are a plainbox concept that allows tests to generate structured data and keep this data inside the testing session. Plainbox uses this to probe the system and later on determine which tests to run. In Lantern we use this in a few places (1, 2). Since we also store this data, and it is structured and easy to access, we can simply look at it there. The interesting job identifier is 2015.pl.zygoon.lantern::probe/backlight_device. It can be found in the resource_map element and it is always an array. Each element is an object with fields defined by the resource job itself. Here it has the sysfs_type field, which is exactly what we wanted to know.
So how do you analyze a v2 submission? Simple:
- Load each JSON file and then
- For each backlight device listed in resource_map["2015.pl.zygoon.lantern::probe/backlight_device"]
- Memorize ["sysfs_type"] and ["name"].
- If sysfs_type is equal to "firmware" then look at result_map["2015.pl.zygoon.lantern::test/" + name + "/brightness-0-is-visible"]["outcome"] to see if it is "pass".
- If sysfs_type is equal to "raw" then look at result_map["2015.pl.zygoon.lantern::test/" + name + "/brightness-0-is-visible"]["outcome"] to see if it is "fail".
- Each device that matches point 4 or 5 confirms our theory.
I preferred to write this description rather than the actual script to familiarize everyone with the Plainbox result format. It's a simple and intuitive format to store all kinds of test results. If you want to contribute and write this analysis script just fork the code on github and start hacking. It should be simple and I will gladly mentor anyone that wants to start coding or wants to contribute something simple to the project.
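That said, to make the steps concrete, a rough sketch (using the field names described above; a starting point, not the contributed script the post asks for) might look like this:

import json
import sys

RESOURCE_ID = "2015.pl.zygoon.lantern::probe/backlight_device"
TEST_FMT = "2015.pl.zygoon.lantern::test/{}/brightness-0-is-visible"

for path in sys.argv[1:]:  # one or more submission JSON files
    with open(path) as stream:
        submission = json.load(stream)
    for device in submission["resource_map"].get(RESOURCE_ID, []):
        name, sysfs_type = device["name"], device["sysfs_type"]
        result = submission["result_map"].get(TEST_FMT.format(name))
        if result is None:
            continue  # the test did not run for this device
        outcome = result["outcome"]
        # firmware should keep the panel visible ("pass"),
        # raw should turn it off ("fail")
        confirms = ((sysfs_type == "firmware" and outcome == "pass") or
                    (sysfs_type == "raw" and outcome == "fail"))
        print("{}: {} ({}): theory {}".format(
            path, name, sysfs_type,
            "confirmed" if confirms else "not confirmed"))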
Monday, March 16, 2015
Lantern update
Hey
Lantern is progressing nicely today. I've received a number of submissions and it seems that brightness ranges are all over the place. Thanks to everyone that has contributed. This data will be very useful for analysis and the more you can send, the better.
I wanted to test a theory: that a firmware-based brightness control device, one whose /sys/class/backlight/*/type is equal to firmware, keeps the panel dim but lit when brightness zero is requested, while a raw driver will happily turn the panel backlight off. This is a common issue with many laptops. There is no consistent behavior, and users don't know what to expect when they take the brightness control all the way down.
To test that theory I've created a test provider for plainbox (which is the project I'm hacking on at work most of the time). It's a big change from a tiny script that fits on one screen to a large (it's large!) body of code feeding off a pretty big set of data files and scripts.
I did this to ensure that the solution is scalable. We can now do interactive tests, we can use i18n, we can do lots of complicated things that are useful as we expand the library of tests. Using a toolkit simply helps us along the way in ways that simple shell scripts cannot hope to match.
Currently I've added two interactive tests:
- test that checks if software brightness control works at all
- the brightness zero test I've outlined above
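If you want to poke at what these tests exercise by hand, the sysfs side looks like this (a sketch; writing brightness requires root):

import glob
import os

for device in glob.glob("/sys/class/backlight/*"):
    def read(name):
        with open(os.path.join(device, name)) as stream:
            return stream.read().strip()
    # 'type' is raw, firmware or platform per the kernel ABI documentation
    print(device, read("type"), read("brightness"), read("max_brightness"))
    # To observe the theory: write 0 to <device>/brightness (as root) and
    # check whether the panel goes fully dark or merely dims.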
So once again, I'd like to ask for your help. Look at the instructions and run the test suite. It takes about a minute on my laptop. Less if you already know all the instructions and don't have to follow along. As before, send submissions to my email address at zygmunt.krynicki<at>canonical.com. If you come across any problems please report them.
You can also contribute translations (see the po/ directory for the familiar stuff), tests and discussions.
Thanks
ZK
Is max_brightness coming from a random number generator?
First off, thank you for sending contributions to Lantern. Please keep them coming; we really need more data for meaningful statistics.
Now for the main dish: I wonder what's the cause of the seemingly random values of /sys/class/backlight/*/max_brightness as seen in the wild.
Currently I have instances of each of those:
[7, 10, 15, 89, 100, 255, 312, 494, 825, 852, 937, 976, 1808, 2632, 3828, 4437, 4438, 4882]
So one laptop has 7 steps of backlight intensity, another has 312, yet another 976, and some have 4882. What is the cause of such a wide range of values? Can the hardware be the cause? Then again, are the engineers that built this really so careful as to expose, say, 852 values instead of 825, or 100?
Given how most backlight control works from the user's point of view, apart from the smooth transitions that some Windows 8 laptops do, 25 values would be way more than enough.
If anyone has some input on what may be causing this, I'd love to hear that. I'll start working on improving the analysis script to do some correlation between the CPU type, GPU type and observed values.
Sunday, March 15, 2015
Lantern
So you've probably seen my earlier post asking for data contributions.
This quick post is a follow-up to that, to say thank you to everyone who contributed data, shared my post or replied with useful feedback.
This is just the beginning of the nascent project I've called Lantern. I have a few extra tools in the works and I will describe them properly when I'm ready. The culmination of this process will be an attempt to determine:
- If the kernel interface works
- If brightness==0 is "dim but visible" or totally off
- If brightness control via hardware keys is reflected in what the kernel sees
- If brightness control via software is confusing the firmware (when manipulated by the hardware keys)
- If X and vt* are behaving differently (most of the time that is the case)
Crowdsourcing help needed! Share your /sys/class/backlight please!
I'm working on a little mini-project that deals with back-light.
I've read the kernel documentation I could find [1], [2], [3], [4] and I poked all my systems to see what values are produced. I only have a number of Intel and Intel/NVIDIA systems; I don't have any AMD portables at home and I would really like to see how they behave.
Towards that end I wrote a little tool that collects the most common properties (type and brightness ranges) and dumps them to a tarball, along with the output of uname, lspci and a few others (nothing sensitive or personally identifiable, though). You can grab the tool from [5].
If you want to help me out please run the script and send the results to my email address at zygmunt<dot>krynicki<at>canonical.com.
Thanks!
Wednesday, February 11, 2015
Announcing Padme v1.0
Hi
I've just released Padme (named after the Star Wars character).
Padme is an implementation of a mostly transparent proxy class for Python. What is unique about it is the ability to proxy objects of any class and to selectively un-proxy any method in a subclass. Check out the documentation for more examples; I'll post a quick one here:
>>> from padme import proxy
>>> pets = ['cat', 'dog', 'fish']
>>> pets_proxy = proxy(pets)
>>> pets_proxy
['cat', 'dog', 'fish']
>>> pets_proxy.append('rooster')
>>> pets
['cat', 'dog', 'fish', 'rooster']
>>> from padme import unproxied
>>> class censor_cat(proxy):
... @unproxied
... def __repr__(self):
... return super(censor_cat, self).__repr__().replace('cat', '***')
>>> pets_proxy = censor_cat(pets)
>>> pets_proxy
['***', 'dog', 'fish', 'rooster']
At the same time, I'd like to ask interested parties to contribute and port Padme to Python 2.7, so that it can be universally useful to everyone. Please have a look at the contribution guide if you are interested.
Thursday, February 5, 2015
Checkbox Enhancement Proposal 8: Certification Status update
I've sent an update to the checkbox-dev mailing list on the progress of the implementation of CEP-8. Here's the summary if you want to follow it quickly:
- spec done
- need to finish xparsers to handle new override syntax
- need to finish TestPlanUnit class to have bulk/single update methods to apply all overrides
- need to think about how to handle multiple test plans in that configuration
- need to think about how to handle the test plan in suspend/resume code (we need to store the ones we're running so that after resume we can still apply overrides to generated jobs)
- real data for cdts missing
- real data for 14.04 prototyped, blocked by xparsers, can land after review once unblocked
Thursday, January 29, 2015
The story of a certain one liner that failed in doctests
This is a pretty long story. Unusually enough, I'll let the code speak:
""" This module is the result of an evening of frustration caused by the need to support Python 3.2 and a failing doctest that exercises, unintentionally, the behavior of the compiled regular expression object's __repr__() method. That should be something we can fix, right? Let's not get crazy here: >>> import re >>> sre_cls = type(re.compile("")) >>> sre_cls <class '_sre.SRE_Pattern'> Aha, we have a nice type. It's only got a broken __repr__ method that sucks. But this is Python, we can fix that? Right? >>> sre_cls.__repr__ = ( ... lambda self: "re.compile({!r})".format(self.pattern)) ... # doctest: +NORMALIZE_WHITESPACE Traceback (most recent call last): ... TypeError: can't set attributes of built-in/extension type '_sre.SRE_Pattern' Hmm, okay, so let's try something else: >>> class Pattern(sre_cls): ... def __repr__(self): ... return "re.compile({!r})".format(self.pattern) Traceback (most recent call last): ... TypeError: type '_sre.SRE_Pattern' is not an acceptable base type *Sigh*, denial, anger, bargaining, depression, acceptance https://twitter.com/zygoon/status/560088469192843264 The last resort, aka, the proxy approach. Let's use a bit of magic to work around the problem. This way we won't have to subclass or override anything. """ from plainbox.impl.proxy import proxy from plainbox.impl.proxy import unproxied __all__ = ["PatternProxy"] class PatternProxy(proxy): """ A proxy that overrides the __repr__() to match what Python 3.3+ providers on the internal object representing a compiled regular expression. >>> import re >>> sre_cls = type(re.compile("")) >>> pattern = PatternProxy(re.compile("profanity")) Can we have a repr() like in Python3.4 please? >>> pattern re.compile('profanity') Does it still work like a normal pattern object? >>> pattern.match("profanity") is not None True >>> pattern.match("love") is not None False **Yes** (gets another drink). """ @unproxied def __repr__(self): return "re.compile({!r})".format(self.pattern)