Sunday, August 28, 2016

OpenSCAD Rendering Tricks, Part 3: Web viewer

This is my sixth post in a series about the open source split-flap display I’ve been designing in my free time. Check out a video of the prototype.

Posts in the series:
Scripting KiCad Pcbnew exports
Automated KiCad, OpenSCAD rendering using Travis CI
Using UI automation to export KiCad schematics
OpenSCAD Rendering Tricks, Part 2: Laser Cutting
OpenSCAD Rendering Tricks, Part 3: Web viewer

One of my goals when building the split-flap display was to make sure it was easy to visualize the end product and look at the design in detail without having to download the full source or install any programs. It’s hard to get excited about a project you find online if you need to invest time and effort before you even know how it works or what it looks like. I’ve previously blogged about automatically exporting the schematics, PCB layout, and even an animated gif of the 3D model to make it easier to understand the project at a glance, but I wanted to take things a step further, with a full interactive 3D viewer right in the browser. Go ahead, click around:



(You can also find this on the split-flap project page)

The hard part about this, surprisingly, is not the 3d rendering in the browser with WebGL (I used three.js for that), but rather the process of converting an OpenSCAD model into something that three.js can render.

OpenSCAD can export an STL file of a model, which encodes the geometry of 3d shapes, but STL files don’t support any color or material properties. If we took the raw STL that OpenSCAD generates and gave it to three.js to render, we’d end up with something like this:


That cool 3d model we built doesn’t look so great when it’s monochromatic.

Are there other options for color? OpenSCAD can also export Additive Manufacturing File Format (AMF) files, which in theory can contain material/color information, but for a number of perfectly sensible reasons (discussed here, here, and here), OpenSCAD doesn’t actually include color when exporting AMF files either.

With a bit of scripting though, we can hack around this OpenSCAD limitation. Instead of exporting a single multi-color 3d model, we can export multiple STL files - one for each color in the model - and then tell three.js to render each in its appropriate color.

Color in OpenSCAD

Let’s first take a step back and look at an OpenSCAD model. Color can be applied to a component using RGB values like so:

color([0.8, 0.2, 0.1]) cube([2, 4, 8]);


Let’s suppose we have a more complicated model, with 3 distinct shapes in 2 colors:

color([0.8, 0.2, 0.1]) cube([2, 4, 8]);
color([0, 1, 0]) translate([5, 0, 0]) cube([2, 2, 2]);
color([0.8, 0.2, 0.1]) translate([0, 0, -5]) sphere(r=3, $fn=30);



In the more complicated case we’d want to export two STL files: one for the 2x4x8 box and sphere which are both the same red color, and one for the 2x2x2 cube which should be rendered in green.

Extracting colors from a .scad file

In order to automate per-color STL exports, we can start by reading the original .scad model and finding all unique colors. In the example above you could try simply searching for the color keyword and extracting the RGB value inside the parentheses, but that approach falls apart for more sophisticated OpenSCAD models like this one:

really_cool_red = [0.8, 0.2, 0.1];
color(really_cool_red) cube([2, 4, 8]);


If we did a naive search for color(<value>) we’d end up extracting the string “really_cool_red” from inside the parentheses, which is just a variable name and doesn’t actually tell us the RGB values!

It’s clear that just reading the raw source code won’t work; we need to actually run the code to evaluate expressions and variables. To do this, we can define an extremely straightforward “color extractor” module to help:

module color_extractor(c) {
    echo(extracted_color=c);
    children();
}


If we modify the .scad source by replacing instances of color( with color_extractor( [view code], then running OpenSCAD will print out the fully-evaluated color values for each usage of color_extractor:

ECHO: extracted_color = [0.8, 0.2, 0.1]
ECHO: extracted_color = [0, 1, 0]


It’s pretty easy to parse the RGB color values from that output [view code].
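As a rough sketch, parsing that output might look like the following (the function names are hypothetical, and it assumes `openscad` is available on the PATH; the real implementation is linked above):

```python
import re
import subprocess

def parse_echo_colors(openscad_output):
    """Collect the evaluated RGB values from OpenSCAD's ECHO lines."""
    colors = set()
    for match in re.finditer(r'ECHO: extracted_color = \[(.*?)\]', openscad_output):
        colors.add(tuple(float(v) for v in match.group(1).split(',')))
    return colors

def extract_colors(scad_file):
    """Run OpenSCAD on the modified source and return the unique colors.

    OpenSCAD writes ECHO statements to stderr; we render to a throwaway
    STL just to force full evaluation of the model."""
    result = subprocess.run(['openscad', '-o', 'unused.stl', scad_file],
                            capture_output=True, text=True)
    return parse_echo_colors(result.stderr)
```

Deduplicating into a set means each color is exported exactly once, even if it appears many times in the model.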

Rendering a single color at a time

The next step is to actually render STL files for each of those colors we identified.

Similar to our color_extractor approach above, we can define a “color_selector” module:

module color_selector(c) {
    if (c == <<<CURRENT COLOR>>>) {
        children();
    }
}


where <<<CURRENT COLOR>>> is some constant value.

Any child elements wrapped by a color_selector module will only be evaluated (and therefore rendered) if the color specified matches the constant <<<CURRENT COLOR>>>.

For each of the colors we identified earlier using the color_extractor we can do the following:
  • add the color_selector module definition to the .scad source, filling in the current color value for <<<CURRENT COLOR>>> [view code]
  • modify the .scad source to use color_selector( in place of color(. That is, color([1, 0, 0]) cube([2, 4, 8]) becomes color_selector([1, 0, 0]) cube([2, 4, 8]) [view code]
  • run OpenSCAD with the -o output.stl export option to generate an STL file containing only objects of the current color [view code]
A few of the single-color .scad models extracted from the splitflap design.
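The steps above can be sketched in Python like so (the helper names, the temp file path, and the naive `color(` string replacement are illustrative assumptions rather than the project's exact implementation, which is linked above):

```python
import subprocess

# Template for the module injected at the top of the modified .scad source.
COLOR_SELECTOR_TEMPLATE = '''
module color_selector(c) {
    if (c == %s) {
        children();
    }
}
'''

def make_single_color_source(scad_source, color):
    """Prepend a color_selector for `color` and swap it in for color().

    Note: this naive string replacement mirrors the color_extractor step,
    and would mis-handle identifiers that merely end in "color"."""
    scad_color = '[%s]' % ', '.join(str(c) for c in color)
    return (COLOR_SELECTOR_TEMPLATE % scad_color
            + scad_source.replace('color(', 'color_selector('))

def export_color_stl(scad_source, color, output_stl,
                     temp_scad='single_color.scad'):
    """Write the modified model, then ask OpenSCAD (assumed to be on the
    PATH) to export only the matching-color geometry as an STL."""
    with open(temp_scad, 'w') as f:
        f.write(make_single_color_source(scad_source, color))
    subprocess.check_call(['openscad', '-o', output_stl, temp_scad])
```

Running `export_color_stl` once per extracted color yields the set of single-color STL files.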

Putting it together

So far we can automatically detect all unique colors within a model and export separate STL files for each of those colors. The next step is to hook those up to three.js to render an interactive, full-color model.

Each of the STL files contains only geometry and still no color info, so while we are exporting each color we can also generate a separate “manifest” json file that maps STL file names to the RGB color they represent:

    {
        "7c82990faec1bff4c63b2c808c53957a4a8846428de5f0f648a2550d7f22a6de.stl": [
            1.0,
            1.0,
            1.0
        ],
        "ef07a1fd1a788f2e516b8b90e6ce1b52186dc315adee7321b5cc79e6c0c2805f.stl": [
            0.0,
            0.0,
            0.0
        ],
        "809a14df161a03ec1105642994c3557f705c4d143339f6e12a87c6672b0dd420.stl": [
            1.0,
            0.843,
            0.0
        ],
        "26a4ace46ae39933e6a48c7216e05c492121851603e5b7af95fbc77a7f3d4698.stl": [
            0.882,
            0.694,
            0.486
        ],
        ...
    }
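A sketch of how such a manifest might be generated (the `build_manifest` helper and the choice of hash input are assumptions for illustration; the real exporter derives the hash-based filenames from its intermediate files):

```python
import hashlib
import json

def build_manifest(stl_colors):
    """Map deterministic, hash-based STL filenames to RGB colors.

    `stl_colors` maps an identifying string for each color pass
    (hypothetical here) to an (r, g, b) tuple."""
    manifest = {}
    for identifier, color in stl_colors.items():
        name = hashlib.sha256(identifier.encode('utf-8')).hexdigest() + '.stl'
        manifest[name] = list(color)
    return manifest

def write_manifest(stl_colors, path='manifest.json'):
    with open(path, 'w') as f:
        json.dump(build_manifest(stl_colors), f, indent=4)
```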


We finally have all the components we need to render the model!

Using three.js, we can start by loading that manifest file and for each entry call a helper function, passing the STL filename and RGB color for rendering:

var loader = new THREE.XHRLoader(THREE.DefaultLoadingManager);
loader.load('manifest.json', function(text) {
    var json = JSON.parse(text);
    for (var stl in json) {
        var color = json[stl];
        loadStl(stl, new THREE.Color( color[0], color[1], color[2] ).getHex());
    }
});


The loadStl helper method needs to do a few things:
  • request and parse the STL file to get a THREE.Geometry for the shape (we use THREE.STLLoader() to do all the STL heavy lifting)
  • create a THREE.MeshPhongMaterial based on the RGB color from the manifest, to describe the object’s appearance
  • combine the Geometry and MeshPhongMaterial into a THREE.Mesh to represent the completed shape including color/texture
  • add the Mesh to the three.js Scene we’re building

var loadStl = function(url, color) {
    var loader = new THREE.STLLoader();
    loader.load(url, function(geometry) {
        var material = new THREE.MeshPhongMaterial({
            color: color,
            specular: 0x111111,
            shininess: 10
        });
        var mesh = new THREE.Mesh(geometry, material);
        mesh.castShadow = true;
        mesh.receiveShadow = true;
        scene.add(mesh);
    });
};


The rest of the viewer is a pretty standard three.js setup:
  • Set up a PerspectiveCamera [view code]
  • Configure a WebGLRenderer to render the scene to an html canvas element [view code]
  • Create a ground plane Mesh using a PlaneBufferGeometry and specify plane.receiveShadow = true so the model casts a shadow onto the ground [view code]
  • Add a HemisphereLight for general illumination [view code]
  • Add a few DirectionalLights to highlight the model and cast shadows [view code]
  • Create a Fog so that the ground plane fades away in the distance [view code]
  • Add OrbitControls so you can use the mouse/keyboard to move the camera around the model (see also Advanced Topics 3 below) [view code]

Conclusion

That’s about it! You can find a complete implementation of the OpenSCAD color exporter in the git repo, including the advanced topics discussed below. The three.js viewer javascript that powers https://scottbez1.github.io/splitflap and the interactive example above can be found in the /docs folder.

For the interactive example above, the STL and manifest are automatically generated by Travis and hosted on S3, based on the most recent code in the repo.

If you have any questions, reach out to me on Twitter or leave a comment below!

Advanced Topics 1: Walking a .scad dependency tree

The steps above described how to extract unique colors from a single .scad file, but more complex models often depend on components in multiple separate files (using a use<filename.scad> or include<filename.scad> statement to incorporate them).

In order to handle dependencies, we can do a basic breadth-first search (BFS) over .scad file nodes where the use and include statements define the edges to traverse [view code].
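Sketched in Python (the regex and helper name here are assumptions; the real implementation is linked above):

```python
import os
import re
from collections import deque

# Matches both `use <file.scad>` and `include <file.scad>`, with or
# without whitespace before the angle bracket.
INCLUDE_RE = re.compile(r'\b(?:use|include)\s*<([^>]+)>')

def find_scad_dependencies(entry_file):
    """BFS over the use<>/include<> graph; returns all reachable .scad files."""
    visited = set()
    queue = deque([os.path.realpath(entry_file)])
    while queue:
        path = queue.popleft()
        if path in visited:
            continue
        visited.add(path)
        with open(path) as f:
            contents = f.read()
        base = os.path.dirname(path)
        for dep in INCLUDE_RE.findall(contents):
            # Relative paths are resolved against the including file.
            queue.append(os.path.realpath(os.path.join(base, dep)))
    return visited
```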

Advanced Topics 2: Keeping it clean

In the descriptions above, it was necessary to modify the .scad files in order to extract color information and export STL files, but it would be pretty bad etiquette to modify original source files from a script. Even if we made the script restore the files upon completion, you can still run into problems: what if you kill the script halfway through, before the restoration code runs?

The solution to this is to operate on a copy of the model rather than modifying the original. We can do this during the BFS by copying the contents of each .scad file we visit to an intermediate location for further processing [view code].

Of course, since each .scad file we visit might have dependencies, we have to remember to update any include<> statements to correctly point to the copied file paths rather than the originals [view code].

To keep things simple, we name every copied intermediate file after a hash of its original file’s absolute path, preserving the extension. This gives an easy, deterministic filename that won’t contain any special characters:

import hashlib
import os

def get_transformed_file_path(original_path):
    extension = os.path.splitext(original_path)[1]
    path_hash = hashlib.sha256(
        os.path.realpath(original_path).encode('utf-8')).hexdigest()
    return path_hash + extension


[view code]

Using a hash of the full file path also effectively flattens the directory structure of files in the original model, which makes things easier to track and contain. For instance, some/referenced/file/in/some/subfolder/abc.scad becomes simply db8a65dd2f401b9bafc598c5693323f61c14ca5bdcec18a6d401524a99eaf6bf.scad. This works even if the original model used relative paths (include<../../foobar.scad>) or absolute paths (include</home/scott/model.scad>).

All of this .scad file copying and manipulation is wrapped up in a walk_and_mutate_scad_files helper, which takes a function that can mutate file content as it is copied to the intermediate output folder. [view code]

One additional advantage of copying files to another directory when modifying them is that it makes parallelization possible. Exporting each color requires slightly different .scad files each time; by copying the modified source files to distinct folders it’s possible to run multiple instances of OpenSCAD in parallel using a python Pool to export them faster [view code]
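A minimal sketch of that bounded parallelism (the `run_parallel` helper, `export_stl`, and the intermediate file paths are hypothetical; it assumes `openscad` is on the PATH):

```python
import subprocess
from multiprocessing.dummy import Pool

def export_stl(job):
    """Render one color from its own copy of the mutated sources, so the
    parallel OpenSCAD processes don't interfere with each other."""
    scad_file, output_stl = job
    subprocess.check_call(['openscad', '-o', output_stl, scad_file])

def run_parallel(func, jobs, processes=None):
    """Run `func` over `jobs` on a bounded thread pool. Threads suffice
    here because each task spends its time in a child OpenSCAD process."""
    pool = Pool(processes)  # defaults to one worker per CPU
    try:
        for _ in pool.imap_unordered(func, jobs):
            pass  # consume results so any task exception is re-raised promptly
    finally:
        pool.close()
        pool.join()

# e.g. run_parallel(export_stl, [('intermediate/red/model.scad', 'red.stl'),
#                                ('intermediate/green/model.scad', 'green.stl')])
```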

Advanced Topics 3: Bounded three.js OrbitControls

The standard three.js OrbitControls let you click to move the camera around the 3d scene. In our scene, however, there is a solid ground plane, so it doesn’t make sense to allow the camera to move below the ground.

OrbitControls offers a way to restrict the angle of the camera, with a minPolarAngle and maxPolarAngle option. Setting maxPolarAngle to PI/2 is close to the behavior we want - it only allows the camera to occupy the upper hemisphere of the scene - but isn’t quite right. In our scene, the camera is looking at a point roughly halfway up the model, so a maxPolarAngle of PI/2 would mean you could only look at the model straight on or from above; it would be impossible to look upward at the model like this:
This angle looking upward wouldn’t be possible if we restricted the maxPolarAngle to PI/2

Instead of restricting the camera angle, we want to restrict the camera’s location so it can never move below the ground plane. The standard OrbitControls doesn't support this, so we can add a customizable predicate function, validUpdate, to OrbitControls which determines whether or not the updated camera pose is allowed as the camera is moved. During the OrbitControls update, we can check if the new pose is valid, and if not, reset it to the previous pose [view code].

If we want to prevent the camera from moving below the ground plane, we can define a simple validUpdate predicate:

controls.validUpdate = function(position, quaternion, target) {
    // Don't allow camera to go below ground
    return position.y > 0;
};


Sunday, May 8, 2016

OpenSCAD Rendering Tricks, Part 2: Laser Cutting

This is my fifth post in a series about the open source split-flap display I’ve been designing in my free time. Check out a video of the prototype.

Posts in the series:
Scripting KiCad Pcbnew exports
Automated KiCad, OpenSCAD rendering using Travis CI
Using UI automation to export KiCad schematics
OpenSCAD Rendering Tricks, Part 2: Laser Cutting
OpenSCAD Rendering Tricks, Part 3: Web viewer

In addition to creating a nice animated rendering, I wanted to make sure I could consistently export the final vector design to be laser cut. There were three main challenges to this:
  1. Layout - All of the pieces that make up the 3D design need to be laid out flat so they can be cut out of a single sheet of wood.
  2. Kerf - When laser cutting, the beam burns away material, leaving a gap where cuts were made (referred to as kerf). This means that shapes will all be slightly smaller than desired if cut exactly to dimension, so the dimensions need to be adjusted to compensate.
  3. Generating output - Laser cutters typically operate using a vector image such as SVG, and expect a strict set of encoded properties, e.g. cut lines in blue, vector engraving in black, etc, so we need to transform OpenSCAD’s SVG output to conform.


Layout

For a little background: in the 3d model, I designed each distinct piece (e.g. gear, front enclosure face, etc.) as a planar shape (to be cut out of thin MDF wood board) lying flat on the XY plane. Here’s a simple example:

thickness = 4;
module a() {
    color("red") {
        linear_extrude(thickness, center=true) {
            difference() {
                square([40,80]);
                translate([10, 10]) {
                    square([20, 60]);
                }
            }
        }
    }
}

module b() {
    linear_extrude(thickness, center=true) {
        difference() {
            square([40, 40]);
            translate([20, 20]) {
                circle(r=15);
            }
        }
    }
}





Because each piece is a separate module, they can be moved and rotated (using the translate and rotate operators) to be assembled into a 3d model, or laid out flat next to each other in the plane for laser cutting:

module 3d() {
    translate([-2,0,0])
        rotate([0,-90,0])
            a();
    translate([0, 82, 0])
        rotate([90, 0, 0])
            b();
}

module flat() {
    projection() {
        a();
        translate([0, 90, 0]) {
            b();
        }
    }
}




The splitflap design uses this technique to reuse the same components in the 3d model and 2d flattened layout. The only thing you have to remember is to include all the pieces from the 3d model into the flattened module as well!


Kerf

While laser cutters enable small, intricate designs, it’s important to remember that just like a table saw blade, the laser beam doing the cutting is not infinitesimally small. This means that if the center of the laser follows the edges/lines of your design exactly, you will actually lose a small amount of material on either side of that line. This is referred to as “kerf,” which has a width that varies depending on the laser cutter, power/speed settings, and material being cut.

To illustrate, here’s an exaggerated example: you can see the desired design on the left, and in the middle I’ve superimposed a particularly wide “laser beam” path in blue as if the center of the laser followed the contours of the design to cut it out.



Notice how much less of the teal part is exposed in the middle image? On the right, you can see the material that would be left if a wood panel was cut using the blue “laser beam” path — the shape that we wanted came out way too small and thin!

To correct for this kerf, we need to adjust the design so that all edges are shifted outward by half the laser beam width. This can be done by applying the offset operator:

offset(delta=kerf/2) {
    projection() {
        a();
    }
}


Note that before the offset is applied a projection() is used, which flattens a 3d shape by removing the Z-axis. This is necessary because the offset operator only works on 2d geometry.

Below you can see the design after applying the kerf-adjustment offset on the left (it’s fatter and the hole is smaller than the original), along with an updated “laser beam” overlay in the middle image that follows those adjusted edges. If you look at what material would remain after cutting, in the rightmost image, you can see that the remaining shape is actually the size that we wanted from our original design (compare it to the original in the left image above)!



On a real design, the impact of kerf won’t be quite so visually obvious as in this example (it’s something small like 0.2mm for the wood I used), but that small difference can be pretty important if you want a clean, tight fit.

Generating output

The last piece of the puzzle is taking the flattened 3d design that’s been kerf-corrected and shipping it off to be laser cut. I ordered my laser cut parts from Ponoko, which provides a template SVG file and expects certain image properties for different types of laser cuts:




One common technique to save money when laser cutting is to make multiple pieces share a common cut line since you’re generally charged for the total length of all cuts.

This presents a problem though if you use a simple export of a single SVG image — sometimes OpenSCAD will merge shapes if their edges perfectly overlap:

The bottom piece is actually two separate components that accidentally got merged together!


Another issue with exporting the entire design as a single SVG is that overlapping components can’t be kept separate once flattened into 2d shapes. In the splitflap design, the text to be engraved is aligned directly on top of the bottom panel:




But when flattened into a 2d shape, the overlapping text is merged into the bottom panel shape, and since the bottom panel is larger than the engraved text, the text is lost completely in the exported design.

With a bit of scripting it’s not too difficult to export each component to its own SVG before merging them to avoid both of these problems. To start with, we can create a wrapper module that lets us render a single child element at a time (and we can also use this to apply the kerf correction discussed above):

module projection_renderer(render_index = 0, kerf_width = 0) {
    echo(num_components=$children);
    offset(delta=kerf_width/2) {
        projection() {
            // Only include a single child, the one at index "render_index"
            children(render_index);
        }
    }
}



To use it, we just wrap the list of laid out elements with it:

render_index = 0;
projection_renderer(render_index=render_index, kerf_width=0.1) {
    a();
    translate([0, 90, 0]) {
        b();
    }
}


Then from a python script, we can first run OpenSCAD to identify the number of individual components to render (determined by looking for the output of the echo(num_components=$children) statement from the projection_renderer), and then invoke OpenSCAD that many times, using the -D render_index=<value> command line argument to increment the render_index variable each time.
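A rough sketch of that two-pass flow (the helper names and the SVG output filenames are hypothetical; it assumes `openscad` is on the PATH):

```python
import re
import subprocess

def parse_num_components(openscad_output):
    """Read the value printed by echo(num_components=$children)."""
    match = re.search(r'ECHO: num_components = (\d+)', openscad_output)
    return int(match.group(1))

def export_all_components(scad_file):
    """First pass counts the children; then run OpenSCAD once per
    component, selecting it via -D render_index=<value>."""
    result = subprocess.run(['openscad', '-o', 'ignored.svg', scad_file],
                            capture_output=True, text=True)
    num_components = parse_num_components(result.stderr)
    for i in range(num_components):
        subprocess.check_call([
            'openscad', '-D', 'render_index=%d' % i,
            '-o', 'component_%02d.svg' % i, scad_file])
```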


Once all the components have been exported as separate SVGs, it’s easy to combine the <path> elements from each SVG into a single file.
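A simplified sketch of that merge step using Python’s standard `xml.etree` (the real `svg_processor.py` does more, such as rewriting stroke/fill styles per component):

```python
import xml.etree.ElementTree as ET

SVG_NS = 'http://www.w3.org/2000/svg'

def merge_svg_paths(svg_files, output_file):
    """Copy every <path> from each per-component SVG into the first
    file's document and write the combined result."""
    ET.register_namespace('', SVG_NS)  # avoid ns0: prefixes in the output
    combined = ET.parse(svg_files[0])
    root = combined.getroot()
    for other in svg_files[1:]:
        for path in ET.parse(other).getroot().iter('{%s}path' % SVG_NS):
            root.append(path)
    combined.write(output_file)
```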

There are a few other tricks I used so that the python script can distinguish between components that should be cut out vs. engraved and apply the appropriate stroke and fill styles in the final SVG.

You can find those tricks and more details in the source code:
/3d/generate_2d.py
/3d/projection_renderer.scad
/3d/projection_renderer.py
/3d/svg_processor.py
/3d/openscad.py

In a past blog post, I discussed how I run this script using Travis CI to automatically render the flattened 2d design (shown at the top of this post) and more every time the source code changes. You should check it out if you haven’t already: Automated KiCad, OpenSCAD rendering using Travis CI.

OpenSCAD Rendering Tricks, Part 1: Animated GIF

This is my fourth post in a series about the open source split-flap display I’ve been designing in my free time. Check out a video of the prototype.

Posts in the series:
Scripting KiCad Pcbnew exports
Automated KiCad, OpenSCAD rendering using Travis CI
Using UI automation to export KiCad schematics
OpenSCAD Rendering Tricks, Part 1: Animated GIF

Early when designing the split flap 3D model using OpenSCAD I wanted to include a visualization in the project’s README so others could see what it looked like. It’s possible to capture an image manually (File→Export→Export as Image), but that’s an extra thing to remember to do after every change and it’s also not very consistent. The image that’s exported is basically a snapshot of the current preview window, so the image dimensions and camera angle would be different each time. Plus, a single static image doesn’t fully convey the 3D model, so I wanted something more dynamic.

The final product: a 360° animation that cycles through three views of the model.

I was inspired by Bryan Duxbury’s blog post on creating an animated gif from an OpenSCAD model. He used OpenSCAD’s built-in animation feature, which lets you parameterize your model using a special animation time variable, $t. To make a spinning animation, you can just wrap your model in a rotate transformation proportional to $t. This works well, but still requires some manual export steps from the GUI.

To fully automate this, I used OpenSCAD’s command-line interface which lets you specify options like --imgsize=width,height and --camera=translatex,y,z,rotx,y,z,dist to control the exported image. This makes it easy to write a script that exports snapshots from 360 degrees:

num_frames = 50
start_angle = 135
for i in range(num_frames):
    angle = start_angle + (i * 360 / num_frames)
    openscad.run(
        'splitflap.scad',
        'frame_%05d.png' % i,
        output_size = [320, 240],
        camera_translation = [0, 0, 0],
        camera_rotation = [60, 0, angle],
        camera_distance = 600,
    )


(This uses a simple Python wrapper to invoke OpenSCAD’s command line interface)

In addition to a simple rotation, I wanted to showcase different parts of the model in the animation. At the top of splitflap.scad, I defined a few variables that control the visibility/opacity of the enclosure and flaps (this was also useful while designing the model):

render_enclosure = 1; // 2=opaque color; 1=translucent; 0=invisible
render_flaps = true;


Then from a script, I can invoke OpenSCAD using arguments like -D render_enclosure=0 -D render_flaps=false which override the variable definitions in the file. I use this so that over the course of three animated revolutions you can see all the different parts of the design.
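As an illustrative sketch, the three passes could be driven like this (the exact variable combinations per revolution are assumptions; it assumes `openscad` is on the PATH):

```python
import subprocess

# One entry per revolution of the animation; hypothetical combinations of
# the overridable variables defined at the top of splitflap.scad.
VIEWS = [
    {'render_enclosure': '2', 'render_flaps': 'true'},   # opaque enclosure
    {'render_enclosure': '1', 'render_flaps': 'true'},   # translucent
    {'render_enclosure': '0', 'render_flaps': 'false'},  # mechanism only
]

def build_view_args(variables):
    """Turn a dict of variable overrides into OpenSCAD -D arguments."""
    args = []
    for name, value in sorted(variables.items()):
        args += ['-D', '%s=%s' % (name, value)]
    return args

def render_view(view_index, output_png):
    cmd = (['openscad'] + build_view_args(VIEWS[view_index])
           + ['-o', output_png, 'splitflap.scad'])
    subprocess.check_call(cmd)
```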

Three different views of the model by changing the render_enclosure and render_flaps variables.

Unfortunately, because OpenSCAD is invoked once per frame, the 3D model’s geometry must be recompiled for every camera angle rendered, which takes a nontrivial amount of time. With a desired 50 frames per revolution * 3 rendering options, that’s 150 total invocations of OpenSCAD! As far as I can tell there’s no easy way around this, but we can still speed it up by using multiple cores.

Using a threadpool (multiprocessing.dummy.Pool in Python) we can enqueue each of the OpenSCAD frame-rendering tasks to be run in parallel across a specified number of workers. Since each OpenSCAD process uses up to a single core, we can choose a pool size to match the number of cores available.

from multiprocessing.dummy import Pool
num_frames = 50
start_angle = 135
def render_frame(i):
    angle = start_angle + (i * 360 / num_frames)
    openscad.run(
        'splitflap.scad',
        'frame_%05d.png' % i,
        output_size = [320, 240],
        camera_translation = [0, 0, 0],
        camera_rotation = [60, 0, angle],
        camera_distance = 600,
    )
pool = Pool() # By default, Pool uses one thread per available CPU
for _ in pool.imap_unordered(render_frame, range(num_frames)):
    # Consume results as they occur so any task exceptions are rethrown asap
    pass
pool.close()
pool.join()


As a minor aside, it’s not really necessary to use separate threads, since each task is already launching a separate subprocess, but a threadpool provides a convenient abstraction for bounded parallel execution.

On my machine, rendering with a 4-thread Pool reduced the rendering time from 6 minutes 41 seconds down to just under 3 minutes!

The last step is to put all those frames together as an animated gif, which is fairly straightforward using ImageMagick:
convert 'frame_*.png' -set delay 1x15 animation.gif

The full script implementation can be found in the following files:
/3d/generate_gif.py
/3d/openscad.py

In a past blog post, I discussed how I run this script using Travis CI to automatically re-render the 3d animation every time I make a change to the source code. You should check it out if you haven’t already: Automated KiCad, OpenSCAD rendering using Travis CI.

Thanks for reading! In part 2 I’ll cover some more OpenSCAD tricks with similar command line scripting techniques to easily export a design for laser cutting.

Friday, April 22, 2016

Using UI automation to export KiCad schematics

This is my third post in a series about the open source split-flap display I’ve been designing in my free time. I’ll hopefully write a bit more about the overall design process in the future, but for now wanted to start with some fairly technical posts about build automation on that project.

Posts in the series:
Scripting KiCad Pcbnew exports
Automated KiCad, OpenSCAD rendering using Travis CI
Using UI automation to export KiCad schematics

Since I’ve been designing the split-flap display as an open source project, I wanted to make sure that all of the different components were easily accessible and visible for someone new or just browsing the project. Today’s post continues the series on automatically rendering images to include in the project’s README, but this time we go beyond simple programmatic bindings to get what we want: the schematic!

"Wow, I bet someone had to manually click through the GUI to
export such a beautiful schematic!" Nope.

Unfortunately, KiCad’s schematic editor, Eeschema, doesn’t have nice Python bindings like its pcb-editing cousin Pcbnew (and probably won’t for quite some time). And there aren’t really any command line arguments to do this either. So we turn to the last resort: UI automation. That is, simulating interaction with the graphical user interface.

There are two main issues with automating the graphical user interface: the build system (Travis CI) is running on a headless machine with no display, and the script needs to somehow know where to click on screen.

As I mentioned in my last post, we can use X Virtual Framebuffer (Xvfb), which acts as a virtual display server, to solve the first problem. As long as Xvfb is running, we can launch Eeschema even when there’s no physical screen. This time, instead of using `xvfb-run` from a Bash script, I decided to use the xvfbwrapper Python library for additional flexibility. xvfbwrapper provides a Python context manager so you can easily run an Xvfb server while some other code executes.

from xvfbwrapper import Xvfb
with Xvfb(width=800, height=600, colordepth=24):
    # Everything within this block now has access
    # to an 800x600 24-bit virtual display
    do_something_that_needs_a_display()


So how do we actually script and automate interactions with the GUI, such as opening menus, typing text, and clicking buttons? I looked into a number of different approaches, such as Sikuli, which allows you to write high level “visual code” using screenshots and image matching, or Java’s Robot class which lets you program the mouse and keyboard using Java, but the easiest option I found by far was the command-line program xdotool.

With xdotool, you can easily probe and interact with the window system from the command line. For instance, you can output a list of all named windows by running:
xdotool search --name '.+' getwindowname %@

(This is an example of a chained command: the first part (search --name '.+') finds all windows whose name matches the regular expression ‘.+’ (any non-empty string) and places those window ids onto a stack. The second part runs the command getwindowname, with the argument %@ meaning “all window ids currently on the stack.”)

Going back to Eeschema, the option we want to automate (exporting the schematic) lives under the File → Plot → Plot menu. The trick to automating this is not to use the mouse to click (since then we’d need to know the coordinates on screen) but instead use keyboard shortcuts. Opening that menu from the keyboard just requires pressing “Alt+F” then “P” then “P”, which we can automate like this:

# First find and then focus the Eeschema window
xdotool search --onlyvisible --class eeschema windowfocus
# Send keystrokes to navigate the menus
xdotool key alt+f p p



We can similarly write commands to fill out the correct information in the “Plot Schematic” dialog once it opens. To change radio button selections, we can “Tab” numerous times to move focus through the various options. This is a bit fragile, since it relies on there being a stable set of options in the same order to work (and might break if KiCad were to add a new Page Size option, for instance), but is about the best we can do without using more complex UI automation tools.


To make it easier to debug what’s happening in the X virtual display, we can use a screen-recording tool like recordmydesktop to save a screencast of the graphical automation. This is particularly helpful when running on Travis where you can’t actually see what’s going on as the script runs.

Since we’re writing in Python, we can use some syntactic sugar with Python context managers to make it really easy to wrap a section of code with Xvfb and video recording. As a first step, we’ll need a context manager for running a subprocess:

import subprocess

class PopenContext(subprocess.Popen):
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        # Close any pipes connected to the subprocess
        if self.stdout:
            self.stdout.close()
        if self.stderr:
            self.stderr.close()
        if self.stdin:
            self.stdin.close()
        # If the 'with' block raised an exception, kill the subprocess
        if type:
            self.terminate()
        # Block until the process has exited
        self.wait()


and then we can create a macro that combines an Xvfb context and a recordmydesktop subprocess context into a single context manager:

from contextlib import contextmanager

# The Xvfb context manager comes from the xvfbwrapper package
from xvfbwrapper import Xvfb

@contextmanager
def recorded_xvfb(video_filename, **xvfb_args):
    # Start a virtual X display, then start recordmydesktop inside it
    with Xvfb(**xvfb_args):
        with PopenContext([
                'recordmydesktop',
                '--no-sound',
                '--no-frame',
                '--on-the-fly-encoding',
                '-o', video_filename], close_fds=True) as screencast_proc:
            yield
            # Stop the recording once the caller's block finishes; without
            # this, PopenContext would wait forever on recordmydesktop
            screencast_proc.terminate()



You can use that macro like so:
with recorded_xvfb('output_video.ogv', width=800, height=600, colordepth=24):
    # This code runs with an Xvfb display available
    # and is recorded to output_video.ogv
    do_something_that_needs_a_display()

# Once the 'with' block exits, the X virtual display is
# no longer available, and the recording has stopped
run_non_recorded_things()



So, putting all of those elements together, we can use Xvfb to host the Eeschema GUI (even on a headless build machine), run recordmydesktop to save a video screencast to help understand and debug the visual interactions, and use xdotool to simulate key presses to click through Eeschema’s menus and dialogs. The code looks roughly like this:

with recorded_xvfb('output.ogv', width=800, height=600, colordepth=24):
    with PopenContext(['eeschema', 'splitflap.sch']) as eeschema_proc:
        wait_for_window('eeschema', ['--onlyvisible', '--class', 'eeschema'])
        # Focus main eeschema window
        xdotool(['search', '--onlyvisible', '--class', 'eeschema', 'windowfocus'])
        # Open File->Plot->Plot
        xdotool(['key', 'alt+f', 'p', 'p'])
        wait_for_window('plot', ['--name', 'Plot'])
        xdotool(['search', '--name', 'Plot', 'windowfocus'])

        [...]

        eeschema_proc.terminate()



This is what one of those recordings looks like:


You can find the full scripts in the github repo, particularly in these two files:
/electronics/scripts/export_util.py
/electronics/scripts/export_schematic.py

I also used a similar technique to export the component list .xml file (Tools → Generate Bill of Materials) which is then transformed into a .csv bill of materials:
/electronics/scripts/export_bom.py

Hopefully this was a useful overview of how I used UI automation to export schematics from KiCad. If you have questions, leave a comment here or open an issue on GitHub and I’ll try to respond. In my next post in this series I’ll switch gears a bit and talk about how I programmatically generate the OpenSCAD 3d animation you see at the top of the project’s README!

Sunday, April 17, 2016

Automated KiCad, OpenSCAD rendering using Travis CI

This is my second post in a series about the open source split-flap display I’ve been designing in my free time. I’ll hopefully write a bit more about the overall design process in the future, but for now wanted to start with some fairly technical posts about build automation on that project.

Posts in the series:
Scripting KiCad Pcbnew exports
Automated KiCad, OpenSCAD rendering using Travis CI
Using UI automation to export KiCad schematics

In my last post, I discussed how I scripted the export of 2d renderings of the custom PCB. In this post, I’ll cover how I hooked up that script and others to run automatically on every commit using Travis CI, with automated deployments to S3 to keep all the renderings in the README updated, like this one:
I'll talk about this particular animated OpenSCAD rendering in a future blog post

Why Travis?

Travis CI is a continuous build and test system, with GitHub integration and a free tier for open source projects. If you’ve ever seen one of these badges in a GitHub README, it’s probably using Travis:

That's the current build status, hopefully it's green!
The best thing about Travis, though, is that unlike many build systems (such as Jenkins or Buildbot), nearly the entire build configuration lives directly inside the repo itself (in a .travis.yml file). This has a few major advantages:

Reproducible (or at least reasonably well defined) build environment
Each Travis build starts off as a clean slate, and you’re responsible for defining and installing any extra dependencies on the machine yourself through code. This way you always end up with clearly documented dependencies, and that documentation can never go stale!

Enables different build/test configurations on each branch
One big problem with keeping your code separate from the build configuration (as is often the case with tools like Jenkins/Buildbot) is that the two need to stay in sync. Typically this is not a huge problem for slow, linear development, since occasional lock-step updates across repo and build system aren’t too painful.

The issues start when you have faster development with frequently changing build configurations or parallel development across branches. Now not only do you have to keep your build configuration in sync with changes in the source repo, but you also have to make it branch-aware and keep each branch’s build config in sync with the branches in the source repo! Travis avoids all of this because the .travis.yml file is naturally versioned alongside the source it’s building, and therefore just works in branches with no extra effort!

Build configuration changes can be tested!
Related to the previous point — since the .travis.yml file is checked in and versioned with the source code, changes to the source code that e.g. require new packages to be installed in the build environment can actually be fully tested as part of a feature branch or pull request before landing in `master`.

Travis with KiCad and OpenSCAD

The first step to automating my build was to install the right packages. The basic .travis.yml config looks like this:

    dist: trusty
    sudo: true
    language: generic
    install:
    - ./3d/scripts/dependencies.sh
    - ./electronics/scripts/dependencies.sh


Both KiCad (schematic/PCB software) and OpenSCAD (3d CAD software) are under fairly active development, and their packages in the Ubuntu 14.04 repositories are woefully out of date, so I use snapshot PPAs to install more modern versions of each (this necessitates `sudo: true` above, which allows running `add-apt-repository` with sudo).

Each of the install scripts referenced above is pretty straightforward and looks roughly like this:

    #!/bin/bash
    set -ev
 
    sudo add-apt-repository --yes ppa:js-reynaud/kicad-4
    sudo apt-get update -qq
    sudo DEBIAN_FRONTEND=noninteractive apt-get install -y kicad inkscape imagemagick


The .travis.yml configuration for actually running the PCB export script and OpenSCAD rendering scripts as the main build steps is likewise pretty simple:

    # [... other stuff above ...]
    script:
    - (cd electronics && python -u generate_svg.py)
    - (cd 3d && xvfb-run --auto-servernum --server-args "-screen 0 1024x768x24" python -u generate_2d.py)
    - (cd 3d && xvfb-run --auto-servernum --server-args "-screen 0 1024x768x24" python -u generate_gif.py)


The only interesting part of that is the use of `xvfb-run`. Getting OpenSCAD exports working is slightly trickier than KiCad, since even OpenSCAD’s command-line interface requires a graphical environment to render images. The trick to make this work on a headless build machine is to use X virtual framebuffer (Xvfb), which lets you run a standalone X server detached from an actual display. So in the config above, I use the `xvfb-run` utility, which starts an Xvfb server, sets up the DISPLAY environment, runs the specified command, and then shuts everything down when the command completes; easy! (I’ll discuss the actual `generate_2d.py` and `generate_gif.py` script implementations in a future post)
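As a rough sketch of what a script like `generate_2d.py` might do under that virtual display (the flags `-o`, `--imgsize`, and `-D` are real OpenSCAD options, but the file names and the variable override here are placeholders):

```python
import subprocess

def openscad_render_cmd(scad_file, output_png, width=1024, height=768, overrides=None):
    # Build an OpenSCAD command line that renders a model straight to a PNG
    cmd = ['openscad', '-o', output_png, '--imgsize=%d,%d' % (width, height)]
    for name, value in (overrides or {}).items():
        cmd += ['-D', '%s=%s' % (name, value)]  # override model variables from the CLI
    cmd.append(scad_file)
    return cmd

# On a headless machine, the whole script is run under xvfb-run, e.g.:
#   xvfb-run --auto-servernum --server-args "-screen 0 1024x768x24" \
#       python generate_2d.py
# where the script would end with something like:
#   subprocess.check_call(openscad_render_cmd('splitflap.scad', 'render.png'))
```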

From Travis to the README

Now that we’ve got Travis set up installing KiCad and OpenSCAD and exporting images from each on every commit, the next step is to actually get those renderings off the build machine and somewhere useful. To do that, I use Travis’s deploy tool to upload those build artifacts to S3.

The configuration is again pretty simple. Here’s what it takes to upload the entire “deploy” directory on the build machine to a publicly-readable directory named “latest” in my “splitflap-travis” S3 bucket:

    # [... other stuff above ...]
    deploy:
      provider: s3
      access_key_id: AKIAJY6VAINVQICEC47Q
      secret_access_key:
        secure: SYHsDA3WZfV6YlZ... [truncated for your viewing pleasure]
      bucket: splitflap-travis
      local-dir: deploy
      upload-dir: latest
      skip_cleanup: true
      acl: public_read
      cache_control: no-cache
      on:
        repo: scottbez1/splitflap
        branch: master


Since the .travis.yml file is checked into the repo and public, putting your actual S3 credentials inside would be silly! But Travis allows you to encrypt your credentials (using the `travis encrypt` CLI) with a secret that only their build machines know, so everything’s nice and secure despite being public.

This lets me embed the latest 2d laser-cut rendering in the README by referencing https://s3.amazonaws.com/splitflap-travis/latest/3d_laser_raster.png. Here’s what the current rendering looks like, by the way:



One thing you may notice is the black bar at the bottom with the date and commit hash. I added that because Github’s image proxy caches extremely aggressively and I originally didn’t include the `cache_control: no-cache` line in my deployment config, so I needed some way to debug. It was pretty easy to add using ImageMagick, and now I can easily tell that the images in my README are showing the latest designs correctly:


    #!/bin/bash
    set -e
    LABEL="`date --rfc-3339=seconds`\n`git rev-parse --short HEAD`"
    convert -background black -fill white -pointsize 12 label:"$LABEL" -bordercolor black -border 3 input_image.png +swap -append output_image.png

(slight adaptation from the full script: annotate_image.sh)

If you do find yourself stuck with cached images on Github, you can manually evict them from the cache using an http PURGE request to the image url:
`$ curl -X PURGE https://camo.githubusercontent.com/xxxxxxxxxxxxx`

If you want to poke around the actual Travis configuration I’ve discussed above, here are some links to the real files:
/travis.yml
/3d/scripts/dependencies.sh
/electronics/scripts/dependencies.sh
/scripts/annotate_image.sh

In my next post I’ll cover how I used `Xvfb`, `xdotool`, and `recordmydesktop` to automatically export the KiCad schematic and bill of materials, which are only exposed through the GUI!

Saturday, April 16, 2016

Scripting KiCad Pcbnew exports

For the past few months I’ve been designing an open source split-flap display in my free time — the kind of retro electromechanical display that used to be in airports and train stations before LEDs and LCDs took over, and that makes that distinctive “tick tick tick tick” sound as the letters and numbers flip into place.

I designed the electronics in KiCad, and one of the things I wanted to do was include a nice picture of the current state of the custom PCB design in the project’s README file. Of course, I could generate a snapshot of the PCB manually whenever I made a change by using the “File→Export SVG file” menu option and then check that image into my git repo…


…but that gets tedious, is prone to human error, pollutes the git history with a bunch of old binary files, and isn’t very customizable.

For instance, the manual SVG export uses opaque colors which make it hard to see features that overlap, as well as using two different colors for items on the same layer (yellow and teal are both part of the front silkscreen layer below):

Functional rendering, but not exactly what I wanted.
Luckily, Pcbnew has built-in Python bindings which make it pretty straightforward to invoke certain features from standalone Python scripts. As a simple example, here’s how to plot a single layer to an SVG:

import pcbnew

# Load board and initialize plot controller
board = pcbnew.LoadBoard("splitflap.kicad_pcb")
pc = pcbnew.PLOT_CONTROLLER(board)
po = pc.GetPlotOptions()
po.SetPlotFrameRef(False)

# Set current layer
pc.SetLayer(pcbnew.F_Cu)

# Plot single layer to file
pc.OpenPlotfile("front_copper", pcbnew.PLOT_FORMAT_SVG, "front_copper")
print("Plotting to " + pc.GetPlotFileName())
pc.PlotLayer()
pc.ClosePlot()


As a minor note, there's not much documentation of the Python bindings, but if you search through the KiCad source code you can find the C++ interfaces that are exposed to Python. E.g. above, pcbnew.F_Cu is one of many possible layer constants and pcbnew.PLOT_FORMAT_SVG is one of several different plot formats.

While it’s in theory possible to specify the colors to use when plotting, I ran into issues where certain items were always plotted in their default color. For instance, when I plot the front silkscreen layer with the following options, the footprints are plotted in teal rather than the specified color, red:

pc.SetLayer(pcbnew.F_SilkS)
pc.SetColorMode(True)
po.SetColor(pcbnew.RED)  # <-- NOTE THIS LINE
po.SetReferenceColor(pcbnew.GREEN)
po.SetValueColor(pcbnew.BLUE)


A lot of the silkscreen ended up teal instead of red.

So instead of trying to get Pcbnew to output the exact SVG I wanted, I decided to export each layer as a separate monochrome SVG image and then post-process them to apply colors and merge them into a single output file. Since SVG images are just XML, it was easy to write a script, svg_processor.py, which allowed me to override the “fill” and “stroke” style attributes of the shapes, and then wrap all of the shapes in a <g> group tag to set the desired opacity.

(Note: the reason for wrapping in a group before applying opacity is that things like traces are rendered as a combination of multiple shapes, like a line + circle, so if you applied alpha=0.5 to each shape individually, a single trace would have varying degrees of opacity depending on how its subcomponents overlapped)
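Here’s a minimal sketch of that post-processing idea using Python’s built-in ElementTree (the real svg_processor.py handles styles and document structure more carefully; this just shows the fill/stroke override plus the group-level opacity):

```python
import xml.etree.ElementTree as ET

SVG_NS = 'http://www.w3.org/2000/svg'
ET.register_namespace('', SVG_NS)

def colorize_svg(svg_text, color, alpha):
    """Recolor every shape in a monochrome SVG, then wrap everything in a
    single <g> carrying the opacity, so the overlapping sub-shapes that
    make up one trace don't stack their alpha."""
    root = ET.fromstring(svg_text)
    group = ET.Element('{%s}g' % SVG_NS, {'opacity': str(alpha)})
    for child in list(root):
        for elem in child.iter():
            if elem.get('style') is not None:
                # Crude override; the real script rewrites individual
                # fill/stroke properties instead of replacing the style
                elem.set('style', 'fill:%s; stroke:%s;' % (color, color))
        root.remove(child)
        group.append(child)
    root.append(group)
    return ET.tostring(root, encoding='unicode')
```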

This allowed me to write a simple definition of the PCB layers to export and turn that into a nice, customizable rendering:

layers = [
  {'layer': pcbnew.B_SilkS, 'color': '#CC00CC', 'alpha': 0.8 },
  {'layer': pcbnew.B_Cu, 'color': '#33EE33', 'alpha': 0.5 },
  {'layer': pcbnew.F_Cu, 'color': '#CC0000', 'alpha': 0.5 },
  {'layer': pcbnew.F_SilkS, 'color': '#00CCCC', 'alpha': 0.8},
]

Ooooh, so beautiful!
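The export loop driven by a definition like that might look roughly like this (the helper name and the per-layer 'name' field are mine, not the exact structure of generate_svg.py):

```python
def plot_layers(pc, plot_format, layers):
    """Plot each layer spec to its own monochrome file using a
    pcbnew.PLOT_CONTROLLER-style object; returns the plot names used."""
    names = []
    for spec in layers:
        pc.SetLayer(spec['layer'])
        # One output file per layer, e.g. "front_copper.svg"
        pc.OpenPlotfile(spec['name'], plot_format, spec['name'])
        pc.PlotLayer()
        names.append(spec['name'])
    pc.ClosePlot()
    return names

# With real pcbnew (assuming each entry in the layers list above is
# extended with a 'name' key), the call would look something like:
#   board = pcbnew.LoadBoard('splitflap.kicad_pcb')
#   pc = pcbnew.PLOT_CONTROLLER(board)
#   svgs = plot_layers(pc, pcbnew.PLOT_FORMAT_SVG, layers)
# Each resulting SVG is then recolored with its 'color'/'alpha' and merged.
```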

As a final step after processing and merging, I use Inkscape's command line interface to shrink the .svg canvas to fit the image and convert the vector .svg file into a raster .png image like you see above:

inkscape --export-area-drawing --export-width=320 --export-png output.png --export-background '#FFFFFF' input.svg

The complete script to export .svg and .png renderings of the PCB can be found at https://github.com/scottbez1/splitflap/blob/580a11538d801041cedf59a3c5d1c91b5f56825d/electronics/generate_svg.py

In the next post, I cover how I automated this rendering process on every commit using Travis CI with S3 deployments to keep the image and gerbers referenced in the README always up to date!

Monday, June 11, 2012

Simple USB LED Controller - Part 2

After fixing my pinout mixup from the previous version, my Simple USB LED Controller (SULC) v0.2 works!

Check out Part 1 and Part 1.5 for a bit more background on SULC.  In short, it's a ridiculously simple way to control high-power RGB LEDs from a computer.  You can send commands like "red, blue" or "all green" to control the LEDs, rather than implementing some complex protocol.

The build process for this version was the same as my first prototype - using a laser-cut solder paste stencil and "frying pan" reflow soldering - so I don't have any new pictures to show of that.  However, I do have pictures and video of the new version in action:

(I ran out of TLC5940s, so I decided to make this board with just 2 of them rather than waiting for a shipment to arrive - notice the missing IC in the top right corner)


The video gives a brief overview and shows just how easy it is to control high-power LEDs with SULC:



The full design files (schematic, pcb, firmware, and software) are on github: https://github.com/scottbez1/sulc