`labgrid_genrule`

Matthew Clarkson requested to merge docker into main

This MR introduces a few rules that allow flexible registration of LabGrid executors within Bazel:

  • labgrid_executor
  • Sets up a Python binary that runs a context manager to set up and tear down the LabGrid environment.
  • @rules_labgrid//labgrid/toolchain/executor:type
    • A toolchain type that registers a labgrid_executor binary against target constraints. It is used to set up the LabGrid environment in LabGrid rules such as labgrid_genrule; toolchain resolution matches the executor to the correct target platform.
  • labgrid_genrule
    • Behaves like the built-in genrule but runs within a labgrid_executor-managed environment, allowing it to execute binaries that expect LabGrid environment variables.
  • labgrid_transition
    • Transitions a labgrid_genrule to one or more platforms so that the same flow can be executed on multiple platforms via LabGrid
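For illustration, the executor's setup/teardown could be sketched as a Python context manager like the one below. The helper names and the choice of LG_* variables are assumptions for the sketch, not the actual rules_labgrid code:

```python
# Minimal sketch of a labgrid_executor-style wrapper. All names here
# (labgrid_environment, run_wrapped, the LG_* values) are illustrative
# assumptions, not the actual rules_labgrid implementation.
import contextlib
import os
import subprocess


@contextlib.contextmanager
def labgrid_environment(env_file, place):
    """Export LabGrid variables for the wrapped binary; restore them on exit."""
    saved = dict(os.environ)
    os.environ["LG_ENV"] = env_file  # path to the LabGrid environment config
    os.environ["LG_PLACE"] = place   # name of the reserved place/board
    try:
        yield dict(os.environ)
    finally:
        # Tear down: restore the environment exactly as it was, even on error.
        os.environ.clear()
        os.environ.update(saved)


def run_wrapped(argv, env_file, place):
    """Run the wrapped tool inside the set-up LabGrid environment."""
    with labgrid_environment(env_file, place):
        return subprocess.call(argv)
```

The context manager guarantees teardown even when the wrapped tool fails, which is the property the executor relies on.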

The implementation is not currently hermetic; that can come later. Hence the test targets are marked "manual". To try it out locally, run:

git clone https://git.gitlab.arm.com/bazel/rules_labgrid.git --single-branch --branch docker --depth 1
cd rules_labgrid/e2e
bazelisk test docker:test

The local host needs the Docker daemon running plus the ssh and scp binaries. The e2e/docker/BUILD.bazel file is heavily commented to walk through the usage of the new rules and is extremely explicit. I would expect real-world usage to rely on higher-level macros, like so:

load("@rules_labgrid//labgrid/crossbar:defs.bzl", "labgrid_crossbar")
load("@rules_labgrid//labgrid/place:defs.bzl", "labgrid_place")
load("@rules_labgrid//labgrid/run:defs.bzl", "labgrid_run")

labgrid_crossbar(
    name = "crossbar",
    url = "...",
)

# Does the following things:
# - Creates the `labgrid_executor` with a built-in context manager that calls out to `labgrid-client` to reserve a board
# - Uses the `:crossbar` rule to understand the connection details (note: this will change to `labgrid_grpc` once they migrate)
# - Uses the tags to reserve the correct board
# - Creates the `toolchain_info`/`toolchain` rules to register the executor with `@rules_labgrid//labgrid/toolchain/executor:type`
labgrid_place(
    name = "odroid-n2x",
    deps = [":crossbar"],
    tags = ["odroid-n2x"],
    target_compatible_with = [
        "//constraint/device:silicon",
        "//constraint/gpu/arm:mali-g52",
        "...others...",
    ],
)

# Runs a binary on the host (`cfg = "exec"`)
# `labgrid_run` is a macro that creates `labgrid_genrule`/`labgrid_transition` targets
labgrid_run(
    name = "exec",
    srcs = [":py-binary"],  # A Python binary that uses the LabGrid API
    args = ["--output", "$@"],
    outs = ["output.log"],
    cfg = "exec",
    platform = "//platform/silicon:mali-g52",
)

# Runs a binary on the target (`cfg = "target"`)
labgrid_run(
    name = "target",
    srcs = [":cxx-binary"],  # A C++ binary that expects to be running on a silicon board with a Mali G52
    args = ["--output", "$@"],
    outs = ["output.log"],  # Output is automatically transferred back from the device (hopefully, in an efficient manner)
    cfg = "target",
    platform = "//platform/silicon:mali-g52",
)

# Run a binary on multiple target platforms
labgrid_run(
    name = "rust",
    srcs = [":rust-binary"],
    args = ["--output", "$@"],
    outs = ["output.log"],
    cfg = "target",
    platforms = [  # Run the target on whichever Mali GPU the infrastructure deems best
        "//platform/emulator:mali",
        "//platform/silicon:mali",
        "//platform/fpga:mali",
    ],
)

One thing I'm not sure about: how does one express that an executor that requires local resources must run the labgrid_executor locally? I am hoping that the execution requirements for a toolchain can be propagated into the action via ctx.actions.run#toolchain. Local executors, i.e. anything not using labgrid-client to resolve the board, should be registered with no-remote execution properties. Until we get something working with labgrid-client and local board connections, I won't be able to confirm that.
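If that propagation works, the relevant part of a rule implementation would look roughly like the Starlark sketch below. The toolchain type label is the one from this MR; the provider fields and attributes are hypothetical:

```python
# Illustrative Starlark sketch only; the provider shape (executor_info.binary)
# and attributes are assumptions, not the actual rules_labgrid implementation.
_EXECUTOR_TYPE = "@rules_labgrid//labgrid/toolchain/executor:type"

def _labgrid_genrule_impl(ctx):
    executor = ctx.toolchains[_EXECUTOR_TYPE].executor_info
    ctx.actions.run(
        executable = executor.binary,
        arguments = [src.path for src in ctx.files.srcs] +
                    [out.path for out in ctx.outputs.outs],
        inputs = ctx.files.srcs,
        outputs = ctx.outputs.outs,
        mnemonic = "LabgridGenrule",
        # The hope: naming the toolchain here lets Bazel propagate the resolved
        # toolchain's execution requirements (e.g. "no-remote" for executors
        # that need local resources) onto this action.
        toolchain = _EXECUTOR_TYPE,
    )
```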

Future areas of investigation (likely parallel tickets):

  • Implement a manager that talks to a local board (/dev/ttyUSB0, for example)
  • Implement a manager that reserves a board with labgrid-client
  • Make the open container image example hermetic:
    • Would require a new OCI hermetic driver for LabGrid
    • Need to be able to pass through a built image from rules_oci
    • Need a hermetic ssh build in the Bazel Central Registry
    • Would need to be able to emulate other platforms, which would likely require a hermetic build of QEMU
  • Implement a QEMU manager that can flash an image provided from Bazel (downloaded or otherwise)
  • Implement more rules/macros/examples: labgrid_test/labgrid_place/labgrid_crossbar/labgrid_run