13. Contributor’s guide
This section contains notes on contributing to Charliecloud development. Currently, it is messy and incomplete. Patches welcome!
It documents public stuff only. If you are on the core team at LANL, also consult the internal documentation and other resources.
Note
We are interested in and will consider all good-faith contributions. While it does make things easier and faster if you follow the guidelines here, they are not required. We’ll either clean it up for you or walk you through any necessary changes.
13.1. Workflow
We try to keep procedures and the Git branching model simple. Right now, we’re pretty similar to Scott Chacon’s “GitHub Flow”: Master is stable; work on short-lived topic branches; use pull requests to ask for merging; keep issues organized with tags and milestones.
The standard workflow is:
Propose a change in an issue.
Tag the issue with its kind (bug, enhancement, question).
Get consensus on what to do and how to do it, with key information recorded in the issue.
Submit a PR that refers to the issue.
Assign the issue to a milestone.
Review/iterate.
Project lead merges with “squash and merge”.
13.1.1. Code review
Issues and pull requests. The typical workflow is:
Propose a change in an issue.
Get consensus on what to do, whether in the issue or elsewhere.
Create a pull request (PR) for the implementation.
Iterate the PR until consensus is reached to either merge or abandon.
Merge or close the PR accordingly.
The issue, not the PR, should be tagged and milestoned so a given change shows up only once in the various views.
GitHub PRs have two states, which are often poorly labeled. These states and our interpretations are:
Ready for review (the green Create pull request button). This means that the PR is ready to be merged once tests and code review pass. In-progress PRs headed in that direction should also be in this state (i.e., the trigger for review and possible merge is the review request, not a draft to ready-for-review transition).
Draft. This means not ready for merge even if tests and review pass. (GitLab would indicate this with a WIP: prefix in the title.)
Stand-alone PRs. If consensus is obtained through other means, e.g. out-of-band discussion, then a stand-alone PR is appropriate (i.e., don’t create an issue just for the sake of having an issue to link to a PR). A stand-alone PR should be tagged and milestoned, since there is no issue. Note that stand-alone PRs are generally not a good way to propose something.
Address a single concern. When practical, issues and PRs should address completely one self-contained change. If there are multiple concerns, make separate issues and/or PRs. For example, PRs should not tidy unrelated code, and non-essential complications should be split into a follow-on issue. However, sometimes one PR addresses several related issues, which is fine.
Documentation and tests first. The best practice for significant changes is to draft documentation and/or tests first, get feedback on that, and then implement the code. Reviews of the form “you need a completely different approach” are no fun.
CI must pass. PRs will usually not be merged until they pass CI, with exceptions if the failures are clearly unconnected and we are confident they aren’t masking a real issue. If appropriate, tests should also pass on relevant supercomputers.
Use close keywords in PRs. Use the issue-closing keywords (variations on “closes”, “fixes”, and “resolves”) in PR descriptions to link it to the relevant issue(s). If this changes, edit the description to add/remove issues.
PR review procedure. When your PR is ready for review — which may or may not be when you want it considered for merging! — do this:
Request review from the person(s) you want to look at it. The purpose of requesting review is so the person is notified you need their help.
If you think it’s ready to merge (even if you’re not sure), ensure that (1) the PR is marked “ready for review” (green icon) and (2) the project lead is included in your review request.
In both cases, the person from whom you requested review now owns the branch, and you should stop work on it unless and until you get it back (modulo other communication, of course). This is so they can make tidy commits if needed without collision.
It is good practice to communicate with your reviewer directly to set expectations on review urgency.
Review outcomes:
Request changes: The reviewer believes there are changes needed, and the PR needs re-review after these are done.
Comment: The reviewer has questions or comments, and the PR needs re-review after these are addressed.
Approve: The reviewer believes the branch is ready to proceed (further work if draft, merging if ready for review). Importantly, the review can include comments/questions/changes but the reviewer believes these don’t need re-review (i.e., the PR author can deal with them independently).
Use multi-comment reviews. Review comments should all be packaged up into a single review; click Start a review rather than Add single comment. Then the PR author gets only a single notification instead of one for every comment you make, and it’s clear when the branch is theirs again.
Selecting a reviewer. Generally, you want to find a reviewer with time to do the review and appropriate expertise. Feel free to ask if you’re not sure. Note that the project lead must approve any PRs before merge, so they are typically a reasonable choice if you don’t have someone else in mind.
External contributions do not need to select a reviewer. The team will notice the PR and wrangle its review.
Special case 1: Often, the review consists of code changes, and the reviewer will want you to assess those changes. GitHub doesn’t let you request review from the PR submitter, so this must be done with a comment, either online or offline.
Special case 2: GitHub will not let you request review from external people, so this needs to be done with a comment too. Generally you should ask the original bug reporter to review, to make sure it solves their problem.
13.1.2. Branching and merging
Don’t commit directly to master. Even the project lead doesn’t do this. While it may appear that some trivial fixes were committed directly to master, what really happened is that these were prototyped on a branch and then fast-forward merged after the tests passed. (Note we no longer do this.)
Merging to master. Only the project lead should do this.
Branch naming convention. Name the branch with a brief summary of the issue being fixed — just a couple words — with words separated by hyphens, then an underscore and the issue number being addressed. For example, issue #1773 is titled “ch-image build: --force=fakeroot outputs to stderr despite -q”; the corresponding branch (for PR #1812) is called fakeroot-quiet-rhel_1773. Something even shorter, such as fakeroot_1773, would have been fine too.
Stand-alone PRs do the same, just without an issue number. For example, PR #1804 is titled “add tab completion to ch-convert” and the branch is convert-completion.
It’s okay if the branch name misses a little. For example, if you discover during work on a PR that you should close a second issue in the same PR, it’s not necessary to add the second issue number to the branch name.
Branch merge procedure. Generally, branches are merged in the GitHub web interface with the Squash and merge button, which is git merge --squash under the hood. This squashes the branch into a single commit on master.
Commit message must be the PR number followed by the PR title, e.g.:
PR #268: remove ch-docker-run
The commit message should not mention issue numbers; let the PR itself do that.
The reason to prefer merge via web interface is that GitHub often doesn’t notice merges done on the command line.
After merge, delete the branch via the web interface.
Branch history tidiness. Commit frequently at semantically relevant times, and keep in mind that this history will probably be squashed per above. It is not necessary to rebase or squash to keep branch history tidy. But, don’t go crazy. Commit messages like “try 2” and “fix CI again” are a bad sign; so are carefully proofread ones. Commit messages that are brief, technically relevant, and quick to write are what you want on feature branches.
Keep branches up to date. Merge master into your branch, rather than rebasing. This lets you resolve conflicts once rather than multiple times as rebase works through a stack of commits.
Note that PRs with merge conflicts will generally not be merged. Resolve conflicts before asking for review.
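The merge-based update can be sketched end-to-end in a throwaway repository (the branch, file, and commit names below are made up for illustration):

```shell
# Sketch of keeping a topic branch current by merging master, using a
# throwaway repo so the commands are safe to run anywhere.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name 'Dev Eloper'
echo a > file_a; git add file_a; git commit -qm 'initial'
git branch -M master                # normalize the default branch name
git checkout -qb fix-thing_1234     # topic branch, named per convention
echo b > file_b; git add file_b; git commit -qm 'fix the thing'
git checkout -q master
echo c > file_c; git add file_c; git commit -qm 'unrelated work'
git checkout -q fix-thing_1234
git merge -q --no-edit master       # conflicts (if any) resolved just once
```

After the merge, the topic branch contains both its own work and everything new on master; repeating the merge later only deals with conflicts introduced since.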
Remove obsolete branches. Keep your repo free of old branches with the script misc/branches-tidy.
13.1.3. Miscellaneous issue and pull request notes
Acknowledging issues. Issues and PRs submitted from outside should be acknowledged promptly, including adding or correcting tags.
Closing issues. We close issues when we’ve taken the requested action, decided not to take action, resolved the question, or actively determined an issue is obsolete. It is OK for “stale” issues to sit around indefinitely awaiting this. Unlike many projects, we do not automatically close issues just because they’re old.
Closing PRs. Stale PRs, on the other hand, are to be avoided due to bit rot. We try to either merge or reject PRs in a timely manner.
Re-opening issues. Closed issues can be re-opened if new information arises, for example a worksforme issue with new reproduction steps.
13.1.4. Continuous integration (CI) testing
Quality of testing. Tagged versions currently get more testing for various reasons. We are working to improve testing for normal commits on master, but full parity is probably unlikely.
Cycles budget. The resource is there for your use, so take advantage of it, but be mindful of the various costs of this compute time.
Things you can do include focused local testing, cancelling jobs you know will fail or that won’t give you additional information, and not pushing every commit (CI tests only the most recent commit in a pushed group). Avoid making commits merely to trigger CI.
13.1.5. Issue labeling
We use the following labels (a.k.a. tags) to organize issues. Each issue (or stand-alone PR) should have label(s) from each category, with the exception of disposition which only applies to closed issues. Labels are periodically validated using a script.
Charliecloud team members should label their own issues. The general public are more than welcome to label their issues if they like, but in practice this is rare, which is fine. Whoever triages the incoming issue should add or adjust labels as needed.
Note
This scheme is designed to organize open issues only. There have been previous schemes, and we have not re-labeled closed issues.
13.1.5.1. What kind of change is it?
Choose one type from:
bug
Something doesn’t work; e.g., it doesn’t work as intended or it was mis-designed. This includes usability and documentation problems. Steps to reproduce with expected and actual behavior are almost always very helpful.
enhancement
Things work, but it would be better if something was different. For example, a new feature proposal, an improvement in how a feature works, or clarifying an error message. Steps to reproduce with desired and current behavior are often helpful.
refactor
Change that will improve Charliecloud but does not materially affect user-visible behavior. Note this doesn’t mean “invisible to the user”; even user-facing documentation or logging changes can qualify, if they are more cleanup-oriented.
13.1.5.2. How important/urgent is it?
Choose one priority from:
high
High priority.
medium
Medium priority.
low
Low priority. Note: Unfortunately, due to resource limitations, complex issues here are likely to wait a long time, perhaps forever. If that makes you particularly sad on a particular issue, please comment to say why. Maybe it’s mis-prioritized.
deferred
No plans to do this, but not rejected. These issues stay open, because we do not consider the deferred state resolved. Submitting PRs on these issues is risky; you probably want to argue successfully that it should be done before starting work on it.
Priority is indeed required, though it can be tricky because the levels are fuzzy. Do not hesitate to ask for advice. Considerations include: is customer or development work blocked by the issue; how valuable is the issue for customers; does the issue affect key customers; how many customers are affected; how much of Charliecloud is affected; what is the workaround like, if any. Difficulty of the issue is not a factor in priority, i.e., here we are trying to express benefit, not cost/benefit ratio. Perhaps the Debian bug severity levels provide inspiration. The number of high priority issues should be relatively low.
In part because priority is quite imprecise, issues are not a priority queue, i.e., we do work on lower-priority issues while higher-priority ones are still open. Related to this, issues do often move between priority levels. In particular, if you think we picked the wrong priority level, please say so.
13.1.5.3. What part of Charliecloud is affected?
Choose one or more components from:
runtime
The container runtime itself; largely ch-run.
image
Image building and interaction with image registries; largely ch-image. (Not to be confused with image management tasks done by glue code.)
glue
The “glue” that ties the runtime and image management (ch-image or another builder) together. Largely shell scripts in bin.
install
Charliecloud build & install system, packaging, etc. (Not to be confused with image building.)
doc
Documentation.
test
Test suite and examples.
misc
Everything else. Do not combine with another component.
13.1.5.4. Special considerations
Choose one or more extras from:
blocked
We can’t do this yet because something else needs to happen first. If that something is another issue, mention it in a comment.
hpc
Related specifically to HPC and HPC scaling considerations; e.g., interactions with job schedulers.
uncertain
Course of action is unclear. For example: is the feature a good idea, what is a good approach to solve the bug, what additional information is needed.
usability
Affects usability of any part of Charliecloud, including documentation and project organization.
13.1.5.5. Why was it closed?
If the issue was resolved (i.e., bug fixed or enhancement/refactoring implemented), there is no disposition tag. Otherwise, to explain why not, choose one disposition from:
cantfix
The issue is not something we can resolve. Typically problems with other software, problems with containers in general that we can’t work around, or not actionable due to clarity or other reasons. Use caution when blaming a problem on user error. Often (or usually) there is a documentation or usability bug that caused the “user error”.
discussion
Converted to a discussion. The most common use is when someone asks a question rather than making a request for some change.
duplicate
Same as some other issue. In addition to this tag, duplicates should refer to the other issue in a comment to record the link. Of the duplicates, the better one should stay open (e.g., clearer reproduction steps); if they are roughly equal in quality, the older one should stay open.
moot
No longer relevant. Examples: withdrawn by reporter, fixed in current version (use duplicate instead if it applies though), obsoleted by change in plans.
wontfix
We are not going to do this, and we won’t merge PRs. Sometimes you’ll want to tag and then wait a few days before closing, to allow for further discussion to catch mistaken tags.
worksforme
We cannot reproduce a bug, and it seems unlikely this will change given available information. Typically you’ll want to tag, then wait a few days for clarification before closing. Bugs closed with this tag that do gain a reproducer later should definitely be re-opened. For some bugs, it really feels like they should be reproducible but we’re missing it somehow; such bugs should be left open in hopes of new insight arising.
Note
We do not use the GitHub “closed as not planned” feature, so everything is “closed as completed” even if the reason is one of the above.
13.1.5.6. Deprecated labels
You might see these on old issues, but they are no longer in use.
help wanted
This tended to get stale and wasn’t generating any leads.
key issue
Replaced by priority labels.
question
Replaced by Discussions. (If you report a bug that seems to be a discussion, we’ll be happy to convert it for you.)
13.2. Test suite
13.2.1. Timing the tests
The ts utility from moreutils is quite handy. The following prepends each line with the elapsed time since the previous line:
$ ch-test -s quick | ts -i '%M:%.S'
Note: a skipped test isn’t free; I see ~0.15 seconds to do a skip.
13.2.2. ch-test complains about inconsistent versions
There are multiple ways to ask Charliecloud for its version number. These should all give the same result. If they don’t, ch-test will fail. Typically, something needs to be rebuilt. Recall that configure contains the version number as a constant, so a common way to get into this situation is to change Git branches without rebuilding it. Charliecloud is small enough to just rebuild everything with:
$ ./autogen.sh && ./configure && make clean && make
13.2.3. Special images
For images not needed after completion of a test, tag them tmpimg. This leaves only one extra image at the end of the test suite.
13.2.4. Writing a test image using the standard workflow
13.2.4.1. Summary
The Charliecloud test suite has a workflow that can build images by two methods:

From a Dockerfile, using ch-image or another builder (see common.bash:build_()).
By running a custom script.

To create an image that will be built and unpacked and/or mounted, create a file in examples (if the image recipe is useful as an example) or test (if not) called {Dockerfile,Build}.foo. This will create an image tagged foo. Additional tests can be added to the test suite Bats files.
To create an image with its own tests, documentation, etc., create a directory in examples. In this directory, place {Dockerfile,Build}[.foo] to build the image and test.bats with your tests. For example, the file examples/foo/Dockerfile will create an image tagged foo, and examples/foo/Dockerfile.bar tagged foo-bar. These images also get the build and unpack/mount tests.
Additional directories can be symlinked into examples and will be integrated into the test suite. This allows you to create a site-specific test suite. ch-test finds tests at any directory depth; e.g. examples/foo/bar/Dockerfile.baz will create a test image tagged bar-baz.
Image tags in the test suite must be unique.
Order of processing; within each item, alphabetical order:
Dockerfiles in test.
Build files in test.
Dockerfiles in examples.
Build files in examples.

The purpose of doing Build second is so they can leverage what has already been built by a Dockerfile, which is often more straightforward.
13.2.4.2. How to specify when to include and exclude a test image
Each of these image build files must specify its scope for building and running, which must be greater than or equal to the scope of all tests in any corresponding .bats files. Exactly one of the following strings must appear:
ch-test-scope: skip
ch-test-scope: quick
ch-test-scope: standard
ch-test-scope: full
Other stuff on the line (e.g., comment syntax) is ignored.
Optional test modification directives are:
ch-test-arch-exclude: ARCH
If the output of uname -m matches ARCH, skip the file.
ch-test-builder-exclude: BUILDER
If using BUILDER, skip the file.
ch-test-builder-include: BUILDER
If specified, run only if using BUILDER. Can be repeated to include multiple builders. If specified zero times, all builders are included.
ch-test-need-sudo
Run only if user has sudo.
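For example, a hypothetical test Dockerfile might carry its directives in comment lines like this (the base image and package are made up; recall that everything around the directive string, such as comment syntax, is ignored):

```dockerfile
# ch-test-scope: standard
# ch-test-builder-exclude: docker
FROM alpine:3.17
RUN apk add --no-cache bc
```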
13.2.4.3. How to write a Dockerfile recipe
It’s a standard Dockerfile.
13.2.4.4. How to write a Build recipe
This is an arbitrary script or program that builds the image. It gets three command line arguments:
$1: Absolute path to directory containing Build.

$2: Absolute path and name of output image, without extension. This can be either:

   Tarball compressed with gzip or xz; append .tar.gz or .tar.xz to $2. If ch-test --pack-fmt=squash, then this tarball will be unpacked and repacked as a SquashFS. Therefore, only use tarball output if the image build process naturally produces it and you would have to unpack it to get a directory (e.g., docker export).

   Directory; use $2 unchanged. The contents of this directory will be packed without any enclosing directory, so if you want an enclosing directory, include one. Hidden (dot) files in $2 will be ignored.

$3: Absolute path to temporary directory for use by the script. This can be used for whatever and need not be cleaned up; the test harness will delete it.
Other requirements:
The script may write only in two directories: (a) the parent directory of $2 and (b) $3. Specifically, it may not write to the current working directory. Everything written to the parent directory of $2 must have a name starting with $(basename $2).

The first entry in $PATH will be the Charliecloud under test, i.e., bare ch-* commands will be the right ones.

Any programming language is permitted. To be included in the Charliecloud source code, a language already in the test suite dependencies is required.

The script must test for its dependencies and fail with appropriate error message and exit code if something is missing. To be included in the Charliecloud source code, all dependencies must be something we are willing to install and test.
Exit codes:
0: Image successfully created.
65: One or more dependencies were not met.
126 or 127: No interpreter available for script language (the shell takes care of this).
else: An error occurred.
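Putting the contract together, a minimal hypothetical Build recipe that produces a directory image might look like the following sketch (it is not a file from the Charliecloud source; the argument defaults exist only so the sketch can run stand-alone, since the harness always passes all three arguments):

```shell
#!/bin/sh
# Hypothetical Build recipe sketch.
# ch-test-scope: quick
set -e

srcdir=${1:-$(pwd)}           # $1: directory containing this Build file
out=${2:-$(mktemp -d)/img}    # $2: output image path, without extension
tmp=${3:-$(mktemp -d)}        # $3: scratch space; the harness deletes it

# Exit 65 if a dependency is missing.
if ! command -v tar >/dev/null 2>&1; then
    printf 'Build: tar not found\n' >&2
    exit 65
fi

# Directory output: use $2 unchanged. Contents are packed with no
# enclosing directory; top-level hidden (dot) files are ignored.
mkdir -p "$out/bin" "$out/etc"
printf 'built by Build sketch\n' > "$out/etc/motd"
```

Falling off the end of the script yields exit code 0, i.e., image successfully created.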
13.3. Building RPMs
We maintain .spec files and infrastructure for building RPMs in the Charliecloud source code. This is for two purposes:
We maintain our own Fedora RPMs (see packaging guidelines).
We want to be able to build an RPM of any commit.
Item 2 is tested; i.e., if you break the RPM build, the test suite will fail.
This section describes how to build the RPMs and the pain we’ve hopefully abstracted away.
13.3.1. Dependencies
Charliecloud
Python 3.6+
either:

the provided example centos_7ch or almalinux_8ch images, or
a RHEL/CentOS 7 or newer container image with the following packages (note the listed Python packages have different names in RHEL 8 and derivatives):
autoconf
automake
gcc
make
python36
python36-sphinx
python36-sphinx_rtd_theme
rpm-build
rpmlint
rsync
13.3.2. rpmbuild wrapper script
While building the Charliecloud RPMs is not too weird, we provide a script to streamline it. The purpose is to (a) make it easy to build versions not matching the working directory, (b) use an arbitrary rpmbuild directory, and (c) build in a Charliecloud container for non-RPM-based environments.

The script must be run from the root of a Charliecloud Git working directory.
Usage:
$ packaging/fedora/build [OPTIONS] IMAGE VERSION
Options:
--install: Install the RPMs into the build environment after building.
--rpmbuild=DIR: Use RPM build directory root DIR (default: ~/rpmbuild).
For example, to build a version 0.9.7 RPM from the CentOS 7 image provided with the test suite, on any system, and leave the results in ~/rpmbuild/RPMS (note the test suite would also build the necessary image directory):
$ bin/ch-image build -f ./examples/Dockerfile.centos_7ch ./examples
$ bin/ch-convert centos_7ch $CH_TEST_IMGDIR/centos_7ch
$ packaging/fedora/build $CH_TEST_IMGDIR/centos_7ch 0.9.7-1
To build a pre-release RPM of Git HEAD using the CentOS 7 image:
$ bin/ch-image build -f ./examples/Dockerfile.centos_7ch ./examples
$ bin/ch-convert centos_7ch $CH_TEST_IMGDIR/centos_7ch
$ packaging/fedora/build ${CH_TEST_IMGDIR}/centos_7ch HEAD
13.3.3. Gotchas and quirks
13.3.3.1. RPM versions and releases
If VERSION is HEAD, then the RPM version will be the content of VERSION.full for that commit, including Git gobbledygook, and the RPM release will be 0. Note that such RPMs cannot be reliably upgraded because their version numbers are unordered.
Otherwise, VERSION should be a released Charliecloud version followed by a hyphen and the desired RPM release, e.g. 0.9.7-3.
Other values of VERSION (e.g., a branch name) may work but are not supported.
13.3.3.2. Packaged source code and RPM build config come from different commits
The spec file, build script, .rpmlintrc, etc. come from the working directory, but the package source is from the specified commit. This is what enables us to make additional RPM releases for a given Charliecloud release (e.g. 0.9.7-2).
Corollaries of this policy are that RPM build configuration can be any or no commit, and it’s not possible to create an RPM of uncommitted source code.
13.3.3.3. Changelog maintenance
The spec file contains a manually maintained changelog. Add a new entry for each new RPM release; do not include the Charliecloud release notes.
For released versions, build verifies that the most recent changelog entry matches the given VERSION argument. The timestamp is not automatically verified.
For other Charliecloud versions, build adds a generic changelog entry with the appropriate version stating that it’s a pre-release RPM.
13.4. Style hints
We haven’t written down a comprehensive style guide. Generally, follow the style of the surrounding code, think in rectangles rather than lines of code or text, and avoid CamelCase.
Note that Reid is very picky about style, so don’t feel singled out if he complains (or even updates this section based on your patch!). He tries to be nice about it.
13.4.1. Writing English
When describing what something does (e.g., your PR or a command), use the imperative mood, i.e., write the orders you are giving rather than describe what the thing does. For example, do:
Inject files from the host into an image directory.
Add --join-pid option to ch-run.

Do not (indicative mood):

Injects files from the host into an image directory.
Adds --join-pid option to ch-run.

Use sentence case for titles, not title case.

If it’s not a sentence, start with a lower-case character.
Use spell check. Keep your personal dictionary updated so your editor is not filled with false positives.
13.4.2. Documentation
Heading underline characters:
Asterisk, *, e.g. “5. Contributor’s guide”
Equals, =, e.g. “5.7 OCI technical notes”
Hyphen, -, e.g. “5.7.1 Gotchas”
Tilde, ~, e.g. “5.7.1.1 Namespaces” (try to avoid)
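In reStructuredText, those underline characters look like the following (the section titles here are only examples; each underline must be at least as long as its title):

```rst
Contributor's guide
*******************

OCI technical notes
===================

Gotchas
-------

Namespaces
~~~~~~~~~~
```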
13.4.3. Dependency policy
Specific dependencies (prerequisites) are stated elsewhere in the documentation. This section describes our policy on which dependencies are acceptable.
13.4.3.1. Generally
All dependencies must be stated and justified in the documentation.
We want Charliecloud to run on as many systems as practical, so we work hard to keep dependencies minimal. However, because Charliecloud depends on new-ish kernel features, we do depend on standards of similar vintage.
Core functionality should be available even on small systems with basic Linux distributions, so dependencies for run-time and build-time are only the bare essentials. Exceptions, to be used judiciously:
Features that add convenience rather than functionality may have additional dependencies that are reasonably expected on most systems where the convenience would be used.
Features that only work if some other software is present can add dependencies of that other software (e.g., ch-convert depends on Docker to convert to/from Docker image storage).
The test suite is tricky, because we need a test framework and to set up complex test fixtures. We have not yet figured out how to do this at reasonable expense with dependencies as tight as run- and build-time, so there are systems that do support Charliecloud but cannot run the test suite.
Building the RPMs should work on RPM-based distributions with a kernel new enough to support Charliecloud. You might need to install additional packages (but not from third-party repositories).
13.4.3.2. curl vs. wget
For URL downloading in shell code, including Dockerfiles, use wget -nv. Both work fine for our purposes, and we need to use one or the other consistently. According to Debian’s popularity contest, 99.88% of reporting systems have wget installed, vs. about 44% for curl. On the other hand, curl is in the minimal install of CentOS 7 while wget is not.
For now, Reid just picked wget because he likes it better.
13.4.4. Variable conventions in shell scripts and .bats files
Separate words with underscores.
User-configured environment variables: all uppercase, CH_TEST_ prefix. Do not use in individual .bats files; instead, provide an intermediate variable.

Variables local to a given file: lower case, no prefix.

Bats variables set in common.bash and then used in .bats files: lower case, ch_ prefix.

Surround lower-case variables expanded in strings with curly braces, unless they’re the only thing in the string. E.g.:
"${foo}/bar"   # yes
"$foo"         # yes
"$foo/bar"     # no
"${foo}"       # no
Don’t quote variable assignments or other places where not needed (e.g., case statements). E.g.:
foo=${bar}/baz     # yes
foo="${bar}/baz"   # no
13.4.5. Statement ordering within source files
In general, we order things alphabetically.
13.4.5.1. Python
The module as a whole, and each class, comprise a sequence of ordering units separated by section header comments surrounded by two or more hashes, e.g. ## Globals ##. Sections with the following names must be in this order (omissions are fine). Other section names may appear in any order. There is also an unnamed zeroth section.
Enums
Constants
Globals
Exceptions
Main
Functions
Supporting classes
Core classes
Classes
Within each section, statements occur in the following order.
imports
standard library
external imports not in the standard library
import charliecloud
other Charliecloud imports
assignments
class definitions
function definitions
__init__
static methods
class methods
other double-underscore methods (e.g. __str__)
properties
“normal” functions (instance methods)
Within each group of statements above, identifiers must occur in alphabetical order. Exceptions:
Classes must appear after their base class.
Assignments may appear in any order.
Statement types not listed above may appear in any order.
A statement that must be out of order is exempted with a comment on its first line containing 👻, because a ghost says “OOO”, i.e. “out of order”.
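A compressed, hypothetical module skeleton illustrating the scheme (section names from the list above; all identifiers are invented):

```python
## Constants ##

BAR_MAX = 10
FOO_DEFAULT = "foo"

## Functions ##

# Within each group, identifiers are alphabetical.
def thing_parse(text):
   return text.split(":", 1)[0]

def thing_valid(text):
   return ":" in text

## Core classes ##

class Thing:
   def __init__(self, name):   # __init__ comes first
      self.name = name

   def __str__(self):          # then other double-underscore methods
      return self.name

class Alt(Thing):              # after its base class, despite the alphabet
   pass
```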
13.4.6. Python code
13.4.6.1. Indentation width
3 spaces per level. No tab characters.
13.4.7. C code
13.4.7.1. const
The const keyword is used to indicate that variables are read-only. It has a variety of uses; in Charliecloud, we use it on pointer arguments to functions to state whether or not the object pointed to will be altered by the function. For example:
void foo(const char *in, char *out)
is a function that will not alter the string pointed to by in but may alter the string pointed to by out. (Note that char const is equivalent to const char, but we use the latter order because that’s what appears in GCC error messages.)
We do not use const
on local variables or function arguments passed by
value. One could do this to be more clear about what is and isn’t mutable, but
it adds quite a lot of noise to the source code, and in our evaluations didn’t
catch any bugs. We also do not use it on double pointers (e.g., char
**out
used when a function allocates a string and sets the caller’s pointer
to point to it), because so far those are all out-arguments and C has
confusing rules about double
pointers and const
.
13.4.7.2. Lists
The general convention is to use an array of elements terminated by an element containing all zeros (i.e., every byte is zero). While this precludes zero elements within the list, it makes it easy to iterate:
struct foo { int a; float b; };
struct foo *bar = ...;
for (int i = 0; bar[i].a != 0; i++)
do_stuff(bar[i]);
Note that the conditional checks just one field of the struct (a) against zero; the loop leverages knowledge of this specific data structure, for which checking only a is sufficient.
Lists can be set either as literals:
struct foo bar[] = { {1, 2.0}, {3, 4.0}, {0, 0.0} };
or built up from scratch on the heap; the contents of this list are equivalent (note the C99 compound literal trick in the second append, which avoids creating a struct foo variable):
struct foo baz;
struct foo *qux = list_new(sizeof(struct foo), 0);
baz.a = 1;
baz.b = 2.0;
list_append((void **)&qux, &baz, sizeof(struct foo));
list_append((void **)&qux, &((struct foo){3, 4.0}), sizeof(struct foo));
This form of list should be used unless some API requires something else.
Warning
Taking the address of an array in C yields the address of the first element, which is numerically the same pointer. For example, consider this list of strings, i.e. pointers to char:
char foo[] = "hello";
char **list = list_new(sizeof(char *), 0);
list_append((void **)&list, &foo, sizeof(char *)); // error!
Because foo == &foo, this will add to the list not a pointer to foo but the contents of foo, i.e. (on a machine with 64-bit pointers) 'h', 'e', 'l', 'l', 'o', '\0', followed by two bytes of whatever follows foo in memory.
This would work, because bar != &bar:
char foo[] = "hello";
char *bar = foo;
char **list = list_new(sizeof(char *), 0);
list_append((void **)&list, &bar, sizeof(char *)); // OK
13.5. Debugging¶
13.5.1. Python printf(3)-style debugging¶
Consider ch.ILLERI(). This uses the same mechanism as the standard logging functions (ch.INFO(), ch.VERBOSE(), etc.), but it (1) cannot be suppressed and (2) uses a color that stands out.
All ch.ILLERI() calls must be removed before a PR can be merged.
13.5.2. seccomp(2) BPF¶
ch-run --seccomp -vv will log the BPF instructions as they are computed, but it’s all in raw hex and hard to interpret, e.g.:
$ ch-run --seccomp -vv alpine:3.17 -- true
[...]
ch-run[62763]: seccomp: arch c00000b7: found 13 syscalls (ch_core.c:582)
ch-run[62763]: seccomp: arch 40000028: found 27 syscalls (ch_core.c:582)
[...]
ch-run[62763]: seccomp(2) program has 156 instructions (ch_core.c:591)
ch-run[62763]: 0: { op=20 k= 4 jt= 0 jf= 0 } (ch_core.c:423)
ch-run[62763]: 1: { op=15 k=c00000b7 jt= 0 jf= 17 } (ch_core.c:423)
ch-run[62763]: 2: { op=20 k= 0 jt= 0 jf= 0 } (ch_core.c:423)
ch-run[62763]: 3: { op=15 k= 5b jt=145 jf= 0 } (ch_core.c:423)
[...]
ch-run[62763]: 154: { op= 6 k=7fff0000 jt= 0 jf= 0 } (ch_core.c:423)
ch-run[62763]: 155: { op= 6 k= 50000 jt= 0 jf= 0 } (ch_core.c:423)
ch-run[62763]: note: see FAQ to disassemble the above (ch_core.c:676)
ch-run[62763]: executing: true (ch_core.c:538)
You can instead use seccomp-tools to disassemble and pretty-print the BPF code in a far more readable format, e.g.:
$ sudo apt install ruby-dev
$ gem install --user-install seccomp-tools
$ export PATH=~/.gem/ruby/3.1.0/bin:$PATH
$ seccomp-tools dump -c 'ch-run --seccomp alpine:3.19 -- true'
line CODE JT JF K
=================================
0000: 0x20 0x00 0x00 0x00000004 A = arch
0001: 0x15 0x00 0x11 0xc00000b7 if (A != ARCH_AARCH64) goto 0019
0002: 0x20 0x00 0x00 0x00000000 A = sys_number
0003: 0x15 0x91 0x00 0x0000005b if (A == aarch64.capset) goto 0149
[...]
0154: 0x06 0x00 0x00 0x7fff0000 return ALLOW
0155: 0x06 0x00 0x00 0x00050000 return ERRNO(0)
Note that the disassembly is not perfect; e.g., if an architecture is not in your kernel headers, its system call names come out wrong.
13.6. OCI technical notes¶
This section describes our analysis of the Open Container Initiative (OCI) specification and its implications for our implementations of ch-image and ch-run-oci. Anything relevant for users goes in the respective man page; this section is for technical details. The main goals are to guide Charliecloud development and to provide an opportunity for peer review of our work.
13.6.1. ch-run-oci¶
Currently, ch-run-oci is only tested with Buildah. These notes describe what we are seeing from Buildah’s runtime expectations.
13.6.1.1. Gotchas¶
13.6.1.1.1. Namespaces¶
Buildah sets up its own user and mount namespaces before invoking the runtime, though it does not change the root directory. We do not understand why. In particular, this means that you cannot see the container root filesystem it provides without joining those namespaces. To do so:
Export CH_RUN_OCI_LOGFILE with some logfile path.
Export CH_RUN_OCI_DEBUG_HANG with the step you want to examine (e.g., create).
Run ch-build -b buildah.
Make note of the PID in the logfile.
$ nsenter -U -m -t $PID bash
13.6.1.1.2. Supervisor process and maintaining state¶
OCI (and thus Buildah) expects a process that exists throughout the life of the container. This conflicts with Charliecloud’s lack of a supervisor process.
13.6.1.2. Bundle directory¶
OCI documentation (very incomplete): https://github.com/opencontainers/runtime-spec/blob/master/bundle.md
The bundle directory defines the container and is used to communicate between Buildah and the runtime. The root filesystem (mnt/rootfs) is mounted within Buildah’s namespaces, so you’ll want to join them before examination.
ch-run-oci has restrictions on the bundle directory path so that it can be inferred from the container ID (see the man page). This lets us store state in the bundle directory instead of maintaining a second location for container state.
Example:
# cd /tmp/buildah265508516
# ls -lR . | head -40
.:
total 12
-rw------- 1 root root 3138 Apr 25 16:39 config.json
d--------- 2 root root 40 Apr 25 16:39 empty
-rw-r--r-- 1 root root 200 Mar 9 2015 hosts
d--x------ 3 root root 60 Apr 25 16:39 mnt
-rw-r--r-- 1 root root 79 Apr 19 20:23 resolv.conf
./empty:
total 0
./mnt:
total 0
drwxr-x--- 19 root root 380 Apr 25 16:39 rootfs
./mnt/rootfs:
total 0
drwxr-xr-x 2 root root 1680 Apr 8 14:30 bin
drwxr-xr-x 2 root root 40 Apr 8 14:30 dev
drwxr-xr-x 15 root root 720 Apr 8 14:30 etc
drwxr-xr-x 2 root root 40 Apr 8 14:30 home
[...]
Observations:
The weird permissions on empty (000) and mnt (100) persist within the namespaces, so you’ll want to be namespace root to look around.
hosts and resolv.conf are identical to the host’s.
empty is still an empty directory within the namespaces. What is this for?
mnt/rootfs contains the container root filesystem. It is a tmpfs. No other new filesystems are mounted within the namespaces.
13.6.1.3. config.json¶
OCI documentation:
This is the meat of the container configuration. Below is an example config.json, along with commentary and how it maps to ch-run arguments. It was pretty-printed with jq . config.json, and we re-ordered the keys to match the documentation.
A number of additional keys appear in the documentation but not in this example. These are all unsupported, either by being ignored or by throwing an error. The ch-run-oci man page documents comprehensively which OCI features are and are not supported.
{
"ociVersion": "1.0.0",
We validate that this is “1.0.0”.
"root": {
"path": "/tmp/buildah115496812/mnt/rootfs"
},
Path to the root filesystem; maps to NEWROOT. If key readonly is false or absent, add --write.
"mounts": [
{
"destination": "/dev",
"type": "tmpfs",
"source": "/dev",
"options": [
"private",
"strictatime",
"noexec",
"nosuid",
"mode=755",
"size=65536k"
]
},
{
"destination": "/dev/mqueue",
"type": "mqueue",
"source": "mqueue",
"options": [
"private",
"nodev",
"noexec",
"nosuid"
]
},
{
"destination": "/dev/pts",
"type": "devpts",
"source": "pts",
"options": [
"private",
"noexec",
"nosuid",
"newinstance",
"ptmxmode=0666",
"mode=0620"
]
},
{
"destination": "/dev/shm",
"type": "tmpfs",
"source": "shm",
"options": [
"private",
"nodev",
"noexec",
"nosuid",
"mode=1777",
"size=65536k"
]
},
{
"destination": "/proc",
"type": "proc",
"source": "/proc",
"options": [
"private",
"nodev",
"noexec",
"nosuid"
]
},
{
"destination": "/sys",
"type": "bind",
"source": "/sys",
"options": [
"rbind",
"private",
"nodev",
"noexec",
"nosuid",
"ro"
]
},
{
"destination": "/etc/hosts",
"type": "bind",
"source": "/tmp/buildah115496812/hosts",
"options": [
"rbind"
]
},
{
"destination": "/etc/resolv.conf",
"type": "bind",
"source": "/tmp/buildah115496812/resolv.conf",
"options": [
"rbind"
]
}
],
This says what filesystems to mount in the container. It is a mix: tmpfses, bind-mounts of both files and directories, and other non-device-backed filesystems. The docs suggest a lot of flexibility, including stuff that won’t work in an unprivileged user namespace (e.g., filesystems backed by a block device).
The things that matter seem to be the same as Charliecloud defaults. Therefore, for now we just ignore mounts.
"process": {
"terminal": true,
This says that Buildah wants a pseudoterminal allocated. Charliecloud does not currently support that, so we error in this case.
However, Buildah can be persuaded to set this to false if you redirect its standard input from /dev/null, which is the current workaround; things then work fine.
"cwd": "/",
Maps to --cd.
"args": [
"/bin/sh",
"-c",
"apk add --no-cache bc"
],
Maps to COMMAND [ARG ...]. Note that we do not run ch-run via the shell, so there aren’t worries about shell parsing.
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"https_proxy=http://proxyout.lanl.gov:8080",
"no_proxy=localhost,127.0.0.1,.lanl.gov",
"HTTP_PROXY=http://proxyout.lanl.gov:8080",
"HTTPS_PROXY=http://proxyout.lanl.gov:8080",
"NO_PROXY=localhost,127.0.0.1,.lanl.gov",
"http_proxy=http://proxyout.lanl.gov:8080"
],
Environment for the container. The spec does not say whether this is the complete environment or whether it should be added to some default environment. We treat it as the complete environment, i.e., we place the variables in a file and then use --unset-env='*' --set-env=FILE.
"rlimits": [
{
"type": "RLIMIT_NOFILE",
"hard": 1048576,
"soft": 1048576
}
]
Process limits Buildah wants us to set with setrlimit(2). Ignored.
"capabilities": {
...
},
Long list of capabilities that Buildah wants. Ignored. (Charliecloud provides security by remaining an unprivileged process.)
"user": {
"uid": 0,
"gid": 0
},
},
Maps to --uid=0 --gid=0.
"linux": {
"namespaces": [
{
"type": "pid"
},
{
"type": "ipc"
},
{
"type": "mount"
},
{
"type": "user"
}
],
Namespaces that Buildah wants. Ignored; Charliecloud just does user and mount.
"uidMappings": [
{
"hostID": 0,
"containerID": 0,
"size": 1
},
{
"hostID": 1,
"containerID": 1,
"size": 65536
}
],
"gidMappings": [
{
"hostID": 0,
"containerID": 0,
"size": 1
},
{
"hostID": 1,
"containerID": 1,
"size": 65536
}
],
Describes the identity map between the namespace and host. Buildah wants it much larger than Charliecloud’s single entry and asks for container root to be host root, which we can’t do. Ignored.
"maskedPaths": [
"/proc/acpi",
"/proc/kcore",
...
],
"readonlyPaths": [
"/proc/asound",
"/proc/bus",
...
]
The spec says to “mask over the provided paths ... so they cannot be read” and “set the provided paths as readonly”. Ignored. (The unprivileged user namespace protects us.)
}
}
End of example.
13.6.1.4. State¶
The OCI spec does not say how the JSON document describing state should be given to the caller. Buildah is happy to get it on the runtime’s standard output.
ch-run-oci provides an OCI-compliant state document. Status creating will never be returned, because the create operation is essentially a no-op; annotations are not supported, so the annotations key will never be given.
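For reference, a hedged sketch of what such a state document might look like (field names follow the OCI runtime spec; all values here are invented, and ch-run-oci’s exact output may differ):

```python
import json

# Hypothetical state document. Per the text above, status "creating" is never
# reported and the optional "annotations" key is never present.
state = {
   "ociVersion": "1.0.0",
   "id": "buildah-example",           # invented container ID
   "status": "created",               # one of: created, running, stopped
   "pid": 62763,                      # invented
   "bundle": "/tmp/buildah-example",  # invented bundle path
}
print(json.dumps(state, indent=2))    # Buildah reads this from standard output
```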
13.6.1.5. Additional sources¶
buildah man page: https://github.com/containers/buildah/blob/master/docs/buildah.md
buildah bud man page: https://github.com/containers/buildah/blob/master/docs/buildah-bud.md
runc create man page: https://raw.githubusercontent.com/opencontainers/runc/master/man/runc-create.8.md
https://github.com/opencontainers/runtime-spec/blob/master/runtime.md
13.6.2. ch-image¶
13.6.2.1. pull¶
Images pulled from registries come with OCI metadata, i.e., a “config blob”. This is stored verbatim in /ch/config.pulled.json for debugging. Charliecloud metadata, which includes a translated subset of the OCI config, is kept up to date in /ch/metadata.json.
13.6.2.2. push¶
Image registries expect a config blob at push time. This blob consists of both OCI runtime and image specification information.
OCI run-time and image documentation:
Since various OCI features are unsupported by Charliecloud, we push only what is necessary to satisfy general image registry requirements.
The pushed config is created on the fly, referencing the image’s metadata and layer tar hash. For example, including commentary:
{
"architecture": "amd64",
"charliecloud_version": "0.26",
"comment": "pushed with Charliecloud",
"config": {},
"container_config": {},
"created": "2021-12-10T20:39:56Z",
"os": "linux",
"rootfs": {
"diff_ids": [
"sha256:607c737779a53d3a04cbd6e59cae1259ce54081d9bafb4a7ab0bc863add22be8"
],
"type": "layers"
},
"weirdal": "yankovic"
The fields above are expected by the registry at push time, with the exception of charliecloud_version and weirdal, which are Charliecloud extensions.
"history": [
{
"created": "2021-11-17T02:20:51.334553938Z",
"created_by": "/bin/sh -c #(nop) ADD file:cb5ed7070880d4c0177fbe6dd278adb7926e38cd73e6abd582fd8d67e4bbf06c in / ",
"empty_layer": true
},
{
"created": "2021-11-17T02:20:51.921052716Z",
"created_by": "/bin/sh -c #(nop) CMD [\"bash\"]",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:08Z",
"created_by": "FROM debian:buster",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "RUN ['/bin/sh', '-c', 'apt-get update && apt-get install -y bzip2 wget && rm -rf /var/lib/apt/lists/*']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "WORKDIR /usr/local/src",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "ARG MC_VERSION='latest'",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:19Z",
"created_by": "ARG MC_FILE='Miniconda3-latest-Linux-x86_64.sh'",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:21Z",
"created_by": "RUN ['/bin/sh', '-c', 'wget -nv https://repo.anaconda.com/miniconda/$MC_FILE']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:33Z",
"created_by": "RUN ['/bin/sh', '-c', 'bash $MC_FILE -bf -p /usr/local']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:33Z",
"created_by": "RUN ['/bin/sh', '-c', 'rm -Rf $MC_FILE']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:33Z",
"created_by": "RUN ['/bin/sh', '-c', 'which conda && conda --version']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:34Z",
"created_by": "RUN ['/bin/sh', '-c', 'conda config --set auto_update_conda False']",
"empty_layer": true
},
{
"created": "2021-11-30T20:14:34Z",
"created_by": "RUN ['/bin/sh', '-c', 'conda config --add channels conda-forge']",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:07Z",
"created_by": "RUN ['/bin/sh', '-c', 'conda install --yes obspy']",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:07Z",
"created_by": "WORKDIR /",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:08Z",
"created_by": "RUN ['/bin/sh', '-c', 'wget -nv http://examples.obspy.org/RJOB_061005_072159.ehz.new']",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:08Z",
"created_by": "COPY ['hello.py'] -> '.'",
"empty_layer": true
},
{
"created": "2021-11-30T20:15:08Z",
"created_by": "RUN ['/bin/sh', '-c', 'chmod 755 ./hello.py']"
}
]
}
The history section is collected from the image’s metadata, with empty_layer added to all entries except the last to represent a single-layer image. This is needed because Quay checks that the number of non-empty history entries matches the number of pushed layers.
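The rule can be sketched as follows (the function name is invented; this is not the actual ch-image code): mark every history entry except the last as an empty layer, leaving exactly one non-empty entry for the single pushed layer.

```python
# Hypothetical helper illustrating the empty_layer rule for single-layer
# images.
def history_mark_empty(entries):
   for e in entries[:-1]:
      e["empty_layer"] = True
   return entries

hist = history_mark_empty([{"created_by": "FROM debian:buster"},
                           {"created_by": "WORKDIR /"},
                           {"created_by": "RUN chmod 755 ./hello.py"}])
# Quay now sees one non-empty history entry for the one pushed layer.
```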
13.7. Miscellaneous notes¶
13.7.1. Updating bundled Lark parser¶
To change the version of the bundled Lark parser, you must modify multiple files. To find them, e.g. for version 1.1.9 (the regex is hairy so it catches both dot notation and tuples, but not the list of filenames in lib/Makefile.am):
$ misc/grep -E '1(\.|, )1(\.|, )9($|\s|\))'
What to do in each location should either be obvious or commented.
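To see why the regex catches both notations, a quick check (using Python’s re module for illustration):

```python
import re

# The version-bump regex from above, checked against both notations.
pat = re.compile(r'1(\.|, )1(\.|, )9($|\s|\))')

assert pat.search("LARK_MIN = (1, 1, 9)")        # tuple notation
assert pat.search("lark >= 1.1.9")               # dot notation at end of line
assert pat.search("lark-1.1.90.tar.gz") is None  # longer version: no match
```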