Acton is an advanced general-purpose programming language offering functional and object-oriented styles of programming, based on the actor model and async I/O. Type safe and with capability-based security, Acton is statically compiled for high performance and portability. In other words, pretty much perfect ;) We hope you enjoy it as much as we do. It is well suited to building anything from advanced "shell scripts" to low-level databases.
Unique among programming languages, Acton offers orthogonal persistence, which means you don't have to think about how to persist data, or rather the state of your program, for durability. Acton will do it for you, using its fault-tolerant distributed database. Pretty damn cool!
Hello World
We follow tradition and introduce Acton with the following minimal example.
Source:
# This is a comment, which is ignored by the compiler.
# An actor named 'main' is automatically discovered and recognized as the root
# actor. Any .act file with a main actor will be compiled into a binary
# executable and the main actor becomes the starting point.
actor main(env):
    print("Hello World!")
    env.exit(0)
Compile and run:
acton hello.act
./hello
Output:
Hello World!
Description
When an Acton program runs, it really consists of a collection of actors that interact with each other. In the above example, we have just a single actor, named main, which acts as the root actor of our system. The root actor of a system takes a parameter env, which represents the execution environment. env has methods for accessing command line arguments and carries a reference to the capabilities of the surrounding world, WorldCap, for accessing the environment, e.g. reading from and writing to keyboard/screen and files, working with sockets, etc.
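As a small sketch of going beyond Hello World, the root actor can inspect its command line arguments through env.argv (a list of the arguments the program was started with):

```acton
actor main(env):
    # env.argv holds the command line arguments, including the program name.
    for arg in env.argv:
        print("arg:", arg)
    env.exit(0)
```

This is the same pattern as the Hello World example: all interaction with the outside world flows through env.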
Installation
For Debian-derived distributions that use dpkg and the APT ecosystem, add the Acton APT repo and install from there:
sudo install -m 0755 -d /etc/apt/keyrings
sudo wget -q -O /etc/apt/keyrings/acton.asc https://apt.acton-lang.io/acton.gpg
sudo chmod a+r /etc/apt/keyrings/acton.asc
echo "deb [signed-by=/etc/apt/keyrings/acton.asc] http://apt.acton-lang.io/ stable main" | sudo tee -a /etc/apt/sources.list.d/acton.list
sudo apt-get update
sudo apt-get install -qy acton
Installing tip releases
Tip releases are built from the latest commit on the main branch of the acton git repo. They are built at least once a night, so they can be thought of as nightlies, only often more up to date.
For Debian-derived distributions that use dpkg and the APT ecosystem, add the Acton APT tip repo and install from there:
sudo install -m 0755 -d /etc/apt/keyrings
sudo wget -q -O /etc/apt/keyrings/acton.asc https://apt.acton-lang.io/acton.gpg
sudo chmod a+r /etc/apt/keyrings/acton.asc
echo "deb [signed-by=/etc/apt/keyrings/acton.asc] http://aptip.acton-lang.io/ tip main" | sudo tee -a /etc/apt/sources.list.d/acton.list
sudo apt-get update
sudo apt-get install -qy acton
Shebang
While Acton is a compiled language and the acton compiler produces an executable binary, script style execution is also possible through the use of a shebang line.
Source:
#!/usr/bin/env runacton
actor main(env):
    print("Hello World!")
    env.exit(0)
Ensure the executable bit is set and run your .act file directly:
chmod a+x hello.act
./hello.act
Output:
Hello World!
Acton Projects
Besides compiling individual .act files, it is possible to organize Acton code into an Acton Project, which is suitable once you have more than one .act source code file.
Use acton to create a new project called foo:
acton new foo
Output:
Created project foo
Enter your new project directory with:
cd foo
Compile:
acton build
Run:
./out/bin/foo
Description
Use acton build to build a project. The current working directory must be the project directory or a sub-directory of the project directory. acton will discover all source files and compile them in dependency order.
Add a main actor to any source file directly under src/ to produce an executable binary. For example, if src/hello.act contains a main actor, it will produce out/bin/hello using main as the root actor.
Projects and modules work together: src/ defines the local module
tree, while Build.act defines the project identity and its external
dependencies. See Modules for local source layout and
Package Management for remote dependencies and
override behavior.
Build configuration and lineage
Projects must include a Build.act file. Two common fields are name and fingerprint, where the fingerprint captures the project’s lineage:
name = "hello"
fingerprint = 0x1234abcd5678ef00
name and fingerprint are required for Acton projects. Acton validates that the fingerprint matches the name’s lineage prefix. A mismatch indicates a rename or a fork, so the build fails and tells you to generate a new fingerprint for the new name. If either field is missing, the build fails with guidance to add it.
Running Tests
Writing tests is an integral part of writing software. In an Acton project, you can run all the tests by issuing acton test:
foo@bar:~hello$ acton test
Tests - module hello:
foo: OK: 278 runs in 50.238ms
All 1 tests passed (23.491s)
foo@bar:~hello$
See the Testing section on how to write tests.
Language
This chapter is the core language guide and reference.
It is organized around common programming tasks rather than around a taxonomy of language features. The early sections are the things most people need first. The later sections go deeper into Acton-specific parts of the language such as actors, concurrency, and capabilities.
What to read first
If you are new to Acton, read in this order:
- Common programming concepts
- Collections and everyday data
- Missing values and failures
- Modeling data and interfaces
- Organizing code
- Working with types
- Actors & concurrency
- Environment and capabilities
How to use this chapter as reference
- Use the book's search when you want to find a specific feature, keyword, or builtin quickly. This chapter is organized for reading and programming tasks, not as a strict feature index.
- Use section landing pages for overview.
- Use subpages for details and examples.
- Use the beginner and advanced callouts to choose the level of detail you need.
- For runtime and tooling behavior, see the chapters after Language (Testing, Build System, Package Management, and RTS).
Common programming concepts
This section covers the pieces you use in almost every Acton program: names and scope, built-in types, expressions, functions, comments, and control flow.
These pages are the foundation for the rest of the guide. Read them in order:
- Variables, constants, and scope
- Built-in types and literals
- Expressions and operators
- Functions
- Comments
- Control flow
Most examples use actor main(env): as the entry point because Acton programs run from a root actor. You do not need to understand actors in depth yet to follow these pages; actors are explained later.
Variables, constants, and scope
Names let you store values and use them again later.
Programmers often use variable to mean "a named place where a value is stored". In Acton, the more precise question is whether that name is a constant binding or a mutable one.
A constant binding keeps the same value for its lifetime. A mutable binding is one whose value may be updated later. In everyday English, mutable just means "able to change".
Where a name is defined determines where it can be used. This is called its scope.
Scope means "the part of the program where a name is visible". A name defined inside a function or method is only available there. Module-level names live across the module. Actor bodies have their own rules, which are covered below.
greeting = "hello"  # module-level constant

def show_local():
    greeting = "hi"  # local name that shadows the module-level greeting
    print("local greeting:", greeting)

def show_global():
    print("global greeting:", greeting)

actor main(env):
    show_local()
    show_global()
    env.exit(0)
In this example:
- The module-level `greeting` is a constant.
- The `greeting` inside `show_local` is a different local name.
Module-level names
Names defined at module level are constants. They are useful for helper functions, reusable values, and definitions that the rest of the module shares.
port = 9000
def address():
    return "127.0.0.1:" + str(port)
Here port is a constant: after it is defined, you use it,
but you do not update it. That is a good default for names that describe
configuration, helper values, and definitions shared across a module.
Local names
Names defined inside a function or method are local to that body.
def greet(name):
    message = "Hello " + name
    print(message)
Here, message only exists inside greet.
Names in actors
Acton treats names at the top level of an actor body differently from names inside a function or method.
actor Counter():
    var remaining = 3
    label = "counter"
    _unit = "items"

    def tick():
        print(label, remaining, _unit)
        remaining -= 1
In this example:
- `remaining` is private mutable actor state.
- `label` is a public constant attribute.
- `_unit` is private to the actor because its name starts with `_`.
Read var as "this actor-local name will change over
time". If you plan to update a value in actor code, make that explicit
up front.
A plain name at the top level of an actor body is not an ordinary
local variable. It becomes a constant actor attribute instead. If the
name starts with _, it stays private to the actor. Without
the leading underscore, other actors can read that constant through an
actor reference.
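As a minimal sketch of that visibility rule, another actor can read a public constant attribute through a reference to the actor. This reuses the Counter actor from above; the exact behavior of attribute access across actors is covered in the actors chapter:

```acton
actor Counter():
    var remaining = 3   # private mutable state, not visible outside
    label = "counter"   # public constant attribute

actor main(env):
    c = Counter()
    # Other actors can read the public constant through the reference.
    print(c.label)
    env.exit(0)
```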
Acton keeps module-level bindings constant and pushes mutable shared state into actors. That changes how you structure larger programs: refactoring stateful logic usually means introducing an actor boundary, not another mutable top-level name.
Shadowing
Shadowing means introducing a new local name with the same spelling as an existing one. The outer name still exists, but the inner one is the one used in that scope.
name = "Acton"

def show():
    name = "local"
    print(name)

show()
print(name)
Here, show introduces a local name that shadows the module-level
name. Inside show, name means "local". Outside show, it still
means "Acton".
Shadowing is sometimes useful, but overusing it makes code harder to read. Prefer distinct names when the two values mean different things.
Built-in types and literals
Every value in Acton has a type.
Many values can be written directly in source code. These are called literals.
42, True, and "hello" are all
literals: the value is written directly in the program instead of being
computed somewhere else.
Common built-in types
| Type | Example | Notes |
|---|---|---|
| `int` | `42` | 64-bit signed integer |
| `bigint` | `123456789012345678901234567890` | arbitrary-precision integer |
| `i8`, `i16`, `i32`, `u1`, `u8`, `u16`, `u32`, `u64` | `u16(42)` | explicitly sized integers |
| `float` | `3.14` | 64-bit floating-point number |
| `complex` | `complex.from_real_imag(1.0, 2.0)` | complex number |
| `bool` | `True` | `True` or `False` |
| `str` | `"hello"` | Unicode text |
| tuple | `(1, "two")` | fixed-size group of values |
actor main(env):
    whole = 42
    huge = bigint(123456789012345678901234567890)
    ratio = 3.5
    truth = True
    name = "Acton"
    point = (x=3, y=4)
    z = complex.from_real_imag(2.0, 3.0)
    print(whole, huge, ratio, truth, name, point, z)
    env.exit(0)
Integer literals are not all the same internally. Small whole-number
literals usually fit in int, while very large literals may
need u64 or bigint. If the exact type matters,
write it explicitly.
Choosing a type
Use:
- `int` for ordinary whole numbers
- `bigint` when whole numbers may grow beyond the normal `int` range
- `float` for fractional values
- `bool` for yes/no conditions
- `str` for text
- tuples for small fixed-size groups of values
Reach for fixed-size integers when width or sign matters, and for
complex when you need real and imaginary parts together.
Lists, dictionaries, and sets are covered in Collections.
More detail
Integers
Integers are whole numbers such as 0, 42, and -7.
Acton has three groups of integer types:
- `int` for the normal 64-bit signed integer type
- `bigint` for integers that must grow beyond the `int` range
- explicitly sized signed and unsigned integers such as `i32` and `u16`
bigint lets values grow arbitrarily large, ensuring correct program behavior when you are uncertain about the exact size needed. However, because bigint is significantly slower than the bounded integer types, do not default to bigint out of convenience. For the vast majority of normal use cases, int is large enough and considerably faster. Use exact-width integers when you specifically need their bit width.
Bounded integer types can often be compiled in an unboxed form, which avoids boxing overhead and can make arithmetic several orders of magnitude faster than bigint in tight code. That is another reason to prefer int or an exact-width integer when the bounded range is the right fit, and to reserve bigint for values that truly need arbitrary precision.
If you are not sure which integer type to use, start with
int. Move to bigint when values may get very
large, and use the exact-width types when you need to match a protocol,
file format, or external API.
| Type | Min | Max |
|---|---|---|
| `i8` | -128 | 127 |
| `i16` | -32768 | 32767 |
| `i32` | -2147483648 | 2147483647 |
| `u1` | 0 | 1 |
| `u8` | 0 | 255 |
| `u16` | 0 | 65535 |
| `u32` | 0 | 4294967295 |
| `u64` | 0 | 18446744073709551615 |
| `int` | -9223372036854775808 | 9223372036854775807 |
| `bigint` | arbitrary | arbitrary |
Basic use
actor main(env):
    count = 42
    port = u16(5000)
    huge = bigint(123456789012345678901234567890)
    print("count:", count)
    print("port:", port)
    print("huge:", huge)
    print("widened:", int(port))
    env.exit(0)
Use int for everyday counting and arithmetic. Use bigint when a
value may exceed the normal machine-sized range. Use exact-width types
when the bit pattern matters.
Converting integers
Convert by calling the target type as a constructor.
int(42)
bigint(42)
u16(255)
Widening to a larger type is straightforward:
small = u16(255)
widened = int(small)
Converting to a narrower type checks that the value fits:
safe = u16(12345)
# u16(70000) would raise ValueError
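When the value comes from input and might not fit, the failure can be handled explicitly rather than crashing the program. A sketch, assuming Python-style try/except as covered in the error-handling material:

```acton
actor main(env):
    try:
        # 70000 does not fit in a u16, so the conversion raises ValueError.
        port = u16(70000)
        print("port:", port)
    except ValueError:
        print("value does not fit in u16")
    env.exit(0)
```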
Large integer literals are inferred by size. Values above the normal
int range may infer as u64 or
bigint. When you care about the exact type, annotate it or
use an explicit constructor.
Floating-point numbers
float is Acton's 64-bit floating-point type.
Use it for values with a fractional part, such as measurements, ratios, and scientific calculations.
A decimal point usually means a float literal. Use
floats for measurements and ratios, and do not be surprised by small
rounding artifacts in the last digits.
actor main(env):
    distance = 12.5
    time = 4.0
    speed = distance / time
    print("speed:", speed)
    print("rounded:", round(speed, 2))
    print("formatted: %.2f" % speed)
    env.exit(0)
Floating-point arithmetic is approximate, not exact.
a = 0.1
b = 0.2
print(a + b)
Floats trade exactness for range and speed. They are usually the right tool for physical measurements and approximate calculations, but not for values where exact decimal behavior is required. Equality on computed floats is often brittle, so treat exact comparison with care once values have gone through prior arithmetic.
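A common way to make that care concrete is to compare computed floats against a tolerance instead of using ==. A sketch; the helper name and the eps threshold are illustrative choices, not language constants:

```acton
def approx_equal(a: float, b: float, eps: float) -> bool:
    # Treat values as equal when they differ by less than eps.
    return abs(a - b) < eps

actor main(env):
    # 0.1 + 0.2 is not exactly 0.3, but it is close enough for most uses.
    print(approx_equal(0.1 + 0.2, 0.3, 1e-9))
    env.exit(0)
```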
Complex numbers
Complex numbers combine a real part and an imaginary part.
Create them with complex.from_real_imag(real, imag).
If you have not used complex numbers before, think of them as two floating-point values with arithmetic rules built in. Most programs do not need them, but they are useful in math-heavy domains.
actor main(env):
    a = complex.from_real_imag(1.0, 2.0)
    b = complex.from_real_imag(3.0, 4.0)
    print("sum:", a + b)
    print("product:", a * b)
    print("real part:", a.real())
    print("imag part:", a.imag())
    print("conjugate:", a.conjugate())
    print("magnitude:", abs(a))
    env.exit(0)
Arithmetic
Complex numbers support the usual arithmetic operations:
- `+` and `-`
- `*` and `/`
- `**`
They also support equality testing with == and !=.
a = complex.from_real_imag(1.0, 2.0)
b = complex.from_real_imag(1.0, 2.0)
print(a == b)
Complex numbers are part of Acton's numeric world and work with the
same floating-point realities as float: rounding and
precision limits still matter. Any algorithm that wants a notion of
ordering has to state it explicitly, such as by comparing magnitude,
rather than relying on the usual ordered-number intuition.
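A sketch of stating that ordering explicitly, using abs(...) for magnitude as shown above (the helper name is illustrative):

```acton
def max_by_magnitude(a: complex, b: complex) -> complex:
    # Complex numbers have no built-in ordering; compare magnitudes explicitly.
    if abs(a) >= abs(b):
        return a
    return b

actor main(env):
    x = complex.from_real_imag(1.0, 2.0)
    y = complex.from_real_imag(3.0, 0.0)
    print(max_by_magnitude(x, y))
    env.exit(0)
```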
Errors
Division by zero raises ZeroDivisionError.
zero = complex.from_real_imag(0.0, 0.0)
# a / zero
Tuples
Tuples group a fixed number of values into one value.
The fields in a tuple can have different types.
A tuple is a good first step for a small value with a fixed shape. When the shape needs names but still behaves like plain data, use a named tuple. When the value needs methods, invariants, or a lifecycle, move to a class.
A tuple has a fixed shape. If you need a small value with exactly two
or three fields, a tuple is often a good fit. If you keep forgetting
what .0 and .1 mean, switch to a named tuple
so the fields explain themselves. If the data starts needing methods or
construction rules, switch to a class.
actor main(env):
    pair = ("Ada", 36)
    point = (x=3, y=4)
    print(pair.0)
    print(pair.1)
    print(point.x)
    print(point.y)
    env.exit(0)
Access positional tuple fields with .0, .1, and so on.
Named tuples use field names such as .x and .y.
Returning tuples from functions
Tuples are handy when a function naturally returns a small fixed group of values.
def parse_result():
    return (ok=True, code=200)
Acton can infer the tuple shape here from the returned value.
Named tuples are the bridge between raw tuple positions and classes. They keep the value lightweight while making the shape self-documenting. Because the tuple shape is part of the type, changing field count or names is an API change.
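Building on the parse_result example above, the call site reads naturally because the fields are named. A sketch:

```acton
def parse_result():
    return (ok=True, code=200)

actor main(env):
    r = parse_result()
    # Field names document the shape at each use site.
    if r.ok:
        print("status code:", r.code)
    env.exit(0)
```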
Expressions and operators
Acton evaluates expressions to values. Operators combine values into new expressions.
Common operators
- Arithmetic: `+`, `-`, `*`, `/`, `//`, `%`, `**`
- Comparison: `==`, `!=`, `<`, `<=`, `>`, `>=`
- Boolean: `and`, `or`, `not`
- Membership: `in`, `not in`
actor main(env):
    a = 10
    b = 3
    word = "Acton"
    point = (3, 4)
    print("sum:", a + b)
    print("product:", a * b)
    print("floor division:", a // b)
    print("remainder:", a % b)
    print("power:", b ** 3)
    print("comparison:", a > b)
    print("boolean:", a > 0 and b > 0)
    print("membership:", "ct" in word)
    print("indexing:", word[0], point[1])
    print("slicing:", word[1:4])
    env.exit(0)
Method calls, function calls, indexing, and slicing are expressions too. They all produce values.
name = "Acton"
first = name[0]
upper = name.upper()
Reading expressions
When reading an expression, start with the smallest parts that already have values, then see how the operator or call combines them. If you have to stop and think about precedence, add parentheses.
Precedence
Acton follows standard operator precedence rules. When in doubt, use parentheses to make intent explicit.
x = 2 + 3 * 4 # 14
y = (2 + 3) * 4 # 20
Boolean logic
and and or are useful when the right-hand side only makes sense if
the left-hand side has already passed a check.
if user_is_known and user_is_enabled:
    print("welcome")
and and or short-circuit. That means Acton
only evaluates the right-hand side when it is needed to determine the
result. That matters not just for speed, but for semantics: you can use
short-circuiting to guard operations that would otherwise fail or do
unnecessary work. Calls, indexing, slicing, and later optional
operations all participate in the same expression model.
a = 0
if a != 0 and 10 / a > 2:
    print("large enough")
Functions
Functions package reusable logic behind a name. Declare a function with
def.
Functions are values too, so you can pass behavior into another function when the call site should choose the variation. See Higher order functions for that pattern.
def clamp(n, low, high):
    if n < low:
        return low
    if n > high:
        return high
    return n

actor main(env):
    print(clamp(3, 0, 5))
    print(clamp(-2, 0, 5))
    print(clamp(9, 0, 5))
    env.exit(0)
Function arguments are local names inside the function body.
Type inference makes local helpers cheap to write, but it does not make their types unimportant. As soon as a function is used across a module boundary, callers depend on whatever the compiler inferred for its arguments, return value, effects, and generic constraints. Those facts are part of the callable contract even when no explicit signature appears in the source.
That has a practical consequence: changing a function from
pure to proc, tightening a generic bound, or
making a result optional is not just an implementation tweak. It can
force changes at call sites. That is why public functions deserve a
higher bar than local helpers. It is often fine to let inference handle
small local code, but API-facing functions should be read as typed
interfaces whether the signature is written down or not.
Arguments
Arguments are available only inside the function where they are defined. That makes them useful for small, self-contained pieces of logic.
Think of a function as a small named tool. You give it input values, it does some work, and it may give a result back. Calling a function is just another expression, so you can store or pass along its result.
When the behavior itself should vary, pass a function instead of branching in every caller. That keeps the shared work in one place.
Returning values
Use return to send a value back to the caller.
def square(n):
    return n * n
If a function reaches the end without a return, the result is None.
return ends the function immediately. Any code after it
in the same block does not run.
def greet(name):
    print("Hello", name)
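An early return is a common way to use this: check a special case first and leave the function before the main work happens. A sketch; the helper name and the choice of 0 as the fallback are illustrative:

```acton
def safe_div(a: int, b: int) -> int:
    if b == 0:
        # return ends the function here; the division below never runs.
        return 0
    return a // b
```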
Practical guidance
- Keep functions focused on one job.
- Prefer returning values over printing inside helper functions.
- Give functions names that describe what they compute or do.
Higher order functions
Acton supports higher order functions, which means you can pass a function as an argument to another function and choose behavior at the call site.
That is Acton's nearest equivalent to the reusable part of Rust's closure and iterator story. Acton does not have a Rust-style closure or iterator-adapter path to learn first; use higher-order functions, comprehensions, and explicit iteration instead.
A function can be treated like any other value. That makes it easy to reuse one loop, one validation path, or one calculation with different behaviors plugged in.
def apply_twice(fun, value):
    return fun(fun(value))

def double(n):
    return 2 * n

def square(n):
    return n * n

actor main(env):
    print(apply_twice(double, 3))
    print(apply_twice(square, 2))
    env.exit(0)
This prints:
12
16
Use this pattern when the operation is the thing that changes and the overall shape of the work stays the same.
Higher order functions work best when the varying behavior is small, stateless, and easy to describe as "apply this operation here". That makes them a good fit for callbacks, adapters, validation hooks, and small reusable transformation steps. Once the behavior needs evolving state across calls, the design question changes: you are no longer just passing behavior, you are passing behavior plus state.
At that point, an actor or a small object is often a better home than trying to simulate closure-heavy code by threading more and more helper arguments through every call. The same applies to collection work: if a transformation is local and easy to read, a comprehension or short loop usually expresses it more clearly than a stack of tiny callback-style helpers. Reach for higher order functions when they clarify the reusable variation, not just because the language allows them.
Comments
Use # for comments.
Comments are for readers of the code. Use them to explain intent, assumptions, or surprising choices.
def area(width, height):
    # Width and height are measured in meters.
    return width * height

actor main(env):
    # Keep the greeting short because it is printed in a narrow terminal.
    message = "Hello"
    print(message)
    env.exit(0)
A comment is ignored by the compiler. It is there only for people
reading the source code. There is no separate block-comment syntax here;
use # on each line you want to comment.
What to comment
Use comments for things the code does not make obvious:
- why a check exists
- units or external constraints
- a temporary workaround
- a non-obvious invariant
Practical guidance
- Prefer comments that explain why, not comments that only repeat what the code already says.
- Keep comments close to the code they describe.
- Update or remove comments when the code changes.
Control flow
Control flow decides which code runs, when it runs, and how often it runs.
Acton does not have a Rust-style match expression. Use
if/elif/else for branching, and combine that with optionals or
exceptions when the question is about absence or failure.
Start with these three everyday tools:
- `if`/`elif`/`else` for branching
- `for` for iterating over items
- `while` for repeating while a condition stays true
Loops also support a few extra tools:
- `break` to stop the loop early
- `continue` to skip to the next iteration
- `else` to run code after a loop finishes normally, without a `break`
If you are new to programming, start with if,
for, and while and treat loop
else as optional. Most code uses the first three tools far
more often.
for n in range(5):
    if n == 2:
        continue
    if n == 4:
        break
    print(n)
else:
    print("finished without break")
Loop else is tied to break, not to "zero
iterations". It runs whenever the loop finishes normally, which makes it
useful for search-style logic but easy to misread if used casually.
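A sketch of the search-style use, where the else branch reports that nothing matched (the helper name is illustrative):

```acton
def report_find(needle: str, haystack: list[str]):
    for item in haystack:
        if item == needle:
            print("found:", item)
            break
    else:
        # Runs only when the loop finished without hitting break.
        print("not found:", needle)
```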
Acton also has actor-specific control patterns such as
after. Those matter once you start
thinking about actors, timers, and concurrency.
if / elif / else
Use if to run code only when a condition is true.
Use elif for more cases and else for the fallback case.
def describe(n):
    if n < 0:
        return "negative"
    elif n > 0:
        return "positive"
    else:
        return "zero"

actor main(env):
    print(describe(-7))
    print(describe(0))
    print(describe(5))
    env.exit(0)
The conditions in if, elif, and while are expressions that
evaluate to True or False.
n = 7
if n % 2 == 0:
    print("even")
else:
    print("odd")
Conditionals compose with Acton's expression model, so guards often mix comparisons, membership tests, and short-circuiting in one place. Once the branching logic starts encoding type or protocol decisions, helper functions or explicit narrowing usually read better than a long ladder.
for
Use for ... in ... to go through each value in something like a
string, tuple, or range(...).
actor main(env):
    names = ("Ada", "Grace", "Linus")
    for name in names:
        print("Hello", name)
    for n in range(3):
        print("n =", n)
    env.exit(0)
A for loop is the usual choice when you want to go
through each item in a collection or repeat something a known number of
times. The loop variable is a new local name that takes on each value in
turn.
range(stop) counts from 0 up to, but not including, stop.
for n in range(5):
    print(n)
You can also use range(start, stop, step).
for n in range(2, 10, 2):
    print(n)
The loop variable is local to the loop body. A common pattern is to use
for with a collection when the values matter, and range(...) when the
count matters.
In Acton, for is usually the clearest way to consume an
iterable because it avoids manual index state and keeps the element type
front and center. When you need both the index and the value, prefer
enumerate(...) over range(...). Use
range(...) when the numbers themselves matter, not as a
default substitute for iterating a collection.
for i, name in enumerate(names):
    print(i, name)
while
Use while when you want to keep looping for as long as a condition
stays true.
actor main(env):
    var remaining = 3
    while remaining > 0:
        print("remaining:", remaining)
        remaining -= 1
    print("done")
    env.exit(0)
With while, make sure something in the loop body can
eventually make the condition false. If that condition depends on a
changing value in actor code, that value usually needs
var.
while is useful when the number of iterations is not the main point.
What matters is the condition.
actor main(env):
    var retries = 5
    while retries > 0:
        print("trying...")
        retries -= 1
    env.exit(0)
If you already have a collection or a simple numeric range, a
for loop is usually clearer than a while loop. Reach for while
when the condition really is the center of the logic.
A while loop makes state transitions explicit, which is
useful for retries and actor-local state machines. The tradeoff is that
you now own the loop invariant and termination condition, so it is more
error-prone than for when the iteration boundary is already
known.
Typical uses include retry loops, waiting for a condition to change, and small state machines where each pass updates the state before checking again.
Collections and everyday data
Acton has three built-in collection types for everyday data:
- Lists keep values in order and allow duplicates.
- Dictionaries map keys to values.
- Sets keep unique values and make membership checks direct.
If you are unsure where to start, choose by the question you want to answer. Use a list when order matters, a dictionary when you need to look things up by key, and a set when you mainly care about uniqueness or membership.
Use this section when you need to keep many values together and work with them as one logical value.
Collections pair naturally with higher order functions and comprehensions. That is the common Acton path for turning one collection into another without spreading the transformation across several places.
All collection types are statically typed. A list has one element type, a set has one element type, and a dictionary has one key type and one value type.
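The choice between the three often shows up side by side in one program. A sketch, assuming Python-style literals for all three collection types (the dictionary and set literal syntax is covered on the following pages):

```acton
actor main(env):
    # list: order matters and duplicates are allowed
    visits: list[str] = ["home", "about", "home"]
    # dict: look a value up by key
    scores: dict[str, int] = {"ada": 10, "grace": 12}
    # set: uniqueness and direct membership checks
    tags: set[str] = {"new", "urgent"}
    print(visits[0], scores["ada"], "urgent" in tags)
    env.exit(0)
```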
Lists
Lists are ordered, mutable sequences. Use them when position matters or when you want to keep adding and removing items over time.
Think of a list as a numbered row of slots. Slot 0 is
the first item, slot 1 is the second, and so on. A list
holds one element type, so a list[str] is for strings
only.
Creating lists
fruits = ["apple", "banana", "orange"]
tasks: list[str] = []
numbers: list[int] = [1, 2, 3]
Use a literal when you already have values. Use an empty list when you plan to fill it later. If the compiler cannot infer the element type from context, add an annotation.
Lists are dynamic arrays. Appending is cheap most of the time because storage grows in chunks, not one item at a time.
Reading values
items = ["first", "second", "third", "fourth"]
print(items[0])
print(items[-1])
print(items[1:3])
print(len(items))
print("second" in items)
print(items.index("third"))
Indexing starts at 0. Negative indexes count from the end. Slices use
the familiar start:stop form and include the start but exclude the
stop.
If you are coming from a one-based indexing language, the first item
is still at position 0 here.
Updating lists
items = ["first", "second", "third"]
items.append("fourth")
items.insert(1, "new")
items.extend(["fifth", "sixth"])
print(items)
print(items.pop())
print(items.pop(0))
del items[1]
append() adds one item to the end. insert() places an item at a
specific index. extend() adds several items from another iterable.
pop() removes and returns an item, and del items[i] removes by
index without returning anything.
Appends are amortized O(1). Inserting or deleting near the front of a long list is O(n) because elements need to shift. `pop()` is O(1) at the end and O(n) at other positions.
Common utilities
values = [9, 5, 123, 14, 1, 5]
print(sorted(values))
values.reverse()
print(values)
print(values.count(5))
copy_of_values = values.copy()
values.clear()
print(copy_of_values)
print(values)
sorted() returns a new list. reverse() changes the existing list in
place. count() scans the whole list and counts matches. copy() makes
a shallow copy, which is enough when the elements themselves are simple
values.
If the list contains mutable values, a shallow copy only duplicates the outer list. The items inside are still shared.
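A sketch of that sharing in action, assuming lists of lists behave as described above:

```acton
actor main(env):
    inner = [1, 2]
    outer = [inner]
    copied = outer.copy()
    # copy() duplicated only the outer list; the inner list is shared.
    copied[0].append(3)
    # The append is visible through the original outer list too.
    print(outer[0])
    env.exit(0)
```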
Iterating over lists
names = ["Ada", "Bjarne", "Grace"]

for name in names:
    print(name)

for i, name in enumerate(names):
    print(i, name)
Iteration gives you each item in order. Use enumerate() when you also
need the current index.
List comprehensions
numbers = [1, 2, 3, 4, 5]
squares = [n * n for n in numbers]
evens = [n for n in numbers if n % 2 == 0]
List comprehensions are the compact way to build a new list from an existing iterable. Read them as "make a list of this expression for each item that matches the condition".
Type safety
All items in a list must be of the same type. Mixing types like
["foo", 1, True] will not compile.
strings = ["foo", "bar", "baz"]
numbers = [1, 2, 3, 4, 5]
When a list is empty, give it a type if the surrounding code does not make the element type obvious.
Dictionaries
Dictionaries map keys to values. They keep insertion order, so when you iterate over a dictionary you get keys back in the order they were first added.
Use a dictionary when you want to look something up by name, id, or some other key. If you keep searching through a list for a matching value, a dictionary is often the better shape.
Creating dictionaries
counts = {"apples": 2, "bananas": 4}
empty_counts: dict[str, int] = {}
The key type and value type are fixed for a given dictionary. When a dictionary is empty, give it a type if the surrounding code does not make that obvious.
Looking up values
counts = {"apples": 2, "bananas": 4}
print(counts["apples"])
print(counts.get("pears"))
print(counts.get_def("pears", 0))
print("apples" in counts)
print(len(counts))
Direct indexing requires the key to exist already. get() returns None
when a key is missing. get_def() lets you provide a default value
instead of handling None later.
`None` is a real value, so `get()` is best when missing keys are normal and uninteresting. If `None` means something in your data, `get_def()` or an explicit membership test is clearer.
Updating entries
counts = {"apples": 2, "bananas": 4}
counts["bananas"] = 5
counts["pears"] = 1
del counts["apples"]
print(counts)
Assigning to a key adds it if it is new or replaces the old value if it already exists.
Removing entries
counts = {"apples": 2, "bananas": 4, "pears": 1}
print(counts.pop("bananas"))
print(counts.pop_def("missing", 0))
print(counts.popitem())
pop() removes a key and returns its value. pop_def() does the same
thing with a fallback when the key is absent. popitem() removes and
returns one key/value pair.
Iterating over dictionaries
counts = {"apples": 2, "bananas": 4, "pears": 1}
for key in counts:
print(key)
for key, value in counts.items():
print(key, value)
print(list(counts.keys()))
print(list(counts.values()))
print(list(counts.items()))
Iterating over a dictionary yields keys. Use items() when you need
both the key and the value.
Updating from other data
counts = {"apples": 2}
more_counts = {"bananas": 4, "pears": 1}
counts.update(more_counts.items())
counts.setdefault("apples", 0)
counts.setdefault("grapes", 3)
update() merges entries from another iterable of key/value pairs.
setdefault() is useful when you want to add a missing key only once
and keep the existing value otherwise.
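A short sketch of that behavior, printing the resulting values:

```acton
actor main(env):
    counts = {"apples": 2}
    counts.setdefault("apples", 0)   # key exists, so the value stays 2
    counts.setdefault("grapes", 3)   # key is missing, so it is added as 3
    print(counts["apples"])
    print(counts["grapes"])
    env.exit(0)
```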
Dictionary comprehensions
words = ["hello", "world", "acton"]
lengths = {word: len(word) for word in words}
filtered = {word: len(word) for word in words if len(word) > 4}
indexed = {i: word for (i, word) in enumerate(words)}
Dictionary comprehensions are the concise way to build a dictionary from an iterable. They are useful when the new keys and values come from the old data directly.
Sets
Sets store unique values. Use them when you care about membership or deduplication more than order.
A set is like a bag of distinct items. Adding the same value twice does not create a duplicate. If you only need to know whether something is present, a set usually fits better than a list.
Creating sets
tags = {"docs", "guide", "acton"}
empty_tags: set[str] = set()
Use {...} for a non-empty set. Use set() for an empty one, because
{} means an empty dictionary.
Checking and updating
tags = {"docs", "guide"}
print("docs" in tags)
print("api" not in tags)
tags.add("api")
tags.add("docs")
tags.discard("guide")
tags.update({"reference", "tutorial"})
print(tags)
print(len(tags))
add() inserts one value. discard() removes a value if it is present
and does nothing if it is not. update() adds values from another set
or any iterable of values.
Removing values
tags = {"docs", "guide", "acton"}
print(tags.pop())
print(tags)
pop() removes and returns one element. On an empty set it raises an
exception, so check before calling it if the set may be empty.
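One way to guard the call is to check the size first:

```acton
actor main(env):
    tags: set[str] = set()
    if len(tags) > 0:
        print(tags.pop())            # safe: the set is known to be non-empty
    else:
        print("no tags to remove")
    env.exit(0)
```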
Iteration and order
tags = {"docs", "guide", "acton"}
for tag in tags:
print(tag)
print(sorted(tags))
Sets do not preserve order. Do not rely on the printed order when the exact sequence matters. If you need stable display order, sort the set first.
Set comprehensions
words = ["hello", "world", "hello", "acton"]
unique_words = {word for word in words}
long_words = {word.upper() for word in words if len(word) > 4}
remainder_classes = {n % 3 for n in range(10)}
Set comprehensions are a compact way to build a set while automatically removing duplicates.
Missing values and failures
Programs usually need to represent two different cases:
- a value is genuinely absent
- an operation failed and execution should stop or recover
Acton uses different tools for those cases. None and optional types
model ordinary absence. Exceptions model failures.
Use an optional when absence is part of the normal result. Use an exception when the operation could not finish as intended. A caller can branch on an optional; an exception means the current path should stop unless it is explicitly handled.
There is no Rust-style match path for this in Acton. Use ordinary
branching, optional checks, and exception handling instead.
Start here when you need to decide whether a function should return
None, raise an exception, or let an expression keep propagating an
optional result.
Optional chaining and forced unwrapping are expression-level tools. They do not replace the type system or exception handling; they make the common cases shorter. Keep absence in the type when it is expected, and reserve exceptions for broken assumptions, invalid input, and other failures that should not be treated as routine control flow.
- Optionals and None covers the basic meaning of None and optional return values
- Optionals explains narrowing, optional chaining, and forced unwrapping
- Errors and exceptions covers raise, try, except, else, and finally
Optionals and None
None is the value Acton uses for "nothing here".
An optional type ?T means a value is either a T or None.
Use an optional when absence is part of the normal result.
`?str` means "a string or None". `?int` means "an
integer or None". The `?` belongs to the type, not the
value.
def lookup_name(users: dict[str, str], username: str) -> ?str:
if username in users:
return users[username]
return None
actor main(env):
users = {"alice": "Alice Andersson"}
name = lookup_name(users, "bob")
if name is None:
print("No match")
else:
print("Found:", name)
if name is not None:
print("Upper:", name.upper())
env.exit(0)
Use is None and is not None to check whether an optional is
present. Those checks are the normal way to branch on the value.
Optional values are common in lookups, parsing, and APIs that may or
may not find a result. None is not a general placeholder for every
kind of empty value; use it when absence itself matters.
For nested access, see Optionals. It covers
how None propagates through chains and when to force a value instead
of carrying the optional further.
Optionals
An optional value is either a value of some type or None. The type is
written ?T.
For example, `?str` means "a string or None". A value
of type ?str might hold "Ada", or it might
hold None.
name: ?str = None
A value of type ?T cannot be used everywhere a plain T is expected.
Acton must be able to see that the value is present first.
Narrowing
if x is not None narrows x from ?T to T inside that branch.
if isinstance(x, SomeClass) does the same while also refining the
type.
def upper_or_none(text: ?str) -> ?str:
if text is not None:
return text.upper()
return None
Inside the if branch, text is treated as str, so ordinary string
methods are available.
When you need several guarded accesses, it is often clearer to bind an intermediate name and narrow that name explicitly.
class Residence():
def __init__(self, rooms: int, name: ?str = None):
self.rooms = rooms
self.name = name
class Person():
def __init__(self, name: str, residence: ?Residence):
self.name = name
self.residence = residence
def residence_name(person: ?Person) -> ?str:
if person is not None:
residence = person.residence
if residence is not None:
return residence.name
return None
Each access is guarded by a test that rules out None before the next
access.
Optional chaining
Optional chaining is a shorter way to keep None flowing through a
single expression.
def residence_name(person: ?Person) -> ?str:
return person?.residence?.name
If the value to the left of ?. is None, the whole expression
evaluates to None. If the value to the left of ?[...] is None,
indexing or slicing is skipped and the result is None.
Use ?. for attribute access and method calls, and ?[...] for
indexing and slicing.
def loud_residence_name(person: ?Person) -> ?str:
return person?.residence?.name?.upper()
def first_port(config: ?dict[str, list[int]]) -> ?int:
return config?.get("ports")?[0]
The result of an optional chain is still optional. For example,
person?.residence?.rooms has type ?int, not int.
Optional chaining only affects the current expression. It does not
narrow the value for later statements, so use is None or is not None
when you need to branch on the result.
Optional chaining lifts each later access into optional context. Each
step only runs if the previous step produced a real value; otherwise the
whole expression settles to None immediately. That is why a
chain is good for one-pass extraction of a nested value: it preserves
absence without forcing you to invent a sentinel or write a stack of
temporary checks just to carry None through the expression.
That same property is also the limit of the feature. A chain tells
you only the final result, not which step failed or what should happen
next. Once you need branching, logging, recovery, or repeated use of an
intermediate value, stop chaining and narrow explicitly with
is None / is not None. Optional chaining is
best for compact extraction, not for complex control flow.
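As a sketch, using the Person and Residence classes from above: chain once to extract the value, then branch with an ordinary narrowing check.

```acton
def describe_residence(person: ?Person) -> str:
    name = person?.residence?.name   # still ?str at this point
    if name is not None:
        return "lives at " + name    # narrowed to str inside this branch
    return "residence unknown"
```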
Forced unwrapping
Sometimes None is not an acceptable outcome. In that case, use forced
unwrapping to require a value to be present.
!. and ![...] follow the same shapes as ?. and ?[...], but they
raise ValueError if the value to the left is None instead of
returning None.
def required_residence_name(person: ?Person) -> str:
return person!.residence!.name!.upper()
This returns str, not ?str. If person, residence, or name is
None, evaluation stops and raises ValueError.
You can mix ? and ! in the same chain. Later steps still see the
result of earlier ones, so a later ! will raise if an earlier step
produced None.
def first_port_required(config: ?dict[str, list[int]]) -> int:
return config!.get("ports")![0]
def first_port_or_none(config: ?dict[str, list[int]]) -> ?int:
return config?.get("ports")?[0]
Use forced unwrapping when absence indicates a bug or broken invariant and execution should stop immediately.
Forced unwrapping is for invariants: states that should already have been proven by surrounding logic. It is the right tool when a missing value means the program state is wrong, not when absence is still part of the normal control flow.
Errors and exceptions
Use exceptions when something is wrong and the current path should stop.
Raise an exception with raise, and handle it with try and except.
def parse_port(text: str) -> int:
port = int(text)
if port < 0 or port > 65535:
raise ValueError("port must be between 0 and 65535")
return port
actor main(env):
for text in ("8080", "70000"):
try:
port = parse_port(text)
except ValueError as e:
print("invalid input:", e)
else:
print("port:", port)
env.exit(0)
Use an exception for a real error. If "no result" is expected and normal, an optional value is often a better fit.
try structure
A try statement can contain these parts:
- try for the code that may fail
- except for handling specific exception types
- else for code that should run only when nothing failed
- finally for cleanup that should happen either way
try:
value = parse_port("9000")
except ValueError as e:
print("bad input:", e)
else:
print("ready to use:", value)
finally:
print("done")
except runs only for matching exceptions. else runs only when the
try block completed without raising. finally runs whether the try
block succeeded or failed.
Keep try blocks narrow so it stays obvious which
operation can fail.
Catch specific exceptions before broader ones, and keep exception handling close to boundaries such as input parsing, file access, network calls, and other integration points. Inside core logic, prefer domain values or optionals when the situation is expected rather than using exceptions as routine branching.
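For example, guarding only the parsing step (reusing parse_port from above) keeps the failure point obvious, while the fallback logic stays out of the try block:

```acton
actor main(env):
    raw = "70000"
    port = 8080                      # default, used when parsing fails
    try:
        port = parse_port(raw)       # only this call can raise here
    except ValueError:
        print("falling back to default port")
    print("using port:", port)
    env.exit(0)
```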
Modeling data and interfaces
Once functions and collections stop expressing a problem clearly, you usually need a more explicit model.
In Acton, that usually means moving through a simple progression:
- tuples for small anonymous values
- named tuples for small structured values with readable fields
- classes for values with methods or invariants
- protocols for shared behavior across unrelated types
Start with a tuple when the value is small and anonymous. Move to a named tuple when the shape deserves field names but not behavior. Move to a class when the data needs methods, a constructor, or rules that should stay close to the value. Use a protocol when the real question is whether several different types can be used in the same place.
Use this section when you need to decide how a domain concept should be represented, how an object becomes valid, and how different types can share the same behavior without sharing the same class.
Classes and protocols solve different problems. Classes define the shape and lifecycle of a value. Protocols define an observed interface that multiple concrete types can satisfy. That choice affects API shape, initialization, dispatch, and type inference, not just code organization.
Classes and objects
Classes let you name a concept, keep its data together, and attach the behavior that belongs with that data. If a tuple or named tuple starts to feel too anonymous, a class is usually the next step.
Use a class when a value should have:
- a clear name instead of a bundle of positions
- related pieces of data that travel together
- methods that operate on that data
- construction rules or invariants that matter to callers
class Circle(object):
def __init__(self, radius):
self.radius = radius
def diameter(self):
return self.radius * 2
actor main(env):
circle = Circle(3.14)
print(circle.diameter())
env.exit(0)
What a class gives you
In the example above:
- radius is an attribute stored on each Circle
- diameter() is a method that uses that attribute
- self refers to the object the method is running on
If a tuple starts to feel anonymous or unclear, that is usually a
sign that the value wants to become a class. Inside methods,
self is the current object, so self.name means
"this object's name".
Attributes are often introduced by assignments in __init__, but you
can also declare them explicitly in the class body when the shape should
be obvious up front.
class Person(object):
name: str
def __init__(self, name, age):
self.name = name
self.age = age
Class initialization
Use __init__ to establish a valid object. Before self escapes, every
required attribute must already be set.
Read the rule as two steps:
- Build the object locally.
- Let self out only after construction is complete.
Think of __init__ as the boundary between "not ready"
and "ready". If another function needs the object, wait until every
required field is assigned.
self escapes when you:
- pass self to another function
- call a method on self
- capture a bound method such as self.on_event
- return self
- store self somewhere that outlives the constructor
The important invariant is not "all assignments happen early" but
"self must not escape before the object is fully
initialized". Once you read the rules that way, the branch, loop,
callback, and base-class cases all follow the same rule.
Build the object first
During construction, use local variables for intermediate values and assign to attributes once the values are ready. If you read from an attribute before assigning it, the initialization order is wrong.
class Config(object):
def __init__(self, base_value: int):
self.base = base_value
self.doubled = self.base * 2
self.quadrupled = self.doubled * 2
Let self out last
Once the object is complete, you can call helper methods or register the instance with the rest of the system.
class BankAccount(object):
def __init__(self, owner: str, initial_deposit: float):
self.owner = owner
self.balance = initial_deposit
self.transaction_log = []
self.log_transaction("Account opened")
register_account(self)
if initial_deposit > 10000:
flag_for_review(self)
Control flow
Branches and loops are fine as long as every normal path leaves the object complete.
Branches
Conditional branches work when every branch that completes normally
initializes the same required attributes. Branches that raise
exceptions do not need to finish construction.
class Rational(object):
num: int
denom: int
def __init__(self, num: int, denom: int):
if denom == 0:
raise ValueError("Denominator cannot be zero")
if denom > 0:
self.num = num
self.denom = denom
else:
self.num = -num
self.denom = -denom
Try/except
try/except works the same way. The else branch is part of the
normal path, so it must also leave the object initialized.
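A sketch with a hypothetical Settings class, assuming int(raw) raises ValueError on malformed input as in the earlier parse_port example. The except branch assigns a fallback, so every normal path leaves self.port set:

```acton
class Settings(object):
    port: int
    def __init__(self, raw: str):
        try:
            parsed = int(raw)
        except ValueError:
            parsed = 8080        # fallback keeps construction on a valid path
        self.port = parsed       # assigned on every normal path
```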
Loops
Loops are fine as long as they do not leak self before the object is
ready.
class Example(object):
def __init__(self, data: list[int]):
total = 0
for item in data:
total += item
self.values = data
self.computed = total
for item in data:
self.process(item)
Common mistake
Passing a method reference like self.method also makes self escape,
even if the method is not called immediately.
class Handler(object):
callback: Callback
data: int
def __init__(self):
self.callback = Callback(self.on_event)
self.data = 42
def on_event(self):
pass
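One way to restructure this is a sketch that makes the field optional, so the object is fully initialized before self is allowed to escape into the Callback:

```acton
class Handler(object):
    callback: ?Callback
    data: int
    def __init__(self):
        self.data = 42
        self.callback = None                      # object complete here
        self.callback = Callback(self.on_event)   # self may escape now
    def on_event(self):
        pass
```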
Parent classes
If a base class owns state, initialize that state before exposing the
derived object. Call the parent __init__ as part of your own
construction.
class Account(object):
account_id: str
created_date: str
def __init__(self, account_id: str):
self.account_id = account_id
self.created_date = current_date()
class BankAccount(Account):
owner: str
balance: float
def __init__(self, account_id: str, owner: str, initial_deposit: float):
Account.__init__(self, account_id)
self.owner = owner
self.balance = initial_deposit
Class inheritance
Inheritance lets one class specialize another.
Use it when there is a real "is a" relationship between the two types. The derived class inherits behavior from the base class and can add or override methods of its own.
Use inheritance sparingly. It couples representation, lifecycle, and method lookup, so a base class becomes part of the derived type's public contract. When the shared need is only behavior, a protocol is often a better fit. When the shared need is only state reuse, composition is often clearer.
class Shape(object):
def area(self) -> float:
raise NotImplementedError("subclasses must implement area()")
class Circle(Shape):
def __init__(self, radius):
self.radius = radius
def area(self):
return 3.14 * self.radius ** 2
class Square(Shape):
def __init__(self, side):
self.side = side
def area(self):
return self.side ** 2
actor main(env):
circle = Circle(3.14)
square = Square(3.14)
print(circle.area() + square.area())
env.exit(0)
In this example:
- Shape defines a common interface
- Circle and Square inherit from Shape
- each subclass provides its own area() implementation
- code written against Shape can still call area() on either value
Read class Circle(Shape): as "Circle is a kind of
Shape". If that sentence feels wrong, inheritance is usually the wrong
tool.
Protocols
Protocols describe behavior that different types can share.
Use a protocol when your main question is "what operations does this value support?" rather than "what class does it inherit from?"
protocol Processable[T]:
process : () -> T
class Message(object):
def __init__(self, text):
self.text = text
extension Message(Processable[str]):
def process(self):
return self.text.upper()
In this example:
- Processable[T] defines a capability
- Message is an ordinary class
- the extension says that Message implements that protocol
Inheritance says one class is a kind of another class. A protocol
says a type supports a certain behavior, whether or not there is any
inheritance relationship. Read
extension Message(Processable[str]) as "Message implements
the Processable[str] protocol".
Protocols are useful at API boundaries because they let you depend on a capability instead of a concrete class. Several unrelated types can offer the same behavior, and one type can offer several unrelated behaviors.
Protocols matter both for programming style and for type inference. They let you describe the shape of an API without committing to a concrete hierarchy. In Acton, protocols can also be implemented by extensions after a class is defined, which makes them useful for retrofitting shared behavior onto existing types.
Protocols in practice
protocol Printable:
print : () -> None
def render(item: Printable):
item.print()
Here, render only cares that the value can be printed. It does not
care which class the value comes from.
Protocols and generic constraints
Protocols also show up in generic type signatures.
def bigger[A(Ord)](a: A, b: A) -> A:
if a > b:
return a
return b
Here, A(Ord) means the type A must implement the Ord protocol so
that > is available.
Built-in protocols such as Ord, Hashable, Iterable, and Mapping
are documented in the reference section under Built-in
protocols.
Protocol method dispatch
Protocol dispatch is about choosing which protocol implementation to use.
For ordinary class methods, the answer is usually the method on the actual class. For protocol methods, the result depends on the type the program is observing at that point.
Here are two classes, the base class Point and the derived class
Point3D. Both implement Eq.
class Point(object):
def __init__(self, x: int, y: int):
self.x = x
self.y = y
extension Point (Eq):
def __eq__(self, other):
return self.x == other.x and self.y == other.y
class Point3D(Point):
def __init__(self, x: int, y: int, z: int):
self.x = x
self.y = y
self.z = z
extension Point3D (Eq):
def __eq__(self, other):
return self.x == other.x and self.y == other.y and self.z == other.z
def comparator(a: Point, b: Point) -> bool:
return a == b
actor main(env):
p1 = Point3D(1, 2, 3)
p2 = Point3D(1, 2, 4)
print(p1 == p2)
print(comparator(p1, p2))
env.exit(0)
The first comparison uses Point3D as the observed type, so it uses the
Eq implementation for Point3D.
The second comparison goes through comparator(a: Point, b: Point), so
the observed type has been forced to Point. That means protocol
dispatch uses the Eq implementation for Point, which ignores z.
If protocol dispatch feels surprising, first ask: what type is the program actually seeing here? If you annotate values as a base type too early, you can force dispatch to use the base-type implementation.
In this case, the better API is usually a generic one that keeps the full type:
def generic_comparator[A(Eq)](a: A, b: A):
return a == b
This is one reason generic constraints such as [A(Eq)]
often produce better behavior than prematurely forcing values into a
base-class type. Protocol dispatch follows the observed type, so type
annotations, container element types, and narrowing steps can all change
behavior without changing the value itself.
The same issue can appear when values are stored in a collection typed as the base class:
actor main(env):
ref_point = Point3D(1, 2, 4)
p1 = Point3D(1, 2, 3)
p2 = Point3D(1, 2, 4)
my_points: list[Point] = [p1, p2]
for point in my_points:
if point == ref_point:
print("Found the reference (compared as Point)", point)
if isinstance(point, Point3D) and point == ref_point:
print("Found the reference (compared as Point3D)", point)
Here, isinstance narrows the observed type back to Point3D, so the
more specific protocol implementation is used again.
What to watch
- If you annotate values as a base type too early, protocol dispatch can use the base-type implementation.
- If you want behavior to follow the concrete value, keep the type generic for as long as possible.
- If you need a more specific implementation after narrowing, use the narrower type again before the protocol method call.
Organizing code
As a program grows, you need more than a single file full of functions. The goal is not to create a deep tree of files. The goal is to make the code easier to read, change, and import.
Start by grouping related code together, then split it when it begins to serve more than one purpose. A practical first cut is often one module for parsing, one for domain logic, and one for I/O or startup code. Keep local modules, project metadata, and external dependencies in one clear path rather than treating them as separate puzzles.
You do not need a deep hierarchy early on. Small modules with clear names are usually easier to work with than a large set of thin files with vague responsibilities.
This section connects the local source tree, the project file, and the package graph. Use it together with Projects and Package Management.
In Acton, a module boundary is more than a foldering choice. Because module-level bindings are constant, a module behaves much more like a namespace and API surface than like an object with hidden mutable runtime state. That means a good split changes both how people read the program and how the compiler sees it.
A coherent module split narrows imports, isolates reasons to change, and makes the public surface smaller. It also improves the mechanical side of the toolchain: discovery, type checking, caching, and API documentation all work in terms of modules. Once a file becomes a real boundary that other code imports on purpose, its top-level names and docstrings stop being decoration and start becoming part of the module's interface.
This section is about the everyday structure around your code:
- how to split code into modules
- how to import names
- how naming conventions make code easier to read
Read next:
Modules
Modules let you split a program into smaller named units. In a project,
modules live under src/, and subdirectories become part of the module
name. For example, src/a/b.act is imported as import a.b.
See Projects for how acton build discovers those files
and Package Management for external
dependencies that live alongside local modules.
Use modules to group code by responsibility. A good module has a clear job. For example:
- parsing and validation
- domain logic and shared types
- I/O, network access, or other side effects
- startup and orchestration
Start with one module per clear topic. If a file starts mixing unrelated ideas, move one topic into its own module. Keep the project tree and the module tree aligned so the file layout stays easy to follow.
Import forms
Use import when you want the module name itself:
- import time
- import time as timmy
Use from ... import ... when you want specific names directly:
- from time import now
- from time import now as rightnow
import time
import time as timmy
from time import now
from time import now as rightnow
actor main(env):
print(time.now())
print(timmy.now())
print(now())
print(rightnow())
env.exit(0)
Keep imports explicit, and use aliases only when they improve readability or avoid a name clash.
Module-level code
Module-level names are constants. Mutable program state belongs in actors, not in modules.
That means:
- helper functions and constants fit naturally at module level
- mutable variables do not
- program startup logic should live in actors such as main
default_timeout = 5
def timeout_seconds():
return default_timeout
Modules can also carry docstrings. Because module-level bindings are constant, importing a module is closer to importing a namespace of definitions than shared mutable runtime state.
That is why module boundaries work best when they are stable and purposeful. Prefer names that explain the boundary directly, and keep the module surface small unless the file is intentionally acting as a public API.
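A small module sketch, assuming Python-style module docstrings since the syntax follows Python here:

```acton
"""Helpers for request timeouts.

Module-level bindings are constants, so this module is a
namespace of definitions, not shared mutable state.
"""

default_timeout = 5

def timeout_seconds():
    return default_timeout
```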
How to split code
Split a module when it starts doing too many different jobs.
Good reasons to make a new module include:
- a group of functions is reused from several places
- a feature has a clear boundary, such as parser, storage, or protocol
- one part of the code changes for different reasons than the rest
- the file has grown enough that imports and names are hard to scan
Avoid creating modules just to make a tree look deep. A short, direct module path is usually easier to work with than a very fine-grained one.
Naming
Acton has naming rules and conventions. Some are enforced by the compiler, and some are conventions that keep larger programs readable.
Functions
Use lower-case words with _ between them.
parse_user, load_config, send_report
Function names should describe what the function does.
Actors and classes
Use PascalCase with two or more alphanumeric characters.
HttpServer, OrderBook, FileCache
Do not use a single upper-case letter for a class or actor name. Those names are reserved for type variables.
Type variables
Type variables use a single upper-case letter, optionally followed by digits.
A, T1
Use them for generic code, not for ordinary domain concepts. If a name describes a real thing in the program, make it a normal type or actor name instead.
Modules and files
Module names come from file paths, so filename choice matters. Use
short, lower-case names and let the import path reflect the structure of
src/. For example, src/a/b.act is imported as import a.b.
Naming is part of API design. In Acton, a module name, an imported symbol, and a type name often appear together, so keeping them short and predictable reduces noise in the code. This matters even more once a project has several modules and cross-module dependencies.
Private names
A leading _ is the usual marker for implementation details that
should not be treated as part of the public surface. Use it for names
that are only meant to be used inside one module or actor.
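A minimal sketch of the convention, with a hypothetical internal constant:

```acton
_max_retries = 3                 # internal detail of this module

def fetch_attempts():
    # Public function; callers should not depend on _max_retries directly.
    return _max_retries
```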
Practical guidance
Good names make code easier to split into modules, and they make imports easier to read. If a name feels awkward at the call site, it is usually worth changing before the code grows.
- Prefer names that describe what a thing is or does.
- Use the same word for the same concept across modules.
- Avoid abbreviations unless they are standard in your domain.
- Make helper names specific enough that call sites read naturally.
Working with types
Acton is statically typed. The compiler checks types when your program is compiled, but it can infer many of them for you. In practice, that means you usually write the shape of the code and let the compiler fill in the rest.
This section explains how to work with that type system deliberately: how to read inferred signatures, when to write explicit annotations, and how generics, constraints, and effects show up in real code.
Acton does not have Rust-style lifetimes. Mutable state lives inside actors, access to the outside world is passed explicitly as capabilities, and ordinary object lifetime is handled by the runtime. See Actors, Lifetime, and Environment and capabilities for the Acton model.
Optional types are introduced earlier under Missing values and failures because they come up quickly in everyday code, but they are also part of the broader type story.
def describe(value):
    if value > 0:
        return "positive"
    return "zero or negative"

actor main(env):
    n = 3
    text = describe(n)
    print(text)
    env.exit(0)
In this example, Acton can infer the types without any annotations.
n is an int, describe returns str, and text is also a str.
That is often the most pleasant way to write small programs: start with
plain code, then add annotations only where they improve clarity.
You do not need to annotate every value. A small helper, a local temporary, or a short private function can often stay inferred without hurting readability. Add types where they explain intent, not as decorations.
Reading inferred signatures
Use --sigs when you want the compiler to show the types it inferred.
acton types.act --sigs
Acton's inferred signatures include more than argument and return
types. You will see generic binders, protocol constraints, optional
types, tuple rows, and effect markers. Reading a signature means
reading both the data shape and the callable behavior. The compiler's
output is often the clearest summary of what an API actually promises,
so --sigs is a useful first step before deciding whether to
make that promise explicit.
This is especially useful when:
- you want to understand what type a helper function ended up with
- you are learning how generic constraints are written
- you want to turn an inferred signature into an explicit API
- you need to see effect markers or optionality in a callable type
What this section covers
- Explicit types when you want to state an API, narrow a local value, or make inference easier to follow
- Generics when one definition should work for many concrete types without losing safety
- Effects (pure, mut, proc, action) when you want to treat side effects as part of the type information
- Optionals when you want the deeper type-system view of values that may be absent
Explicit types
Acton can infer many types, but explicit annotations are still useful. They are the way to say "this value must stay this shape" when the code would otherwise leave room for interpretation.
def repeat(text: str, count: int) -> str:
    return text * count

describe_port : (int) -> str
def describe_port(port):
    return "port " + str(port)

actor main(env):
    port: int = 9000
    print(repeat("ha", 3))
    print(describe_port(port))
    env.exit(0)
Read name: Type as "name has type Type" and
-> Type as "returns Type". A separate signature line can
be easier to scan when the implementation is long or the signature is
part of the public surface of a module.
You can annotate:
- function parameters
- return values
- local names
- class and actor attributes
- separate signature lines for named APIs
- effect markers on callables
When explicit types help
Write annotations when:
- you want an API to be clear to readers
- inference becomes hard to understand
- you want the compiler to reject the wrong shape earlier
- a value could otherwise be inferred more loosely than you want
- you are defining a reusable helper and want its contract visible
A useful default is to annotate the things other people will read first: public functions, methods, actor fields, and data structures. Leave short local expressions inferred unless the type is surprising or important to the code around it.
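As a sketch of that default (the names are hypothetical): annotate the public helper, and leave the short locals inferred:

```acton
# Public helper: the written signature is the documented contract.
def format_price(amount: int, currency: str) -> str:
    # Local temporaries stay inferred; their types are clear from context.
    label = currency + " "
    return label + str(amount)
```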
Explicit annotations also control generalization. If inference would make a helper more polymorphic than you want, a written signature can pin the API down and keep later changes from widening it by accident. That is especially useful for callback types, actor-facing entrypoints, and shared utility code where the signature is the real contract.
Generics
Generics let you write code that works for many types without throwing away type safety. They are how you describe a pattern once and reuse it for every concrete type that fits the pattern.
A simple generic function
def first[A](items: list[A]) -> A:
    return items[0]

actor main(env):
    print(first([1, 2, 3]))
    print(first(["a", "b", "c"]))
    env.exit(0)
[A] introduces a type variable named A. In this function:
- items is a list[A]
- the return value is also A
- calling first on a list[int] returns an int
- calling it on a list[str] returns a str
- the compiler checks each call with the concrete type it sees there
If the brackets feel abstract at first, read them as "for any type
named A". Each call picks a concrete type for A, so
first([1, 2, 3]) uses int while
first(["a"]) uses str. That is how one
definition stays reusable without giving up compile-time checking.
Constrained generics
Sometimes a generic function needs more than "any type". It may require that the type supports some operation or protocol.
def bigger[A(Ord)](a: A, b: A) -> A:
    if a > b:
        return a
    return b
Here, A(Ord) means A must implement the Ord protocol so the
function can compare a and b. The constraint is part of the type
information, not an implementation detail. Without it, the compiler
would not know that > is valid for A.
Acton can often infer generic parameters and protocol constraints for
you. Using --sigs is a good way to see what the compiler
understood before you decide whether to write the generic signature
explicitly. That matters when a helper starts being reused widely,
because the inferred constraints determine both flexibility and dispatch
behavior.
Generics on classes
Built-in collection types use the same syntax.
class list[A] (object):
    ...
That means a list[int] and a list[str] have the same generic shape
but different element types.
The same idea applies to your own classes and records. If a container or wrapper stores values without caring which concrete type they are, make that type parameter explicit.
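A user-defined generic container might look like this (Box is a hypothetical class, sketched after the built-in list[A] shape):

```acton
class Box[A] (object):
    item: A
    def __init__(self, item: A):
        self.item = item
    def get(self) -> A:
        return self.item
```

A Box[int] and a Box[str] then share one definition while keeping their element types checked at every call site.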
When to add constraints
Add a constraint when a type parameter must support a particular operation:
- comparison, as in Ord
- equality or hashing, if the code depends on it
- a protocol, if the function calls methods from that protocol
Do not add a constraint just because it looks formal. The compiler only needs the bounds that the body actually uses.
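As a sketch, constrained helpers compose like ordinary functions. Here largest is a hypothetical name built on the bigger example above; each call site picks a concrete Ord type:

```acton
def largest[A(Ord)](a: A, b: A, c: A) -> A:
    # Ord is the only bound the body needs: it just compares values.
    return bigger(a, bigger(b, c))

actor main(env):
    print(largest(3, 1, 4))
    print(largest("pear", "apple", "fig"))
    env.exit(0)
```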
Effects (pure, mut, proc, action)
Acton tracks effects as part of function and callable typing. An effect marker tells you what the callable is allowed to do, so it is part of the type information you read and design with.
The four effect markers are:
- pure: no side effects
- mut: may update state
- proc: functions that call actors
- action: action or callback style code used heavily in actor APIs
pure def square(x: int) -> int:
    return x * x

class Counter:
    value: int
    def __init__(self):
        self.value = 0
    mut def next(self) -> int:
        self.value += 1
        return self.value

actor Greeter():
    def hello(msg):
        print(msg)

proc def show_square(g: Greeter, x: int) -> None:
    g.hello("square: " + str(square(x)))

actor main(env):
    counter = Counter()
    greeter = Greeter()
    print("counter:", counter.next())
    n = square(7)
    print("n:", n)
    show_square(greeter, 9)
    action def stop() -> None:
        env.exit(0)
    after 0.1: stop()
In this example:
- square is pure
- Counter.next is mut
- show_square is proc because it calls an actor
- stop is an action
- the effect markers are part of the callable signatures, not notes
Effect inference
If you omit the effect marker, Acton infers it from the body.
Use explicit annotations when you want an API to promise purity, make mutation clear, or document that a callback or actor-facing entrypoint has a particular effect. The effect is part of the contract just like its argument and return types.
A useful first habit is to keep calculations pure and push printing,
I/O, and actor orchestration into a smaller layer of effectful code.
Read pure as "calculation only", mut as "may
update state", proc as "calls actors", and
action as "actor action or callback".
That division makes code easier to reason about. If a helper is pure, you know it only depends on its inputs. If it is mut or proc, you know it may change state or interact with actors, which makes its API more specific.
When the four effects show up
- pure is common for ordinary calculations and helpers that should be easy to reuse anywhere
- mut is common on methods that update state or work with local mutable data
- proc is common for functions that orchestrate work by calling actors
- action often appears in actor methods, timers, cleanup hooks, and callback types such as action(str) -> None
Effects often explain why a signature looks the way it does. A function may have a simple data type and still be effectful, and the effect marker is what tells you whether it stays in pure computation or crosses into stateful or actor-driven work.
Effects also appear in inferred signatures. As your code gets more generic or callback-heavy, those effect annotations become part of how you read and design APIs. In Acton, purity is a real constraint on what a function may call, so effect annotations are part of the contract, not just commentary.
Practical guidance
- Prefer pure for deterministic, test-friendly core logic.
- Use mut when a callable really updates state.
- Use proc for functions that orchestrate work by calling actors.
- Expect action in actor APIs, timers, cleanup hooks, and callbacks.
- Keep pure logic separated from actor-driven orchestration code.
- Read the effect marker together with the argument and return types.
A useful mental model is pure <= mut <= proc, with
action <= proc on a separate branch. That is why pure
code can be used where a mutating or procedural callable is accepted,
and why actions participate in the broader effect system without being
the same thing as ordinary sequential procedures. When you design higher
order APIs, the effect on the callback is as important as its argument
types.
In practice, this means you should choose the weakest effect that describes the callable accurately. That keeps more code reusable and leaves the effect system useful as the codebase grows.
Actors & concurrency
Actors are the center of Acton's model for mutable state and concurrency. If you want to understand how Acton programs stay structured as they grow, start here.
Start with the basic actor main(env) pattern. The core
model is simple: an actor owns state, handles one message at a time,
and communicates with other actors by calling their methods.
This is also the closest Acton replacement for Rust-style lifetime thinking: actors own mutable state, capability references are passed explicitly, and the runtime manages ordinary object lifetime.
The important guarantee is local seriality: within one actor, state
changes are observed as if messages were handled one at a time. That is
why var can hold mutable state without introducing the kind
of shared-memory races that make threaded code hard to reason about.
The actor boundary is doing real work here: mutation stays local, while
concurrency appears only in the relationships between actors.
Viewed that way, many Acton features are the same model in different
forms. Sync and async calls, await, delayed callbacks with
after, and lifecycle hooks all control when new work is
placed into an actor's mailbox and when another actor is allowed to make
progress. What Acton does not promise is one global timeline across the
whole program. Ordering is local to each actor unless your own protocol
establishes something stronger.
This section covers:
- how actors own state and methods
- how the root actor starts a program
- how actors talk to each other
- how delayed work and cleanup fit into actor code
- how concurrency works without shared mutable memory
Read these pages next:
- Actors
- Root Actor
- Lifetime
- Attributes
- self
- Actor methods
- Sync Method calls
- Async Method calls
- Control flow in an async actor world
- after / sleep
- Concurrency
- Cleanup
Effect annotations such as pure, mut, proc, and action are
documented under Working with types.
Actors
Actors are Acton's primary unit for state and concurrency.
If you are looking for the Acton answer to Rust-style lifetime concerns, this is it: actors own mutable state, capabilities are passed explicitly, and the runtime handles ordinary object lifetime.
If classes feel like objects that own data, actors are a useful way to think about objects that own state and participate in concurrent work. The actor body runs once when the actor starts; its methods run later in response to calls.
An actor combines:
- private state owned by one actor
- sequential execution inside that actor
- method-based communication with other actors
- a place to put mutable program state
actor Greeter(name):
    print("starting", name)
    def hello():
        print("hello from", name)

actor main(env):
    greeter = Greeter("Acton")
    await async greeter.hello()
    env.exit(0)
Code in the actor body runs once when the actor is created. That body is where initialization happens, and it may define methods that operate on the actor's private state.
Actors are also where several Acton-specific language pieces meet:
var, await, async,
after, capability passing, and effectful APIs. The main
guarantee is actor-local sequentiality: state changes happen one
handled message at a time inside that actor.
What to read next
- Root Actor for the entrypoint pattern
- Lifetime for how actors stay alive
- Attributes and self for actor state
- Actor methods, Sync Method calls, and Async Method calls for communication
- Concurrency, Control flow in an async actor world, and after / sleep for concurrent behavior
- Cleanup for best-effort finalization
Root Actor
Every Acton program has a root actor. A binary executable must have
one, and by default Acton uses an actor named main if it finds one in
the source file. You can choose a different root with --root.
Given this Acton program:
actor main(env):
    print("Hello World!")
    env.exit(0)
The following acton commands will all produce the same output.
acton hello.act
acton hello.act --root main
acton hello.act --root hello.main
The first invocation relies on the default rule of using an actor named
main. The second explicitly selects main as the root actor. The
third uses a qualified name that includes both the module name and the
actor name. Qualified names are useful when a project contains several
actors that could otherwise be mistaken for the entrypoint.
A normal Acton program consists of many actors arranged in a tree. The root actor is at the top of that tree and starts the rest of the program directly or indirectly. The Acton runtime bootstraps the root actor and passes it the process-level capability object.
The root actor is the boundary where process-level capabilities
enter the actor world. That is why main(env) keeps showing
up throughout the guide: it is both the conventional entrypoint and the
place where outside-world access begins.
Any executable Acton program must have a root actor. Acton libraries that are imported into another program do not.
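As a sketch (the file and actor names here are hypothetical), a source file whose entrypoint actor is deliberately not named main can still serve as the root by selecting it explicitly:

```acton
# server.act -- the entrypoint actor is not named 'main'
actor serve(env):
    print("serving")
    env.exit(0)
```

Compile by naming the root, following the --root form shown above:

acton server.act --root serve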
Lifetime
In many languages, reaching the end of main ends the program. Acton is
different. Actors stay alive as long as another actor keeps a reference
to them, and they sit idle until they receive more work. Actors without
references can be garbage collected. The root actor stays alive until
the program exits.
That means a program like this will keep running even though the root
actor body reaches its end. You must explicitly tell the runtime to stop
the actor world with env.exit().
Actor lifetime is reference-based. Non-root actors disappear when no
live actor keeps a reference to them, while the root actor persists
until the program exits. This is why shutdown in Acton is explicit
rather than an implicit fall-through from the end of main.
Source:
actor main(env):
    print("Hello world!")
Compile and run:
acton noexit.act
Output:
$ ./noexit
<you will never get your prompt back>
Actor Attributes & Constants
Actors usually keep their state in top-level attributes. Use var for
mutable actor-local state and a plain binding for a constant.
The split between var and plain top-level bindings is
one of the key boundaries around state. Mutable actor data stays
private to the actor, while constant attributes can be read through an
actor reference without shared mutable memory.
Source:
actor Act():
    var something = 40  # private actor variable attribute
    fixed = 1234        # public constant
    def hello():
        # Local state is accessed directly inside the actor.
        something += 2
        print("Hello, I'm Act & value of 'something' is: " + str(something))

actor main(env):
    actor1 = Act()
    await async actor1.hello()
    print("Externally visible constant: ", actor1.fixed)
    # This would give an error, try uncommenting it
    # print(actor1.something)
    env.exit(0)
Compile and run:
acton attrs.act
Output:
Hello, I'm Act & value of 'something' is: 42
Externally visible constant: 1234
Without var, an actor attribute is a constant. Constants are safe to
share with other actors because they do not expose mutable state.
self
self is implicitly bound inside an actor. Use it when you need to pass
a reference to the current actor to another actor. Inside the actor,
self is usually unnecessary.
self inside an actor is about identity and
communication, not ordinary attribute access. Passing
self hands out a callback path to the current actor, which
makes it central to request/reply and subscription-style protocols.
Source:
actor Pinger(ponger: Ponger):
    def pong(message: str):
        print("Pinger: Got pong:", message)
    print("Pinger: Sending ping to Ponger...")
    ponger.ping(self)

actor Ponger():
    def ping(pinger: Pinger):
        print("Ponger: Got ping!")
        # Call back to the pinger
        pinger.pong("Hello from Ponger!")

# Usage
actor main(env):
    ponger = Ponger()
    pinger = Pinger(ponger)
    env.exit(0)
Output:
Pinger: Sending ping to Ponger...
Ponger: Got ping!
Pinger: Got pong: Hello from Ponger!
In this example, Pinger passes self to Ponger when pinging, allowing the ponger to send a pong back.
Actor methods
Actor methods are declared inside an actor with def.
An actor method runs in the context of that actor and can access its private state.
An actor method is like a normal method with one extra idea: it runs
inside an actor that owns its own state and message flow. That is why a
method like compute can use secret without
receiving it as an argument each time.
Local actor methods can call each other by name. Methods on other actors are called through that actor's reference.
Because actors are sequential, local function calls and local method calls run one step at a time inside that actor. Concurrency appears when actors call each other. Whether a remote call is sync or async changes ordering and waiting behavior, not just syntax.
def multiply(a, b):
    return a * b

actor main(env):
    var secret = 42
    def compute(a):
        return multiply(a, secret)
    result = compute(3)
    print("Result:", result)
    env.exit(0)
Actor methods are public by default. Calls to other actors can be synchronous or asynchronous.
Sync Method calls
Acton lets one actor call another actor synchronously when it needs a result back immediately.
A method call is synchronous when the caller uses the return value.
A synchronous actor call feels like an ordinary function call: ask for a result, wait, then continue.
actor Calculator():
    def square(n):
        return n * n

actor main(env):
    calc = Calculator()
    answer = calc.square(7)
    print("The answer is", answer)
    env.exit(0)
Here, main waits for calc.square(7) to finish and then continues
with the returned value.
Sync calls suspend the current actor until the other actor replies. As systems grow, it is usually better to keep sync chains short and push longer work into asynchronous flows.
When to use sync calls
- use them when a result is needed right away
- prefer them for small, direct requests
- be careful with long chains of sync actor-to-actor calls
Async Method calls
Async calls let an actor tell another actor to do something without waiting for a return value.
A method call is asynchronous when the caller does not use the return value.
An async call is closer to "send this message" than "call this function and wait".
actor Worker(name):
    def say(msg):
        print(name, "received:", msg)

actor main(env):
    w1 = Worker("one")
    w2 = Worker("two")
    w1.say("hello")
    w2.say("world")
    def stop():
        env.exit(0)
    after 0.1: stop()
Here, main sends two messages and keeps going. It does not wait for
either worker to return a value.
When to use async calls
- use them for fire-and-forget work
- use them when another actor should react independently
- use them to avoid blocking the current actor on a result
Control flow in an async actor world
Acton programs do not have only one simple top-to-bottom control flow.
Once an actor exists, it keeps living and reacting to incoming method calls until it stops. That means control flow is closer to "react to messages over time" than to "run one main function and finish immediately".
If you are new to this style, start with one actor and a few methods. Then add async calls and delayed work once the basic message flow makes sense. Inside one actor, each message still runs sequentially; the new part is thinking about what messages may arrive next.
This reactive model means message order belongs to each actor's own mailbox and handling, not one global timeline for the whole program. That same boundary is why callbacks, async calls, and actor lifetime fit together in Acton.
A mental model
It helps to think in terms of actors reacting:
- an actor receives a method call
- it handles that work sequentially
- it may call other actors
- it may schedule more work with after
- then it becomes idle again until the next message arrives
This is why actor code often looks different from ordinary function code. The interesting question is usually not just "what happens next?", but also "what happens later, and in response to what message?"
after and sleep
Use after when work should happen later.
after 1: tick() tells the runtime to schedule tick() to run about
one second later. Meanwhile, the actor is free to handle other
messages.
If you would normally reach for sleep() in another
language, first ask whether what you really want is "run this later".
In Acton, that usually means after. Read
after 1: tick() as "schedule this call for later", not
"pause here for one second".
actor main(env):
    var count = 0
    def tick():
        print("tick", count)
        count += 1
        if count >= 3:
            env.exit(0)
        else:
            after 1: tick()
    tick()
after is the normal tool for:
- timeouts
- retries
- pacing repeated work
- scheduling a follow-up action
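These patterns share one shape: handle a message, then schedule the follow-up. A bounded retry might be sketched like this (the actor and method names are hypothetical):

```acton
actor Retrier():
    var attempts = 0
    def try_once():
        attempts += 1
        print("attempt", attempts)
        if attempts < 3:
            # Schedule the next attempt; the actor stays responsive meanwhile.
            after 1: try_once()
    try_once()
```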
Why not sleep?
Normal actor code should avoid blocking waits. A delayed callback with
after lets the actor go idle and react to other messages in the
meantime.
There is a low-level sleep in the RTS for debugging and runtime
work, but it is not the idiomatic control tool for actor programs.
after keeps the actor schedulable, and the callback sees
whatever state the actor has when that later message is handled.
Actor concurrency
Multiple actors can make progress concurrently. In this example, Foo and Bar both keep ticking while the root actor schedules shutdown.
Concurrency here means the actors can make progress independently, not that their outputs follow a fixed interleaving. The runtime may run them in parallel on multiple workers, but the semantic guarantee is still per-actor sequential execution rather than any global ordering between actors.
Source:
actor Counter(name):
    var counter = 0
    def periodic():
        print("I am " + name + " and I have counted to " + str(counter))
        counter += 1
        after 1: periodic()
    periodic()

actor main(env):
    foo = Counter("Foo")
    bar = Counter("Bar")
    def exit():
        env.exit(0)
    after 10: exit()
Compile and run:
acton concurrency.act
./concurrency
Output:
I am Foo and I have counted to 0
I am Bar and I have counted to 0
I am Foo and I have counted to 1
I am Bar and I have counted to 1
I am Foo and I have counted to 2
I am Bar and I have counted to 2
I am Foo and I have counted to 3
I am Bar and I have counted to 3
I am Foo and I have counted to 4
I am Bar and I have counted to 4
I am Bar and I have counted to 5
I am Foo and I have counted to 5
I am Bar and I have counted to 6
I am Foo and I have counted to 6
I am Bar and I have counted to 7
I am Foo and I have counted to 7
I am Foo and I have counted to 8
I am Bar and I have counted to 8
I am Foo and I have counted to 9
I am Bar and I have counted to 9
Cleanup / Finalization
It is possible to run special code when an actor is about to be garbage collected. This is mainly for actors that manage outside resources and need a best-effort final step when they become unreachable.
Define an actor action called __cleanup__ and the runtime will
schedule it when garbage collection notices that the actor is no longer
reachable. There is no hard guarantee when __cleanup__ will run, and
it can take more than one collection round before all finalizers are
handled.
Cleanup hooks are intentionally weakly timed. They are useful for best-effort resource cleanup at the boundary to the outside world, but they are not a substitute for explicit shutdown protocols when a system needs deterministic release or ordering.
Source:
actor Foo():
    action def __cleanup__():
        print("Cleaning up after myself...")

actor main(env):
    for i in range(20):
        Foo()
    a = 1
    for i in range(99999):
        a += i
    def _stop():
        env.exit(0)
    after 0.1: _stop()
Output:
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Cleaning up after myself...
Environment and capabilities
Programs that talk to the outside world need explicit access. In
Acton, that access comes through the root env actor and through
capability references that are passed around like any other value.
Most small programs start with env and end with
env.exit(0). That is normal. The important part is that
env is passed in explicitly. When a function needs access
to the outside world, give it the specific capability or environment
reference it needs instead of assuming that access is always there.
This chapter covers the practical side of that model:
- Security explains why access is explicit.
- Environment covers
env, arguments, variables, stdin, and terminal mode changes.
Use this chapter when you need to:
- read command line arguments
- inspect or change environment variables
- read from standard input
- handle interactive terminal input
- decide what authority a helper should receive
Capability design is part of API design. When a helper takes a wide environment reference or a broad outside-world capability, that choice becomes part of the helper's contract: callers must now trust it with everything that capability can do, not just the one operation the current implementation happens to use.
A narrower capability does more than look tidy. It limits authority, reduces the amount of code that must be audited when security matters, and makes substitution easier in tests or alternate runtimes. If a helper only needs to open a TCP connection, that is the capability it should receive. Anything wider increases coupling and makes accidental authority leaks more likely over time.
Security
Acton's security model starts from a simple rule: code can only use what it has a reference to.
Actors are isolated from each other. To call an actor, read from it, or otherwise interact with it, code must already hold a reference to that actor. There is no ambient authority hiding behind a module import or a global variable.
That is close to the object capability model.
A useful mental model is: no ambient authority. If code can reach something, that access had to come from somewhere concrete. In practice, give each function or actor only the references it actually needs. That keeps the code easy to reason about and makes accidental access paths harder to create.
Because there are no mutable globals, reachable state is either:
- local to the current actor
- reachable through references that were explicitly passed in
actor Vault():
    def read():
        print("secret")

actor Reader(vault):
    def show():
        await async vault.read()

actor main(env):
    vault = Vault()
    reader = Reader(vault)
    await async reader.show()
    env.exit(0)
Reader can call Vault only because the reference was passed in
explicitly. If the reference is not available, the access is not
available.
The key point is not only "there are no mutable globals". It is that authority itself becomes something code can pass, withhold, or narrow. If a function or actor never receives a filesystem, network, or process capability, it cannot perform those actions by accident or by hidden convention. That makes ordinary API boundaries double as security boundaries.
This is why explicit capability passing matters so much in Acton. A reference is not just a way to reach a value; it is also the way authority enters a piece of code. That applies equally to actor references inside the program and to outside-world capabilities such as files, networking, environment access, and terminal control.
Capabilities to access outside world
Any useful program eventually needs to interact with the outside world. That can mean reading files, opening sockets, or sending data to a remote host. In many languages those operations are always available to any code. In Acton they are explicit.
Things outside the actor world are represented by actors and accessed through capability references. A capability is a reference that grants permission for a specific kind of operation. Without the reference, the operation is not available.
For example, TCPConnection needs a TCPConnectCap to connect to a
remote host over TCP. The type system enforces that requirement. If the
right capability is not available, the code does not compile.
TCPConnectCap sits inside a capability hierarchy that starts at
WorldCap and narrows from there:
WorldCap >> NetCap >> TCPCap >> TCPConnectCap
The root actor, typically main(), takes an Env reference as its
first argument. env.cap is the root WorldCap capability for
accessing the outside world.
import net

actor main(env):
    def on_connect(c):
        c.close()
    def on_receive(c, data):
        pass
    def on_error(c, msg):
        print("Client ERR", msg)
    connect_cap = net.TCPConnectCap(net.TCPCap(net.NetCap(env.cap)))
    client = net.TCPConnection(connect_cap, env.argv[1], int(env.argv[2]),
                               on_connect, on_receive, on_error)
That structure matters because it lets the program choose how much authority to hand out. A deeply nested helper, or a dependency of a dependency, can only do what its received capability allows.
Restrict and delegate
When a function takes a capability argument, it should normally take
the narrowest capability it actually needs. If the code only needs to
open a TCP connection, pass TCPConnectCap. Do not pass WorldCap
just because it is available.
When you write a helper, ask what the helper really does. Give it only the capability needed for that work. If a library asks for a wider capability than the work requires, that is a design problem in the library.
The deeper design point is capability attenuation: code should pass along narrower powers than it originally received whenever possible. That keeps authority local, makes APIs easier to audit, and prevents a convenient helper from quietly becoming a wide ambient escape hatch into the outside world.
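A sketch of attenuation in practice (the helper name open_conn is hypothetical; the capability construction follows the example above): the root actor holds WorldCap but hands the helper only the TCP-connect slice.

```acton
import net

# Hypothetical helper: it can open TCP connections and nothing else.
proc def open_conn(cap: net.TCPConnectCap, host: str, port: int,
                   on_connect, on_receive, on_error):
    return net.TCPConnection(cap, host, port, on_connect, on_receive, on_error)

actor main(env):
    def on_connect(c):
        c.close()
    def on_receive(c, data):
        pass
    def on_error(c, msg):
        print("ERR", msg)
    # Narrow WorldCap step by step, then pass only the final capability on.
    cap = net.TCPConnectCap(net.TCPCap(net.NetCap(env.cap)))
    conn = open_conn(cap, "example.com", 80, on_connect, on_receive, on_error)
    env.exit(0)
```

The helper's signature now documents its entire authority: a reader can see at a glance that it cannot touch files, the environment, or anything else behind WorldCap.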
Capability-friendly interfaces
Capability-friendly APIs are explicit about their authority boundaries. If one part of a library logs to files and another part talks to a remote host, split those responsibilities or make the narrower paths easy to select.
The goal is not ceremony. The goal is to keep authority local and visible. A capability that is not passed in cannot be used, and a capability that is not passed on cannot escape further into the program.
Environment
The environment is your program's practical link to the outside world.
Most programs meet it first as the env argument passed to the root
actor.
actor main(env):
    print("args:", env.argv)
    env.exit(0)
You can think of env as the handle your program receives
for talking to the world around it. It is passed into main;
it is not a hidden global that every function can reach automatically.
If a helper only needs one small piece of that power, pass the smaller
piece instead of the whole environment.
From there, this chapter covers the most common environment tasks:
Use env when code needs process arguments, environment variables,
standard input, or terminal configuration. Pass it only to the code
that actually needs that access.
env is best understood as a bundle of process-level
capabilities, not as one ordinary argument. It carries authority over
things such as arguments, environment variables, standard input, and
terminal configuration. If that whole bundle gets threaded through many
layers, those layers quietly become coupled to process concerns even
when they only need one small part of it.
In larger programs, treat the root actor as the place where broad authority is received, then pass inward only the narrower capability or value a helper actually needs. That keeps APIs honest about what they depend on, makes tests easier to fake, and prevents a small utility from accidentally gaining much wider access to the outside world than its job requires.
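A minimal sketch of this pattern (the helper name greet is illustrative): the helper takes a plain str, so it cannot reach environment variables, stdin, or anything else env carries.

actor main(env):
    # Hypothetical helper: it depends on one value, not on env itself.
    def greet(name: str):
        print("Hello,", name)

    user = env.getenv("USER")
    if user is not None:
        greet(user)
    else:
        greet("world")
    env.exit(0)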
Environment variables
Environment variables are a common way to pass configuration into a
program. Acton lets you read, set, and unset them through env.
The string-based helpers are the most convenient:
- env.getenv(name) returns a str or None
- env.setenv(name, value) stores a string value
- env.unsetenv(name) removes a variable
They assume UTF-8 text, which is practical for most programs. When you need exact byte-level control, use the byte-oriented variants instead:
- env.getenvb
- env.setenvb
- env.unsetenvb
The string helpers are deliberately opinionated. They make the common
case easy, but they also make the text boundary explicit. If your code
needs to preserve the original bytes, avoid round-tripping through
str until you have chosen a decoding strategy yourself.
actor main(env):
    env_user = env.getenv("USER")
    if env_user is None:
        env_user = "unknown"
    print("User:", env_user)
    if env.getenv("FOO") is None:
        env.setenv("FOO", "bar")
    foo_env = env.getenv("FOO")
    if foo_env is not None:
        print("FOO:", foo_env)
    env.unsetenv("LANG")
    env.exit(0)
Output:
User: myuser
FOO: bar
Common patterns
Read a variable once, check whether it is missing, and choose a safe default. Do not assume configuration is always present.
Use environment variables for configuration values that are naturally textual, such as names, paths, flags, and addresses. If the value is binary or needs exact decoding rules, use the byte-oriented APIs and decode in your own code.
When a program sets or unsets variables, it is changing process state,
not just a local dictionary. Keep that in mind when passing env
through helpers.
Reading stdin input
Read from stdin by installing a handler with env.stdin_install. The
handler receives data as it arrives. In the common text case, the data
is decoded to str.
actor main(env):
    def interact(input):
        print("Got some input:", input)
    env.stdin_install(interact)
You can make the text decoding explicit by providing on_stdin,
encoding, and on_error.
When encoding is not set, Acton tries to discover the encoding from
LANG. If nothing useful is found, it falls back to UTF-8.
actor main(env):
    def interact(input):
        print("Got some input:", input)
    def on_stdin_error(err, data):
        print("Some error with decoding the input data:", err)
        print("Raw bytes data:", data)
    env.stdin_install(on_stdin=interact, encoding="utf-8",
                      on_error=on_stdin_error)
If the data is binary, or if you want to delay decoding, install a bytes handler instead.
actor main(env):
    def interact(bytes_input):
        # Decode only when this code is ready to decide how.
        print("Got some input:", bytes_input.decode())
    env.stdin_install(on_stdin_bytes=interact)
The important distinction is between a decoded text stream and a raw
byte stream. If the input protocol is truly textual, the string path is
usually right. If framing, binary payloads, or uncertain encodings
matter, keep the data as bytes until your own code decides
how to decode it.
Common patterns
Use the text callback for ordinary command line input, especially when the input is line-oriented or clearly textual.
Use the bytes callback when the input may contain binary data, partial fragments of multibyte characters, or a protocol with its own framing rules.
Treat decode errors as part of the interface. If the program expects text, decide what to do when the bytes do not decode cleanly instead of letting that decision hide inside a helper.
Interactive stdin
Interactive programs do not usually want line-buffered input. A text editor, a terminal UI, or a game often needs individual key presses as they happen.
By default, stdin is in canonical mode. That means the terminal buffers input and usually handles line editing before your program sees anything. If you want raw key presses, switch stdin to non-canonical mode.
actor main(env):
    def interact(input):
        print("Got some input:", input)
    # Set non-canonical mode so we get each key press directly.
    env.set_stdin(canonical=False)
    # Turn off terminal echo.
    env.set_stdin(echo=False)
    env.stdin_install(interact)
Canonical mode is the right default for ordinary command line tools. Non-canonical mode is for programs that need to manage the terminal themselves.
The runtime copies the terminal settings on startup and restores them on exit, so you do not need to restore echo manually in the common case.
Interactive stdin changes the terminal contract for the whole process, not just one helper function. That is why the runtime restores settings on exit for you, and why these programs should be careful and intentional about when they enter non-canonical mode.
Common patterns
Switch to non-canonical mode only when you need it, and keep that code close to the part that depends on raw input. That makes the terminal state easier to reason about.
Disable echo when raw input should not be shown back to the user, such as when reading passwords or handling single-key commands.
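Putting these patterns together, here is a minimal sketch of a single-key command handler (the handler name on_key and the choice of the q key are illustrative; set_stdin and stdin_install are the calls shown above):

actor main(env):
    def on_key(input):
        if input == "q":
            # The runtime restores the saved terminal settings on exit.
            env.exit(0)
        else:
            print("key:", input)
    # Raw key presses, not echoed back to the user.
    env.set_stdin(canonical=False)
    env.set_stdin(echo=False)
    env.stdin_install(on_key)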
Built-in & Standard Library
This part of the guide is reference material.
Acton has two closely related pieces here:
- built-in definitions that are always available, such as core types and built-in protocols
- standard library modules that you import explicitly, such as math and re
Built-in names are part of the language environment. Standard library
modules are brought into scope with import.
Use Built-in protocols for the protocols
defined in __builtin__, including Eq, Ord, Hashable,
Iterable, and Mapping.
This split matches the implementation structure as well: built-in
protocols and core types live in __builtin__, while module
APIs such as math live in separate standard library
modules.
Use Standard Library for imported modules.
Built-in protocols
Built-in protocols are defined in __builtin__.
They show up in generic constraints, operator behavior, and the APIs of built-in collections. This section is reference material for the protocols that come with the language.
Built-in protocol constraints are part of the type system, not just documentation. They affect which operators are available, which generic functions typecheck, and how protocol methods are resolved.
The built-in protocols are grouped here as:
- General protocols for iteration, comparison, operators, and hashing
- Numeric protocols for number-like types
- Collection protocols for collection-shaped APIs
Read a header such as protocol Ord (Eq) as "Ord
extends Eq". A type that implements Ord must also satisfy
the requirements of Eq.
For how protocols work as a language feature, read Protocols.
General protocols
These protocols cover iteration, comparison, operators, and hashing.
Iteration
- Iterable[A]: values that can produce an iterator with __iter__ and can therefore be used in for loops and other iteration-based APIs
Identity and comparison
- Identity: identity comparison with is and is not
- Eq: equality comparison with == and !=
- Ord (Eq): ordering with <, <=, >, and >=
If you see a constraint such as A(Ord) in a type
signature, it means values of type A support ordering.
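For example, a generic function can rely on such a constraint (a sketch using the bracketed constraint syntax that also appears in the numeric protocol examples later in this chapter):

# The A(Ord) constraint is what makes '<' available on values of type A.
def smallest[A(Ord)](a: A, b: A) -> A:
    if a < b:
        return a
    return b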
Operator families
- Logical: &, |, ^, and their in-place forms
- Plus: +, +=, and __zero__
- Minus: - and -=
- Times[A] (Plus): * and *=, with right-hand operand type A
- Div[A]: / and /=, with result type A
Hashing
- Hashable (Eq): values that can feed data into a hasher; required for dictionary keys and set elements
If you see a constraint such as A(Hashable), values of
type A can be used where hashing is required.
See Hashable for the full protocol reference and an implementation example.
Hashable
The Hashable protocol defines how values participate in hashing.
It is required for dictionary keys and set elements.
If a collection needs to look up a value by hash, the value's type must be hashable. That is why dictionary keys and set elements have this constraint.
Protocol definition
Hashable extends Eq, which means hashable values must also support
equality:
protocol Hashable (Eq):
    hash : (hasher) -> None
The built-in helper functions are:
def hash(x: Hashable) -> u64
def seed_hash(seed: u64, x: Hashable) -> u64
How hashing works
Hashing uses a two-part design:
- A hasher object accumulates hash input.
- A value's hash method feeds its data into that hasher.
When you call hash(x), Acton creates a hasher, asks x to feed its
state into it, and then finalizes the result as a u64.
p = Point(10, 20)
h = hash(p)
print("hash:", h)
Implementing Hashable
To make a custom type hashable, define both equality and hashing so they describe the same identity.
class Point:
    x: int
    y: int

    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y

extension Point(Hashable):
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

    def hash(self, h):
        self.x.hash(h)
        self.y.hash(h)
The important rules are:
- Hash all fields used by equality.
- Hash them in a stable order.
- Do not leave out part of the value's identity.
If two values compare equal, they must feed the same data into the hasher.
Hashable follows equality, not the other way around. If
two values compare equal but feed different data into the hasher, sets
and dictionaries can behave incorrectly in subtle ways even though the
program still typechecks.
Using hashable values in collections
Once a type implements Hashable, values of that type can be used in
sets and as dictionary keys:
def test_hashable_point():
    p1 = Point(1, 2)
    p2 = Point(3, 4)
    p3 = Point(1, 2)
    points = {p1, p2, p3}
    point_names = {p1: "origin", p2: "destination"}
    print("unique points:", len(points))
    print("point names:", point_names)
Built-in types
Many built-in value types implement Hashable, including:
- bool
- int, bigint, and the fixed-width integer types
- float
- complex
- str
- bytes
dict and set use Hashable for their key or element types, but are
not themselves documented as Hashable here.
Numeric protocols
These protocols describe the operations available on built-in numeric types.
Protocol hierarchy
- Number (Times[Self], Minus): core numeric behavior such as +, -, *, **, unary + and -, abs, real, imag, and conjugate
- Real (Number): float-style conversion and rounding operations such as __float__, trunc, floor, ceil, and round
- RealFloat (Real): floating-point real numbers
- Rational (Real): values with numerator and denominator
- Integral (Rational, Logical): integer-style operations such as //, %, shifts, bit operations, and indexing
Built-in implementations
- int, bigint, and the fixed-width signed and unsigned integer types implement Integral
- float implements RealFloat
- complex implements Number
Use the narrowest constraint that matches the operations you need:
Choose Number when you need ordinary arithmetic.
Choose Integral when you need integer-only operations such
as floor division, modulo, or bit shifts.
def square[A(Number)](x: A) -> A:
    return x * x

def bucket[A(Integral)](x: A, size: A) -> A:
    return x // size
Numeric constraints affect both operator availability and result
types. For example, Div[A] separates the type of the result
from the type of the operands, which is why integer division through
/ can return a different type than floor division through
//.
Collection protocols
These protocols describe the shared APIs behind built-in collection types.
These protocols describe behavior, not concrete storage. They let the language talk about "something indexable" or "something iterable" without naming one specific collection type.
Core collection protocols
- Indexed[A(Eq), B]: lookup, assignment, and deletion with []
- Sliceable[A] (Indexed[int, A]): slicing with start:stop:step
- Collection[A] (Iterable[A]): construction from an iterable and len
- Container[A(Eq)] (Collection[A]): membership with in and not in
Specialized collection protocols
- Sequence[A] (Sliceable[A], Collection[A], Times[int]): ordered, sliceable collections with repetition and mutating operations such as append, insert, and reverse
- Mapping[A(Eq), B] (Container[A], Indexed[A, B]): key/value collections with get, pop, keys, values, and items
- Set[A(Eq)] (Container[A], Ord, Logical, Minus): set operations, ordering, membership, and mutating updates such as add and discard
Built-in types make use of these protocols:
- list[A] implements Sequence[A]
- dict[A(Hashable), B] implements Mapping[A, B]
- set[A(Hashable)] implements Set[A]
The protocol surface is sometimes looser than a concrete type's full
requirements. For example, Mapping[A, B] talks about
key/value behavior in general, while dict specifically
requires hashable keys.
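As a sketch of programming against a protocol rather than one concrete collection type (the constraint spelling C(Container[A]) is an assumption, extrapolated from the A(Ord)-style constraint syntax used elsewhere in this chapter):

# Works for any type implementing Container, not just list or set.
def count_missing[A(Eq), C(Container[A])](haystack: C, wanted: list[A]) -> int:
    n = 0
    for w in wanted:
        if w not in haystack:
            n += 1
    return n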
Standard Library
This section documents modules that you import explicitly.
Use import to bring a standard library module into
scope.
Modules:
math
The math module provides the constant pi and common floating-point
functions.
Source:
import math
actor main(env):
    angle = math.pi / 4.0
    print(math.sin(angle))
    print(math.sqrt(9.0))
    env.exit(0)
Import the module first with import math, then call its
functions as math.sqrt(...), math.sin(...),
and so on.
The module currently exposes:
- pi
- sqrt, exp, log
- sin, cos, tan
- asin, acos, atan
- sinh, cosh, tanh
- asinh, acosh, atanh
These functions take and return float.
The module defines a RealFuns protocol and implements it
for float. The exported module functions delegate through
that protocol.
re
The re module provides regular expression matching.
Source:
import re
actor main(env):
    m = re.match(r"(foo[a-z]+)", "bla bla foobar abc123")
    if m is not None:
        print("Got a match:", m.group[1])
    env.exit(0)
Import the module with import re, then call functions
such as re.match(...).
re.match also accepts an optional start_pos to begin scanning at a
specific index (defaults to 0).
Output:
Got a match: foobar
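A sketch of the start_pos parameter (assuming it can be given as a keyword argument; the offset 8 skips past "bla bla " in the example string, so scanning begins where "foobar" starts):

import re

actor main(env):
    m = re.match(r"(foo[a-z]+)", "bla bla foobar abc123", start_pos=8)
    if m is not None:
        print("Got a match:", m.group[1])
    env.exit(0)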
Testing
Testing your code is a really good idea! While Acton's type system allows interfaces to be precisely defined, it is imperative that the behavior of a function or actor is tested!
Test functions and test actors are automatically discovered by the compiler. Run tests with acton test. Import the testing module and name your test functions or actors with a _test_ prefix. The type signature should match the intended test category. Use the assertion functions available in the testing module. Here is a simple unit test:
Source:
import testing
def _test_simple():
    testing.assertEqual(1, 1)
Run:
acton test
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.016 s
Final compilation step
Finished final compilation step in 0.437 s
Tests - module example:
simple: OK: 1523 runs in 50.001ms
All 1 tests passed (0.604s)
There are 4 kinds of tests:
- unit tests
  - small, simple tests of pure functions
- synchronous actor tests
  - involving one or more actors, returning results synchronously
- asynchronous actor tests
  - involving actors, but using asynchronous callbacks for return values
- environment tests
  - similar to async actor tests in that a callback is used for the return value
  - these tests have access to the full environment via the env argument and can thus communicate with the outside world
  - this is a source of non-determinism, so be mindful of it and avoid non-deterministic functions to the largest degree possible
When possible, strive to use unit tests rather than actor based tests and strive to avoid env tests.
For snapshot-based assertions, see Snapshot testing.
Cached test results
The Acton test runner caches test results, which means that repeated invocations of acton test might not actually (re)run tests. Cached failures and errors are still shown by default, so you never miss a failing test. Cached successes are hidden unless you pass --show-cached. Pass --no-cache to force all selected tests to run, even if cached results exist.
This means that the developer experience for test-driven development is great even for projects with a very large number of tests, as the content-hash-driven test runner only recompiles and reruns tests that are actually affected by a change.
Note that test input needs to be contained within .act source code files in order for the compiler to consider it part of the content hash. You cannot use external .txt files or similar as input to test functions, since the compiler won't consider those part of the implementation. Also see incremental compilation for more details on content hashing and how it applies to testing.
Snapshot tests are the main exception: even when the test code hash matches the cache, acton test still checks the expected snapshot file on disk against the cached snapshots/output/... metadata. If the output snapshot is missing, the expected file changed, or Acton cannot cheaply prove the expected file is older than the last produced output, the test is rerun instead of trusting the cached result.
Module Filtering
You can run tests from specific modules using the --module flag:
acton test --module foo --module bar
This will only run tests from the foo and bar modules, skipping all other test modules.
Capability-gated tests
Some tests depend on external capabilities (for example network services, hardware, or system setup). In tests that receive a test context argument (t), use t.require(...) and pass available capabilities with --tag:
import testing
def _test_external_service(t):
    t.require("external-service")
    # test logic that depends on external-service being available
Run with capabilities:
acton test --tag external-service
If the required capability is not enabled, the test is marked as skipped.
Capabilities are runtime environment signals used by t.require(...); they are not a pre-test selection/filter mechanism. This applies to test context objects like SyncT, AsyncT, and EnvT.
You can also skip explicitly:
def _test_todo(t):
    t.skip("not implemented yet")
Unit tests
Source:
import testing
def _test_simple():
    foo = 3 + 4
    testing.assertEqual(7, foo)
Run:
acton test
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.016 s
Final compilation step
Finished final compilation step in 0.442 s
Tests - module example:
simple: OK: 1565 runs in 50.079ms
All 1 tests passed (0.600s)
Unit tests are a good starting point for testing small units of your program. Pure functions are deterministic and are thus preferable for tests over non-deterministic tests using actors. You are limited in what you can do though, since all called functions must be pure.
The test discovery finds unit tests based on the name starting with _test_ and a function signature of mut() -> None or pure() -> None.
Once effect analysis has been improved in the compiler to contain scope local effects, the test discovery will only consider pure functions to be unit tests. See https://github.com/actonlang/acton/issues/1632
Snapshot testing can be enabled by returning a str. The Acton test framework will take care of recognizing the test as a snapshot test and comparing its output to the expected snapshot value.
Sync actor tests
Source:
import logging
import testing
actor MathTester():
    def add(a, b):
        return a + b

# It is possible to call actors in tests too, in which case the test is an
# "actor test" (the function gets a 'proc' effect inferred when calling actors).
def _test_syncact_simple():
    m = MathTester()
    testing.assertEqual(m.add(1, 2), 3)

# Actors with names prefixed with _test_ are also considered tests
actor _test_SyncTester():
    m = MathTester()
    testing.assertEqual(m.add(1, 2), 3)

# Use any test actor name by taking a testing.SyncT as only parameter
actor _SyncTester2(t: testing.SyncT):
    log = logging.Logger(t.log_handler)
    m = MathTester()
    log.info("Calculating numbers..")
    testing.assertEqual(m.add(1, 2), 3)

# The traditional function-based approach can also take a testing.SyncT arg
def _test_syncact(t: testing.SyncT):
    """A test using actors and synchronous control flow"""
    # We make use of an actor as the central point for running our test logic.
    s = _SyncTester2(t)
Run:
acton test
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.027 s
Final compilation step
Finished final compilation step in 0.526 s
Tests - module example:
SyncTester: OK: 1029 runs in 50.015ms
SyncTester2: OK: 1103 runs in 50.002ms
syncact: OK: 1175 runs in 50.005ms
syncact_simple: OK: 1231 runs in 50.014ms
All 4 tests passed (0.655s)
Since the Acton RTS is multi-threaded and actors are scheduled concurrently on worker threads, using actors implies a degree of non-determinism. Unlike unit tests, which are completely deterministic, actor tests are fundamentally non-deterministic. You can still write deterministic tests as long as you pay attention to how you construct your test results.
testing.SyncT also provides:
- t.require(tag) to skip a test when a required capability is not enabled via acton test --tag TAG
- t.skip(reason) to explicitly skip the current test
For example, actor A might be scheduled before or after actor B so if the test relies on ordering of the output, it could fail or succeed intermittently. Interacting with the surrounding environment by reading files or communicating over the network introduces even more sources of non-determinism. Avoid it if you can.
The test discovery system finds synchronous tests through:
- Functions: with names starting with _test_ and signatures proc() -> None or proc(testing.SyncT) -> None
- Actors: that take a testing.SyncT parameter (the _test_ prefix is optional), or actors with names starting with _test_ and no parameters
Snapshot testing can be enabled by returning a str. This only works for test functions, not test actors, so use a wrapping function if you want snapshot testing. The Acton test framework will take care of recognizing the test as a snapshot test and comparing its output to the expected snapshot value.
Async actor tests
Source:
import logging
import testing
actor MathTester():
    def add(a, b):
        return a + b

actor _AsyncTester(t: testing.AsyncT):
    log = logging.Logger(t.log_handler)

    def test():
        log.info("AsyncTester.test() doing its thing")
        t.success()
        # Provide output to .success to enable snapshot testing
        # t.success("some_output")
        # Or if things aren't going well, use .failure or .error
        # t.failure(ValueError("whopsy"))
        # t.error(ValueError("whopsy"))

    after 0: test()
Run:
acton test
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.028 s
Final compilation step
Finished final compilation step in 0.516 s
Tests - module example:
asyncact1: OK: 1171 runs in 56.181ms
All 1 tests passed (0.695s)
If a particular module is written to be called asynchronously, you will need to use asynchronous tests to test it.
testing.AsyncT also provides:
- t.require(tag) to skip a test when a required capability is not enabled via acton test --tag TAG
- t.skip(reason) to explicitly skip the current test
The test discovery system finds asynchronous tests by looking for actors that take a testing.AsyncT parameter.
Snapshot testing can be enabled by providing an output of type str to the .success(output: ?str) function. The Acton test framework will take care of recognizing the test as a snapshot test and comparing its output to the expected snapshot value.
Env tests
When you need to test functionality that accesses the environment, like reading files on disk or connecting to something across the network, you need an env test. Do beware of errors related to test setup though, since you now depend on the external environment. TCP ports that you try to listen to might already be taken. Files that you assume exist might not be there.
Source:
import logging
import testing
actor _TestWithEnv(t: testing.EnvT):
    log = logging.Logger(t.log_handler)

    def test():
        log.info("EnvTester.test() running, going to check with the env", {"worker_threads": t.env.nr_wthreads})
        t.success()
        # Provide output to .success to enable snapshot testing
        # t.success("some_output")
        # Or if things aren't going well, use .failure or .error
        # t.failure(ValueError("whopsy"))
        # t.error(ValueError("whopsy"))

    after 0: test()
Run:
acton test
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.023 s
Final compilation step
Finished final compilation step in 0.484 s
Tests - module example:
envtest1: OK: 1213 runs in 50.135ms
All 1 tests passed (0.689s)
The test discovery system finds environment tests by looking for actors that take a testing.EnvT parameter.
testing.EnvT also provides:
- t.require(tag) to skip a test when a required capability is not enabled via acton test --tag TAG
- t.skip(reason) to explicitly skip the current test
Snapshot testing can be enabled by providing an output of type str to the .success(output: ?str) function. The Acton test framework will take care of recognizing the test as a snapshot test and comparing its output to the expected snapshot value.
Snapshot testing
Snapshot tests compare a produced string with a stored expected value. You can produce snapshot output from unit/sync test functions by returning str, or from async/env tests by calling t.success("...").
import testing
def _test_rendered_profile() -> str:
    return '{"name":"Alice","role":"admin"}'
Acton writes snapshot files in your project under snapshots/output/<module>/<test_name> (latest produced value) and snapshots/expected/<module>/<test_name> (expected value used for comparison). Test names in those paths use the display test name, so _test_foo becomes foo.
When an expected snapshot differs (or is missing), running tests normally shows a mismatch:
$ acton test
Building project in /home/user/example
...
Tests - module main:
rendered_profile: FAIL : 254 runs in 50.045ms
testing.NotEqualError: Test output does not match expected snapshot value.
@@ -1,1 +1,1 @@
-{"name":"Alice","role":"user"}
+{"name":"Alice","role":"admin"}
1 out of 1 tests failed (0.244s)
When the new output is correct, accept it as the new expected value with:
$ acton test --accept
Building project in /home/user/example
...
Tests - module main:
rendered_profile: UPDATED : 243 runs in 50.445ms
All 1 tests passed (0.246s)
--accept is the idiomatic flag (aliases: --snapshot-update, --golden-update).
For tests that produce snapshot output, snapshots/output/... is written on every run. That is intentional: it makes it easy to use external diff tools (diff, vimdiff, meld, etc.) against snapshots/expected/... without any extra export step.
For example, to inspect the current snapshot difference for the test above:
diff -u snapshots/expected/main/rendered_profile snapshots/output/main/rendered_profile
Common workflow
- Run acton test.
- If there is a snapshot mismatch, inspect it directly, for example: diff -u snapshots/expected/main/rendered_profile snapshots/output/main/rendered_profile
- Accept changes with acton test --accept.
- Run acton test again to confirm everything passes.
An alternative workflow is to accept first, then inspect version-controlled snapshot changes with Git:
- Run acton test --accept.
- Review what changed with git diff -- snapshots/expected.
- Keep or revert changes before commit, then run acton test.
Failures vs errors
Tests can have three different outcomes: success, failure, and error.
Success and failure are the two common cases: success is when the test meets the expected assertions, and failure is when it fails to meet a test assertion like testing.assertEqual(1, 2). We also distinguish a third case, test errors, for when a test does not run as expected and hits an unexpected exception. This could indicate a design issue or that the test environment is not as expected.
All test assertions raise exceptions inheriting from AssertionError which are considered test failures. Any other exception will be considered a test error.
For example, if a test attempts to retrieve https://dummyjson.com/products/1 and check that the returned JSON looks a certain way, it would be a test failure if the returned JSON does not match the expected value. If we try to connect with an invalid URL, like htp://, we would get a different exception, and that would be considered a test error. It's probably a bad idea to try to connect to something on the Internet in a test, so avoid that and other sources of non-determinism when possible.
Unit tests
Source:
import random
import testing
def _test_failure():
    testing.assertEqual(1, 2)

def _test_flaky():
    i = random.randint(0, 2)
    if i == 0:
        return
    elif i == 1:
        testing.assertEqual(1, 2)
    else:
        raise ValueError("Random failure")

def _test_error() -> None:
    # Now we could never use a unit test to fetch things from the Internet
    # anyway, but it's just to show what the results look like
    raise ValueError()
Run:
acton test
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.020 s
Final compilation step
Finished final compilation step in 0.482 s
Tests - module example:
error: ERR: 454 errors out of 454 runs in 52.733ms
ValueError:
flaky: FLAKY FAIL: 231 failures out of 471 runs in 52.819ms
testing.NotEqualError: Expected equal values but they are non-equal. A: 1 B: 2
failure: FAIL: 408 failures out of 408 runs in 52.837ms
testing.NotEqualError: Expected equal values but they are non-equal. A: 1 B: 2
1 error and 2 failure out of 3 tests (0.691s)
Flaky tests
Flaky tests are those that have different outcomes on different runs, i.e. they are not deterministic. To combat these, acton test will by default attempt to run tests multiple times to ensure that the result is the same. It runs as many test iterations as possible for at least 50 ms. If a test is flaky, this will be displayed in the test output.
Source:
import random
import testing
def _test_flaky():
    i = random.randint(0, 2)
    if i == 0:
        return
    elif i == 1:
        testing.assertEqual(1, 2)
    else:
        raise ValueError("Random failure")
Run:
acton test
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.017 s
Final compilation step
Finished final compilation step in 0.453 s
Tests - module example:
flaky: FLAKY FAIL: 565 failures out of 1140 runs in 50.043ms
testing.NotEqualError: Expected equal values but they are non-equal. A: 1 B: 2
1 out of 1 tests failed (0.625s)
Note how this test case is only made possible because the random module has incorrect effects. The type says it is pure while in reality, it is not. There is an issue to improve this by applying a proper effect to the random module, see https://github.com/actonlang/acton/issues/1729, after which this example needs to be rewritten.
Performance testing
It is also possible to run tests in a performance mode, which uses the same basic test definitions (so you can run your tests both as logic tests and for performance purposes) but alters the way in which the tests are run. In performance mode, only a single test is run at a time, unlike the normal mode in which many tests typically run concurrently.
To get good numbers in performance mode, it's good if test functions run for at least a couple of milliseconds. With very short tests, very small differences lead to very large percentage differences.
Source:
import testing
def _test_simple():
    a = 0
    for i in range(99999):
        a += i
Run:
acton test perf
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.016 s
Final compilation step
Finished final compilation step in 0.451 s
Tests - module example:
simple: OK: 3.21ms Avg: 4.20ms 5.11ms 106 runs in 1005.261ms
All 1 tests passed (1.571s)
(note that the output is rather wide, scroll horizontally to see the full output)
See Stress testing for concurrency-focused stress runs.
Stress testing
Stress testing is meant for finding concurrency bugs, especially race conditions in FFI / C integrations.
Run it with:
acton test stress
How stress mode runs
- Runs one test function/actor at a time.
- For that test, starts multiple concurrent workers of the same test in one process.
- Worker count defaults to roughly 1.5 * nr_wthreads, so workers must share RTS worker threads.
- Override it with --stress-workers N when you want a specific level of oversubscription.
- Stress runs are always fresh (no test-result cache reuse).
- By default, stress runs for up to 5 seconds per test (--max-time 5000).
- Continuous mode is available with --max-time 0.
- In continuous mode, --max-iter is unbounded unless you set it.
- The default stress --min-time is 1 second unless overridden (used for calibration; stress run length is controlled by --max-time).
Worker scheduling
Stress workers are split into:
- A no-drift cohort (at least 2 workers, scaling to roughly 25% of workers)
- A staggered-drift cohort (small per-iteration microsecond offsets)
This combines synchronized overlap windows with evolving offsets over time.
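As a rough sketch in Python (the exact sizing formula is an assumption based on the description above: at least 2 no-drift workers, scaling to roughly 25% of the total):

```python
def split_workers(total):
    # No-drift (sync) cohort: at least 2 workers, roughly 25% of the total,
    # capped at the total. The remaining workers get staggered drift offsets.
    sync = min(total, max(2, round(total * 0.25)))
    drift = total - sync
    return sync, drift
```

With 8 workers this gives a 2/6 split, consistent with a workers=8 (sync=2 drift=6) summary line.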
Per-worker live stats
In an interactive terminal, stress mode shows one live status line per worker.
- sync worker line: wN sync RUN/DONE ... @ RATE/s cur=0us tot=0us
- drift worker line: wN drift RUN/DONE ... @ RATE/s cur=DRIFTus tot=TOTALus
The main test line also carries a compact worker summary:
- total: ... RUNS runs in DURATIONms @ RATE/s
- worker mix: workers=N (sync=S drift=D)
- iteration estimate, granularity and observed coverage: iter~...ms coarse~...ms sweep=... calib=... cov=SEEN/TOTAL(PCTpct)
  - iter~ is a moving estimate of one test iteration duration
  - coarse~ is the current phase coarseness (lower is finer)
  - in continuous mode (--max-time 0), coarse~ decreases stepwise over time as sweep depth increases
  - cov is the number of observed occupied phase bins at the current coarseness (resets when coarseness is refined)
Example:
racy_sum: RUN : 2400 runs in 1500.000ms @ 1600.0/s | workers=8 (sync=2 drift=6) iter~2.340ms coarse~0.037ms sweep=64 cov=37/64(57.8pct)
Use normal test flags to tune duration/iterations, for example:
acton test stress --max-time 30000 --min-iter 100
Pin a higher worker count explicitly:
acton test stress --stress-workers 24 --max-time 30000
Run continuously until interrupted:
acton test stress --max-time 0
Press Ctrl-C to stop and print the partial stress result collected so far.
Performance comparisons
When running in performance mode you can record a snapshot of performance using acton test perf --record. A perf_data file is written to disk with the stored performance data. Subsequent test runs will read this file and show a comparison. The difference is displayed as a percentage increase or decrease in time.
Source:
import testing
def _test_simple():
a = 0
for i in range(99999):
a += i
Run:
acton test perf --record
acton test perf
Output:
Building project in /home/user/foo
Compiling example.act for release
Finished compilation in 0.017 s
Final compilation step
Finished final compilation step in 0.452 s
Tests - module example:
simple: OK: 3.25ms Avg: 4.16ms 7.38ms 122 runs in 1006.002ms
All 1 tests passed (1.565s)
Building project in /home/user/foo
Compiling example.act for release
Already up to date, in 0.000 s
Final compilation step
Finished final compilation step in 0.116 s
Tests - module example:
simple: OK: 3.23ms -0.50% Avg: 4.17ms +0.19% 6.35ms -13.91% 119 runs in 1001.375ms
All 1 tests passed (1.215s)
(note that the output is rather wide, scroll horizontally to see the full output)
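The comparison arithmetic is simple. A Python sketch of the percentage shown next to each timing (illustrative, not the actual implementation):

```python
def percent_change(recorded_ms, current_ms):
    # Positive means slower than the recorded baseline, negative means faster.
    return (current_ms - recorded_ms) / recorded_ms * 100.0
```

A run at 6.35ms against a recorded 7.38ms works out to roughly -13.96%, in line with the -13.91% shown above (small differences come from rounding of the underlying measurements).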
Compilation
Acton is a compiled language and as such, outputs binary executables.
While compiled languages are often associated with long compilation times that slow down development, Acton goes to great lengths to offer a great developer experience. Content hashing is used extensively to carefully invalidate and recompile only necessary parts, see incremental compilation for more details.
It is possible to influence the compilation process and the output in various ways.
Optimization modes
Acton defaults to Debug builds. Debug builds compile fast and include
debug symbols, which makes them the right default during development.
For release builds, use acton build --release to better optimize the
final executable. For standalone files, use acton --release foo.act.
--release and --release=safe select the normal release mode.
--release=small optimizes for a smaller binary size. --release=fast
can result in a faster program, but it is generally discouraged because
it disables safety checks and alters language semantics.
Optimized for native CPU features
The default target is somewhat conservative to ensure a reasonable amount of compatibility. On Linux, the default target is GNU libc version 2.27, which makes it possible to run Acton programs on Ubuntu 18.04 and similarly old operating systems. Likewise, a generic x86_64 CPU is assumed, which means that newer CPU instruction set extensions are not used.
To compile an executable optimized for the local computer, use --target native. In many cases this leads to a significantly faster program, often running 30% to 100% faster.
Statically linked executables using musl for portability
On Linux, executable programs can be statically linked using the Musl C library, which maximizes portability as there are no runtime dependencies at all.
To compile an executable optimized for portability using musl on x86_64, use --target x86_64-linux-musl.
A program compiled with default settings is dynamically linked with GNU libc & friends:
$ acton helloworld.act
Building file helloworld.act
Compiling helloworld.act for release
Finished compilation in 0.013 s
Final compilation step
Finished final compilation step in 0.224 s
$ ldd helloworld
linux-vdso.so.1 (0x00007fff2975b000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f11f472a000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f11f4725000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f11f4544000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f11f453f000)
/lib64/ld-linux-x86-64.so.2 (0x00007f11f4827000)
$
A program statically linked against musl has no runtime dependencies:
$ acton helloworld.act --target x86_64-linux-musl
Building file helloworld.act
Compiling helloworld.act for release
Finished compilation in 0.013 s
Final compilation step
Finished final compilation step in 0.224 s
$ ldd helloworld
not a dynamic executable
$
Although untested, static linking with musl should work on other CPU architectures.
macOS does not support statically linked executables.
Cross-compilation
Acton supports cross-compilation, which means that it is possible to develop on one computer, say a Linux computer with an x86-64 CPU, but build an executable binary that can run on a macOS computer.
Here's such an example. We can see how, by default, the output is an ELF binary for x86-64. By setting the --target argument, acton will instead produce an executable for a Mac.
$ acton --quiet helloworld.act
$ file helloworld
helloworld: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.0.0, with debug_info, not stripped
$ acton --quiet helloworld.act --target x86_64-macos-none
$ file helloworld
helloworld: Mach-O 64-bit x86_64 executable, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|PIE>
It is not only possible to compile for other operating systems, but also for other CPU architectures. For example, use --target aarch64-macos-none to produce a binary executable for an Apple M1/M2 CPU.
Prebuilt libraries
Acton ships with prebuilt libraries for the local platform's default target, i.e. if you install Acton on an x86-64 Linux machine, it will have libraries prebuilt for x86_64-linux-gnu.2.27. The default target uses these prebuilt libraries, which results in a fast build:
$ acton helloworld.act
Building file helloworld.act
Compiling helloworld.act for release
Finished compilation in 0.013 s
Final compilation step
Finished final compilation step in 0.224 s
$
When targeting something other than the default target, the entire Acton system, including builtins, the runtime system, the standard library and external library dependencies, is built from source, which can take a significant amount of time. The build process is highly parallelized and cached. For example, on an AMD 5950X with 16 cores / 32 threads, it takes around 7 seconds to do a complete rebuild for a small Acton program, as can be seen here:
$ acton helloworld.act --target aarch64-macos-none
Building file helloworld.act
Compiling helloworld.act for release
Finished compilation in 0.012 s
Final compilation step
Finished final compilation step in 6.847 s
$
Build cache
In an Acton project there is a build cache, stored in a directory called build-cache in the project directory. The cache is always used for the project's local files. If a non-default --target is used, the built output of the Acton system is also stored in the cache, which means that only the first build is slow. Any subsequent build uses the cache and runs very fast, as in this example, where the first invocation takes 6.120 seconds and the second one runs in 0.068 seconds.
$ acton new hello
Created project hello
Enter your new project directory with:
cd hello
Compile:
acton build
Run:
./out/bin/hello
Initialized empty Git repository in /home/kll/hello/.git/
$ cd hello/
$ acton build --target native
Building project in /home/kll/hello
Compiling hello.act for release
Finished compilation in 0.012 s
Final compilation step
Finished final compilation step in 6.120 s
$ acton build --target native
Building project in /home/kll/hello
Compiling hello.act for release
Already up to date, in 0.000 s
Final compilation step
Finished final compilation step in 0.068 s
$
When compiling standalone .act files, there is no project and thus no persistent cache, so using a custom --target will always incur a penalty.
Content hash driven incremental compilation
Acton tracks changes at a finer level than whole modules so builds stay fast as your project grows. The compiler keeps multiple different hashes and uses them to decide what and how to recompile.
What gets hashed and tracked
- moduleSrcBytesHash: hash of the raw bytes for a whole .act file. It is very cheap to read and hash a .act file from disk (GB/s). This is stored in the .ty file and is the authority for deciding whether a cached typed module still matches the source. Parsing is not incremental, it only supports re-parsing a complete module, which is also why a single hash for the entire .act module file makes sense.
- per-name srcHash: hash of a name's source code. Since this is only the hash of the source code, it can be computed after the parser and before type checking in order to determine which functions need to be rerun through later passes, including type checking, which is typically a relatively expensive pass. Note how the next pubHash and implHash are computed after type checking, so they cannot be used to determine whether type checking should be rerun for a function.
- per-name pubHash: hash of a name's public interface (its type signature). Downstream modules only need to re-typecheck if a pubHash they depend on changes. The pubHash also contains the hashes of dependencies, and if those change, our pubHash will change, thus causing a re-typecheck.
- per-name implHash: hash of a name's implementation plus the impl hashes it depends on. If an implHash changes, we re-run back passes and tests.
- per-name pubDeps: the public (type signature) hashes of other names that we depend on. If the hashes of any deps change, we must re-typecheck.
- per-name implsDeps: the hashes of the implementations of other names that we depend on. If the hashes of any deps change, we must rerun back passes.
Most names have both a pubHash and implHash. Some derived internal names only have implHash.
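The propagation rules can be sketched in Python (illustrative pseudologic, not the compiler's actual data structures): a changed pubHash in a dependency forces re-typechecking, while a changed implHash only re-runs back passes and tests.

```python
def stale_work(name, pub_deps, impl_deps, changed_pub, changed_impl):
    # Decide what work a name needs after its dependencies changed.
    for dep in pub_deps:
        if dep in changed_pub:
            return "front"   # re-typecheck: "pub changes in <dep> (used by <name>)"
    for dep in impl_deps:
        if dep in changed_impl:
            return "back"    # re-run back passes and tests: "impl changes in <dep>"
    return None              # nothing propagates to this name
```

Note that a pub change implies more work than an impl change, which is why it is checked first.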
How .ty cache validity is decided
Each module gets a cached typed interface file in out/types/<module>.ty.
That cache stores:
- the module source content hash (moduleSrcBytesHash)
- fast-path source metadata such as modification time, change time, and file size
- file identity metadata where available, such as inode/device on POSIX
- the compiler/cache schema version
The important rule is:
- content hash is the correctness authority
- filesystem metadata is only a fast path
In practice, Acton decides reuse like this:
- If the .ty file is missing, unreadable, or from an incompatible cache schema version, Acton reparses the .act file and rebuilds the .ty.
- If the cached source metadata still matches the current source file and the source mtime is strictly older than the .ty mtime, Acton reuses the .ty header immediately without reading and hashing the source.
- If the metadata differs, or the source and .ty mtimes are equal, Acton reads the .act file and compares its content hash to the stored moduleSrcBytesHash.
- If the hash matches, the source content is unchanged, so Acton reuses the cached .ty and refreshes the stored source metadata.
- If the hash differs, the source really changed, so Acton reparses and recompiles that module.
This means harmless metadata drift, like a touch, checkout, restore, copy,
or cross-machine sync, does not by itself force a front-end rebuild. It also
means misleading timestamps cannot cause stale typed-module reuse, because
Acton falls back to the source content hash before trusting the cache.
The strict source mtime < .ty mtime check matters on filesystems with coarse
mtime resolution. If a source edit and .ty write land in the same timestamp
tick, equal mtimes are ambiguous: the source may already have changed even
though the cached source metadata still looks identical. Acton therefore treats
equal source and .ty mtimes as a signal to hash the current source instead of
taking the metadata-only fast path.
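The decision procedure above can be sketched in Python (illustrative; the field names like src_meta are hypothetical, and sha256 stands in for whatever hash function Acton actually uses):

```python
import hashlib
import os

def should_reuse_ty(act_path, ty):
    # ty is a dict standing in for what the .ty cache stores: source
    # metadata, the .ty file's own mtime, and the module source content hash.
    st = os.stat(act_path)
    meta = (st.st_mtime_ns, st.st_size)
    # Fast path: metadata unchanged AND source strictly older than the .ty.
    if meta == ty["src_meta"] and st.st_mtime_ns < ty["ty_mtime_ns"]:
        return True
    # Fall back to the correctness authority: hash the current source bytes.
    with open(act_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == ty["moduleSrcBytesHash"]
```

A touched-but-unchanged file fails the fast path (metadata drifted) but still reuses the cache via the content hash, while a genuinely edited file fails both checks.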
What changes cause what work
Change function body, same signature
# a.act
def apa() -> int:
return 1
def a() -> int:
return apa()
# c.act
import a
import testing
def _test_foo() -> None:
testing.assertEqual(a.a(), 1)
If you change apa() to return 2, only the impl hashes change since the return type and overall type signature remains the same. c does not re-typecheck, but back passes and tests are re-run.
Example acton build --verbose output (trimmed):
Stale a: source changed
Stale c: impl changes in a.a (used by _test_foo)
Change a signature
If a.a() changes its return type, its pubHash changes. Any module that uses a.a() will re-run front passes.
Stale c: pub changes in a.a (used by _test_foo)
Add or remove an unused import
If no name actually uses the import, per-name deps do not change, so nothing propagates. Changes are only computed and propagated for names that are actually in use, which also means that it is possible to create quite large and monolithic modules without paying a higher cost for longer compilation times of downstream dependents.
Code generation staleness
Generated C/H files embed the module impl hash. If the embedded hash differs from the current module impl hash, the compiler treats the generated code as out of date and regenerates it.
Tests and hashes
Test results are cached by the per-name implHash (plus impl deps). Cached failures are still shown by default. Use --show-cached to include cached successes, or --no-cache to force reruns.
For more on tests, see the Testing section.
Package Management
acton offers integrated package management to declare dependencies on
other Acton packages and automatically download them from their sources
on the Internet.
This sits next to the project tree, not instead of it. See
Projects for how local source discovery works and
Modules for how files under src/ become module names.
The guiding principle behind Acton's package management is to strive for determinism, robustness, and safety. This is primarily achieved by only resolving dependencies at design time. That is, the developer of a particular Acton package determines its exact dependencies, not whoever might be downloading and building it.
The identity of a package at a particular point in time, which you can think of as the version of a package, is the hash of its content. That is the foundation of Acton's package management. Each dependency has a hash of its content. A URL is just one place from which this particular version of a package can be downloaded. The hash of packages is determined and recorded at design time. Anyone pulling down and building dependencies will have the hash verified to ensure a deterministic build.
There is no central package repository. Instead, dependencies are defined as URLs from which the dependency package can be downloaded. This is typically a tar.gz file from GitHub, GitLab, or a similar source hosting site. Again, the identity of a version of a package is the content hash. The URL is only where to get it.
Acton is statically compiled. All dependencies are fetched and included at compile time, so there are no runtime dependencies.
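The verification step can be sketched in Python. The recorded hashes in Build.act look like multihash-style strings, where a 1220 prefix would tag a sha2-256 digest with a 32-byte length; treating them that way is an assumption for illustration:

```python
import hashlib

def verify_dependency(content: bytes, recorded: str) -> bool:
    # Assumption: "1220" = multihash tag for sha2-256 with a 32-byte digest.
    if not recorded.startswith("1220"):
        raise ValueError("unrecognized hash format")
    return hashlib.sha256(content).hexdigest() == recorded[4:]
```

If the downloaded bytes do not hash to the recorded value, the build can be aborted rather than silently using different code.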
Project lineage fingerprint
Each Acton project must declare a fingerprint in Build.act. The fingerprint represents the project's lineage: a stable identity that stays the same across releases of the same project. This is separate from dependency content hashes:
- Content hashes identify a specific version of a dependency.
- Fingerprints identify the project itself and help Acton deduplicate dependencies and generate consistent build metadata.
Example:
name = "myproject"
fingerprint = 0x1234abcd5678ef00
What the fingerprint is
- A 64-bit hex value (0x...).
- The upper 32 bits are derived from the project name.
- The lower 32 bits are random.
Rename vs fork
Renaming a project breaks lineage, so generate a new fingerprint for the new name. Forking a project also creates a new lineage, so generate a fresh fingerprint for the fork.
Current behavior
- name and fingerprint are required in every project.
- Acton validates that the fingerprint matches the name's lineage prefix.
- If they don't match or either is missing, the build fails with guidance on how to fix it.
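A hypothetical sketch of how such a fingerprint could be generated (the actual derivation Acton uses for the name-based upper 32 bits is not specified here; sha256 is an assumption):

```python
import hashlib
import secrets

def make_fingerprint(name: str) -> int:
    # Upper 32 bits: derived from the project name (the lineage prefix).
    prefix = int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")
    # Lower 32 bits: random, so independently generated fingerprints differ.
    return (prefix << 32) | secrets.randbits(32)
```

Fingerprints generated for the same name share the lineage prefix in the upper 32 bits, which is the part a validator can check against the declared name.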
Add Dependency
Add a dependency to your project by using acton pkg add and providing the URL to the project and a local reference name.
In this case we add the example foo package as a dependency.
acton pkg add https://github.com/actonlang/foo/archive/refs/heads/main.zip foo
This will fetch the dependency and add it to the dependencies block in Build.act, resulting in something like:
dependencies = {
    "foo": (
        url="https://github.com/actonlang/foo/archive/refs/heads/main.zip",
        hash="1220cd47344f8a1e7fe86741c7b0257a63567b4c17ad583bddf690eedd672032abdd",
    ),
}
zig_dependencies = {}
It is possible to edit Build.act by hand, but adding dependencies requires filling in the hash field, which is somewhat tricky.
The foo package provides a single foo module with a foo function (that appropriately returns foo). We can now access it from our main actor:
import foo
actor main(env):
    print(foo.foo())
    env.exit(0)
Local Dependencies
It is possible to use dependencies available via a local file system path by setting the path attribute. Edit Build.act and add or modify an existing dependency. Set the path attribute to a relative path, e.g.:
dependencies = {
    "foo": (
        path="../foo"
    ),
    "local_lib": (
        path="deps/local_lib"
    ),
}
zig_dependencies = {}
These are best used for dependencies located within the same git repository or similar. All users need to have the same relative path to the dependency, so if the paths stretch over multiple repositories, users need to keep the paths aligned.
You can temporarily override the path to a dependency through the --dep argument, e.g. acton build --dep foo=../foo. This can be useful to fork a library and make local modifications to it before submitting them back upstream.
Override the path to a dependency
The configuration in Build.act sets the path or url that is normally used for a dependency. It is possible to temporarily override the path through the --dep argument to acton build.
Let's say we have the following configuration:
dependencies = {
    "foo": (
        url="https://github.com/actonlang/foo/archive/refs/tags/v1.0.zip",
        hash="1220cd47344f8a1e7fe86741c7b0257a63567b4c17ad583bddf690eedd672032abdd",
    ),
}
zig_dependencies = {}
Now we want to make some modifications to the foo library, so we clone it to a local path. We can now build our project using acton build --dep foo=../foo to temporarily override the foo dependency to use the path ../foo instead of the url in the configuration.
Remove Dependency
You can remove a dependency from your project with acton pkg remove:
acton pkg remove foo
Fetch Dependencies / Enable Airplane Mode
You can fetch all the dependencies of a project by using acton fetch. It will download the dependencies specified in Build.act to the cache.
acton fetch enables you to work offline (a.k.a. airplane mode).
C / C++ / Zig dependencies
Much like dependencies on other Acton packages, an Acton project can depend on a Zig package which could be a C / C++ or Zig library, as long as it has a build.zig file.
- acton zig-pkg add URL NAME --artifact X --artifact Y: list the libraries you want to link with as artifacts
- acton zig-pkg remove NAME
acton zig-pkg add https://github.com/allyourcodebase/zlib/archive/refs/tags/1.3.1.tar.gz zlib --artifact z
dependencies = {}
zig_dependencies = {
    "zlib": (
        url="https://github.com/allyourcodebase/zlib/archive/refs/tags/1.3.1.tar.gz",
        hash="122034ab2a12adf8016ffa76e48b4be3245ffd305193edba4d83058adbcfa749c107",
        artifacts=["z"]
    ),
}
Security and Trust (or the lack thereof)
Working with Zig / C / C++
Acton has C ABI compatibility, which makes it trivial to call C functions and fairly simple to call Zig and C++ via C wrapper functions. If you want to integrate a library written in one of these languages, this page is for you.
Regardless of the foreign language used, we need to consider a few things:
- memory allocation: it must play well with the Acton GC
  - You can allocate memory via classic malloc or via the Acton GC malloc
    - acton_malloc: normal GC malloc
    - acton_malloc_atomic: for allocations that are guaranteed to not contain pointers
  - Better safe than sorry: use the GC malloc when in doubt
  - Always use the Acton GC malloc for actor and class attributes and similar
    - object and actor instances are garbage collected by the GC and there is no destructor function, so if you had used classic malloc there would be no good place for the free
  - Within pure functions, you can use classic malloc, but be sure to free the allocations before the function returns, even on error paths
- thread safety: the Acton RTS is threaded and actors are executed concurrently by different threads
  - in general, we strive to only keep data per actor, and since an actor executes sequentially, we do not need thread safety measures like locks; just make sure you don't share data between actors "under the hood"
  - libraries must not use global variables, though
- asynchronous I/O: the Acton RTS performs asynchronous I/O and any library that performs I/O needs to conform to this model
Integrating a C library (zlib)
This is a guide to integrating C libraries in Acton code. We will use the zlib compression library, written in C, to build an Acton module that supports zlib compression and decompression.
We will only focus on the inflate and deflate functions in zlib. They are pure functions (meaning they only take some input and return some output; they do not have any side effects like writing to shared state), which makes them easier to integrate than anything that does I/O. While zlib does expose functions to interact with files, we don't want to reimplement file-related functionality since this is already supported by the Acton stdlib.
Create new project
Let's start by making a new Acton project, let's call it acton-zlib. New projects are created with an example "Hello world" app. Let's remove it and start from scratch.
acton new acton-zlib
cd acton-zlib
rm src/*
Acton's low level build system - the Zig build system
The Acton compiler parses .act source code, runs through all its compilation passes (type checking, CPS conversion, lambda lifting etc.) and finally produces C code. Internally, Acton then uses the Zig build system to compile the generated C code into libraries and finally binary executables.
To add a C library dependency, it first needs to be buildable using the Zig build system, which means that it needs a build.zig file, the config file for the Zig build, somewhat similar to the CMakeLists.txt of CMake. Some projects have already adopted a build.zig in the upstream repo, like PCRE2 and the Boehm-Demers-Weiser GC (both of which are used by Acton). In some cases, there are forks of projects with build.zig added. Otherwise you will need to write one for yourself, which is usually simpler than it might first seem.
Add the zlib C library as a Zig dependency
In the case of zlib, there is already a repo available with a build.zig for zlib. Navigate to the Tags page, find 1.3.1 and the link to the source files, i.e. https://github.com/allyourcodebase/zlib/archive/refs/tags/1.3.1.tar.gz.
Add it to our acton-zlib project:
acton zig-pkg add https://github.com/allyourcodebase/zlib/archive/refs/tags/1.3.1.tar.gz zlib --artifact z
Note the --artifact z which is provided to instruct which library to link with. Headers from the zlib library, like zlib.h, will now become visible to C files in our project and the z library will be linked in with our executables. The easiest way to discover what the artifacts are called is by inspecting the build.zig file of the package. This particular zlib build.zig starts like this:
const std = @import("std");

pub fn build(b: *std.Build) void {
    const upstream = b.dependency("zlib", .{});

    const lib = b.addStaticLibrary(.{
        .name = "z",
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });
    lib.linkLibC();
    lib.addCSourceFiles(.{
        .root = upstream.path(""),
        .files = &.{
            "adler32.c",
            "crc32.c",
            ...
It is the .name argument to addStaticLibrary that tells us the name of the artifact. Zig packages might expose multiple such artifacts, as is the case for mbedtls.
acton zig-pkg add will fetch the package from the provided URL and save the hash sum to Build.act, resulting in:
dependencies = {}
zig_dependencies = {
    "zlib": (
        url="https://github.com/allyourcodebase/zlib/archive/refs/tags/1.3.1.tar.gz",
        hash="122034ab2a12adf8016ffa76e48b4be3245ffd305193edba4d83058adbcfa749c107",
        artifacts=["z"]
    ),
}
Create zlib.act Acton module
Next up we need to create the Acton zlib module. Open src/zlib.act and add a compress and decompress function:
pure def compress(data: bytes) -> bytes:
    NotImplemented

pure def decompress(data: bytes) -> bytes:
    NotImplemented
The NotImplemented statement tells the compiler that the implementation is not written in Acton but rather external. When there is a .ext.c file, the compiler expects it to contain the implementations for the NotImplemented functions. Also note the explicit types. Normally the Acton compiler can infer types, but since there is no Acton code here, only C code, there is nothing to infer from.
Now create src/zlib.ext.c, which is where we will write the actual implementation of these functions. We need to add an __ext_init__ function, which is run by the Acton RTS on module load and must always exist. There is nothing in particular to do for zlib, so let's just create an empty function, like so:
void zlibQ___ext_init__() {}
Next, we need to fill in the C functions that map to the Acton functions compress and decompress. By invoking acton build we can get the compiler to generate a skeleton for these. We will also get a large error message, since there is no actual implementation:
user@host$ acton build
... some large error message
Ignore the error and instead check the content of out/types/zlib.c and we will find the C functions we need, commented out:
#include "rts/common.h"
#include "out/types/zlib.h"
#include "src/zlib.ext.c"
B_bytes zlibQ_compress (B_bytes data);
/*
B_bytes zlibQ_compress (B_bytes data) {
    // NotImplemented
}
*/
B_bytes zlibQ_decompress (B_bytes data);
/*
B_bytes zlibQ_decompress (B_bytes data) {
    // NotImplemented
}
*/
int zlibQ_done$ = 0;
void zlibQ___init__ () {
    if (zlibQ_done$) return;
    zlibQ_done$ = 1;
    zlibQ___ext_init__ ();
}
Copy the commented-out skeleton into our own src/zlib.ext.c. To get something that compiles, let's just have the functions return the input data. Since both input and output are bytes, this should compile (and work at run time).
B_bytes zlibQ_compress (B_bytes data) {
    return data;
}

B_bytes zlibQ_decompress (B_bytes data) {
    return data;
}
user@host:~/acton-zlib$ acton build
Building project in /Users/user/acton-zlib
Compiling zlib.act for release
Finished compilation in 0.005 s
Compiling test_zlib.act for release
Finished compilation in 0.019 s
Final compilation step
user@host:~/acton-zlib$
Add a test module
Before we implement the body of the compress and decompress functions, we can write a small test module which will tell us when we've succeeded. We use some pre-known test data (which we could get from another language implementation):
import testing
import zlib
def _test_roundtrip():
for x in range(100):
i = "hello".encode()
c = zlib.compress(i)
d = zlib.decompress(c)
testing.assertEqual(i, d)
def _test_compress():
for x in range(100):
i = "hello".encode()
c = zlib.compress(i)
testing.assertEqual(c, b'x\x9c\xcbH\xcd\xc9\xc9\x07')
def _test_decompress():
for x in range(1000):
c = b'x\x9c\xcbH\xcd\xc9\xc9\x07'
d = zlib.decompress(c)
testing.assertEqual(d, b'hello')
Note how we run a few test iterations to get slightly better timing measurements for performance testing. Run the test with acton test:
user@host:~/acton-zlib$ acton test
Tests - module test_zlib:
decompress: FAIL: 195 runs in 50.728ms
testing.NotEqualError: Expected equal values but they are non-equal. A: b'x\x9c\xcbH\xcd\xc9\xc9\x07' B: b'hello'
compress: FAIL: 197 runs in 50.886ms
testing.NotEqualError: Expected equal values but they are non-equal. A: b'hello' B: b'x\x9c\xcbH\xcd\xc9\xc9\x07'
roundtrip: OK: 226 runs in 50.890ms
2 out of 3 tests failed (26.354s)
user@host:~/acton-zlib$
As expected, the roundtrip test passes, since we just return the input data, while the compress and decompress tests fail.
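Pre-known reference vectors like the ones in the tests can be produced with another zlib binding, for example Python's zlib module. Note that the exact bytes depend on how the binding flushes and frames the stream (Python's output includes trailing checksum bytes), so they may not match a given vector byte for byte:

```python
import zlib

data = b"hello"
c = zlib.compress(data)
# The two-byte zlib header x\x9c indicates deflate at the default window size.
assert c.startswith(b"x\x9c")
# Whatever the framing, the stream must round-trip back to the input.
assert zlib.decompress(c) == data
```

This is a handy sanity check when deciding what bytes your compress test should expect.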
Implement the compress function
Now let's fill in the rest of the owl. Below is the body of the zlibQ_compress function. The bulk of this code is not particularly interesting to this guide as it has more to do with standard C usage of zlib, but a few things are worth noting.
B_bytes zlibQ_compress(B_bytes data) {
    if (data->nbytes == 0) {
        return data;
    }

    // Prepare the zlib stream
    int ret;
    z_stream stream;
    memset(&stream, 0, sizeof(stream));
    ret = deflateInit(&stream, Z_DEFAULT_COMPRESSION);
    if (ret != Z_OK) {
        $RAISE((B_BaseException)$NEW(B_ValueError, $FORMAT("Unable to compress data, init error: %d", ret)));
    }

    // Set the input data
    stream.avail_in = data->nbytes;
    stream.next_in = (Bytef*)data->str;

    // Allocate the output buffer using Acton's malloc
    size_t output_size = deflateBound(&stream, data->nbytes);
    Bytef* output_buffer = (Bytef*)acton_malloc_atomic(output_size);
    stream.avail_out = output_size;
    stream.next_out = output_buffer;

    // Perform the deflate operation
    ret = deflate(&stream, Z_FINISH);
    if (ret != Z_STREAM_END) {
        $RAISE((B_BaseException)$NEW(B_ValueError, $FORMAT("Unable to compress data, error: %d", ret)));
    }

    // Clean up
    deflateEnd(&stream);
    return actBytesFromCStringNoCopy(output_buffer);
}
Memory management is always top of mind when writing C, as is the case here. We can allocate memory via the Acton GC-heap malloc or via plain malloc() (the non-GC heap, to be explicit). Since zlibQ_compress is pure, no state leaks out of the function other than via its return value. All return values must be allocated on the Acton GC heap, so we know we must use acton_malloc for any value that we return. Other local variables within the function can use classic malloc, as long as we make sure to explicitly free them. For class or actor methods, any allocation for class or actor attributes must be performed using the Acton GC malloc, since there is no destructor or similar place where a free could be inserted, so using classic malloc would be bound to leak.

Also note that in this particular case, we know that the returned bytes value will not contain any pointers, so by using acton_malloc_atomic we get a chunk of memory that the GC will not scan internally, which saves a bit of time and thus improves GC performance. If we allocate structs that do carry pointers, they must use the normal acton_malloc().
actBytesFromCStringNoCopy(output_buffer) takes the buffer (already allocated via acton_malloc_atomic()) and wraps it up as a boxed value of the type B_bytes that we return.
Also note how we convert Zlib errors to Acton exceptions where necessary.
Running the test, the compress test now passes while roundtrip has stopped working (since decompress is not implemented yet):
user@host:~/acton-zlib$ acton test
Tests - module test_zlib:
decompress: FAIL: 158 runs in 50.175ms
testing.NotEqualError: Expected equal values but they are non-equal. A: b'x\x9c\xcbH\xcd\xc9\xc9\x07' B: b'hello'
compress: OK: 167 runs in 50.225ms
roundtrip: FAIL: 147 runs in 50.266ms
testing.NotEqualError: Expected equal values but they are non-equal. A: b'hello' B: b'x\x9c\xcbH\xcd\xc9\xc9\x07'
2 out of 3 tests failed (0.941s)
user@host:~/acton-zlib$
Implement the decompress function
Much like the compress function, the decompress function mostly relates to how zlib itself and its interface works. We use the same wrappers and transform errors to exceptions.
B_bytes zlibQ_decompress(B_bytes data) {
    if (data->nbytes == 0) {
        return data;
    }

    // Prepare the zlib stream
    int ret;
    z_stream stream;
    memset(&stream, 0, sizeof(stream));
    ret = inflateInit(&stream);
    if (ret != Z_OK) {
        $RAISE((B_BaseException)$NEW(B_ValueError, $FORMAT("Unable to decompress data, init error: %d", ret)));
    }

    // Set the input data
    stream.avail_in = data->nbytes;
    stream.next_in = (Bytef*)data->str;

    // Allocate the output buffer using Acton's malloc
    size_t output_size = 2 * data->nbytes; // Initial output buffer size
    Bytef* output_buffer = (Bytef*)acton_malloc_atomic(output_size);
    memset(output_buffer, 0, output_size);
    stream.avail_out = output_size;
    stream.next_out = output_buffer;

    // Perform the inflate operation, increasing the output buffer size if needed
    do {
        ret = inflate(&stream, Z_NO_FLUSH);
        if (ret == Z_BUF_ERROR) {
            // Increase the output buffer size and continue decompressing
            size_t new_output_size = output_size * 2;
            output_buffer = (Bytef*)acton_realloc(output_buffer, new_output_size);
            stream.avail_out = new_output_size - stream.total_out;
            stream.next_out = output_buffer + stream.total_out;
            output_size = new_output_size;
        } else if (ret != Z_OK && ret != Z_STREAM_END) {
            $RAISE((B_BaseException)$NEW(B_ValueError, $FORMAT("Unable to decompress data, error: %d", ret)));
        }
    } while (ret == Z_BUF_ERROR);

    // Clean up
    inflateEnd(&stream);
    return actBytesFromCStringNoCopy(output_buffer);
}
Final test
user@host:~/acton-zlib$ acton test
Tests - module test_zlib:
decompress: OK: 42 runs in 51.065ms
compress: OK: 25 runs in 50.032ms
roundtrip: OK: 24 runs in 50.053ms
All 3 tests passed (0.738s)
user@host:~/acton-zlib$
And with that, we're done! A simple wrapper around zlib, which is also available on GitHub if you want to study it further.
Run Time System
The Acton Run Time System (RTS) sets up the environment in which an Acton program runs. It bootstraps the root actor, and its worker threads carry out the actual execution of actor continuations. The RTS handles scheduling of actors and the timer queue, and all I/O is handled by modules in the standard library in conjunction with the RTS.
Arguments
It is possible to configure the RTS through a number of arguments. All arguments to the RTS start with --rts-. Use --rts-help to see a list of all arguments:
$ acton examples/helloworld.act
Building file examples/helloworld.act
Compiling helloworld.act for release
Finished compilation in 0.012 s
Final compilation step
Finished final compilation step in 0.198 s
$ examples/helloworld --rts-help
The Acton RTS reads and consumes the following options and arguments. All
other parameters are passed verbatim to the Acton application. Option
arguments can be passed either with --rts-option=ARG or --rts-option ARG
--rts-bt-dbg Interactively debug on SIGILL / SIGSEGV
--rts-debug RTS debug, requires program to be compiled with --optimize Debug
--rts-ddb-host=HOST DDB hostname
--rts-ddb-port=PORT DDB port [32000]
--rts-ddb-replication=FACTOR DDB replication factor [3]
--rts-node-id=ID RTS node ID
--rts-rack-id=RACK RTS rack ID
--rts-dc-id=DATACENTER RTS datacenter ID
--rts-host=RTSHOST RTS hostname
--rts-help Show this help
--rts-mon-log-path=PATH Path to RTS mon stats log
--rts-mon-log-period=PERIOD Periodicity of writing RTS mon stats log entry
--rts-mon-on-exit Print RTS mon stats to stdout on exit
--rts-mon-socket-path=PATH Path to unix socket to expose RTS mon stats
--rts-no-bt Disable automatic backtrace
--rts-log-path=PATH Path to RTS log
--rts-log-stderr Log to stderr in addition to log file
--rts-verbose Enable verbose RTS output
--rts-wthreads=COUNT Number of worker threads [#CPU cores]
$
Worker threads
Per default, the RTS starts as many worker threads as there are CPU threads available, but at least 4. This is optimized for server style workloads, where it is presumed that the Acton program is the sole program consuming considerable resources. When 4 or more CPU threads are available, the worker threads are pinned to their respective CPU threads.
It is possible to specify the number of worker threads with --rts-wthreads=COUNT.
Actor method continuations run to completion, which is why it is wise not to set this value too low. Per default, a minimum of 4 worker threads are started even when fewer CPU threads are available, which means the operating system will switch between the threads, inducing context switching overhead.
Interactive debugging on crashes
If an Acton program crashes, --rts-bt-dbg is a convenient option for launching an interactive debugger. It is triggered on SIGILL / SIGSEGV and launches an interactive GDB session that allows for debugging the running program.
Acton programs do not normally crash with SIGILL / SIGSEGV, but it is possible, either due to bugs in the compiler / RTS / builtins or, more likely, in third party libraries that use the C FFI, like those for TLS, SSH, zlib or similar, where a bug in the library triggers a crash at the C level.