Develop & Test Logics

This guide walks you through the full logic development lifecycle. You'll install Cocolang, initialize a Coco module, and run tests with inline compiler tests and CocoLab so you can catch issues before deployment. After that, head to the next section to learn about deployment with the JS SDK.

Prerequisites

Before you start, make sure you are familiar with the basics of Logics.

Install Cocolang

Install the Coco CLI by following the official Cocolang install guide.

After installation, verify that the compiler is available:

coco version

You should see the installed Coco and PISA versions in the output.

Install the VS Code Extension

Install the official Cocolang VS Code extension from the marketplace.

This extension provides syntax highlighting and editor support while writing .coco files.

Initialize a Coco Module

Before running compile or test commands, initialize the current folder as a Coco module:

coco nut init total

This creates the coco.nut module configuration used by coco compile and coco test.

Once the module is initialized, you can compile it with:

coco compile

How Coco Tests Work

The inline test syntax (// < / // >) is a lightweight debugging tool rather than a full-featured test framework. A more complete testing system (closer to what Go or Rust provide) is planned for the future. For now, these comment pairs give you a fast way to validate endpoints during development.

Each test is written as a pair of comments:

  • // < ... specifies the command to run
  • // > ... specifies the expected output

For example:

// < invoke TEST.AddOne()
// > new_balance:11

This tells the test runner to invoke AddOne() and verify that the returned value is new_balance: 11.

Rules for Inline Tests

  • every // < line must have a matching // > line
  • the expected output line can be empty if the endpoint does not return a value
  • test pairs can appear anywhere in the file
  • for readability, test pairs are usually placed immediately above the endpoint they validate
  • tests are executed against the module under the temporary test alias TEST
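These rules are easy to check mechanically before running the compiler. As a rough illustration (this helper is hypothetical and not part of the Coco toolchain), a short Python script can verify that every // < command line is immediately followed by its // > line, matching the layout used throughout this guide:

```python
def check_test_pairs(source: str) -> list[str]:
    """Return a list of problems found in inline // < and // > test pairs.

    Illustrative only -- not part of the Coco toolchain. Assumes each
    '// <' command line is immediately followed by its '// >' expected
    line, as in the examples in this guide.
    """
    lines = source.splitlines()
    problems = []
    for i, line in enumerate(lines):
        if line.strip().startswith("// <"):
            # The very next line must be the expected-output marker.
            if i + 1 >= len(lines) or not lines[i + 1].strip().startswith("// >"):
                problems.append(f"line {i + 1}: '// <' without a matching '// >'")
    return problems

sample = """\
// < invoke TEST.AddOne()
// > new_balance:11
endpoint dynamic AddOne() -> (new_balance U64):
"""
print(check_test_pairs(sample))  # []
```

Running this over a module before coco test catches the most common mistake: a command line whose expected-output line was forgotten.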

Common Test Patterns

Instead of looking at one large file, it is easier to learn the test syntax through a few small examples.

1. Deploy and Enlist Tests

state logic:
    supply U256

state actor:
    balance U64

// < deploy TEST.Init()
// >
endpoint deploy Init():
    mutate 0 -> total.Logic.supply

// < enlist TEST.InitActor()
// >
endpoint enlist InitActor():
    mutate 10 -> total.Sender.balance

What this shows:

  • deploy tests logic-level initialization
  • enlist tests actor-level initialization for the current sender
  • the // > line is empty because these endpoints do not return any values

2. Invoke Test with an Expected Return Value

// < invoke TEST.TstMax(a: 5, b:3)
// > max: 5
endpoint TstMax(a, b U64) -> (max U64):
    max = (max) <- max(a:a, b:b)

function max(a, b U64) -> (max U64):
    if a >= b:
        return (max: a)
    max = b

What this shows:

  • invoke calls an endpoint and checks its returned value
  • // > max: 5 means the test passes only if the endpoint returns max: 5
  • the local helper max() is tested through TstMax() because local functions are not invoked directly by the test runner
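For readers coming from conventional languages, the control flow of the max helper corresponds to this Python sketch (Coco's named return value max is modeled here as an ordinary return value; the sketch only mirrors the logic, not Coco's semantics):

```python
def coco_max(a: int, b: int) -> int:
    # Mirrors the Coco helper: early return when a >= b,
    # otherwise fall through and return b (Coco assigns the
    # named return instead of returning explicitly).
    if a >= b:
        return a
    return b

print(coco_max(5, 3))  # 5
```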

3. Invoke Test that Mutates Actor State

// < invoke TEST.AddOne()
// > new_balance:11
endpoint dynamic AddOne() -> (new_balance U64):
    mutate balance <- total.Sender.balance:
        balance += 1
        yield new_balance balance

What this shows:

  • the endpoint reads the sender's actor state
  • it updates that state with mutate
  • it returns the new value so the test runner can compare it against // > new_balance:11
  • this test assumes the sender's balance was initialized earlier, such as through InitActor()

Run Tests with the Coco Compiler

Run the compiler-driven test suite from the module root with:

coco test --debug_pisa

The --debug_pisa flag enables the PISA debug runtime that powers the inline test executor. It is required for running // < / // > test pairs.

With the examples above, the output looks like this:

INFO - Test passed: deploy TEST.Init()
INFO - Test passed: enlist TEST.InitActor()
INFO - Test passed: invoke TEST.TstMax(a: 5, b:3) => max: 5
INFO - Test passed: invoke TEST.AddOne() => new_balance: 11
INFO - 4 / 4 tests passed, 0 failed

Reading the Output

  • Test passed: deploy ... means the deploy endpoint completed successfully
  • Test passed: enlist ... means actor initialization completed successfully
  • Test passed: invoke ... => ... means the endpoint was invoked and its returned values matched the expected // > line
  • the final summary shows how many test pairs passed or failed
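If you run coco test in CI, you may want to parse that final summary line programmatically. Here is a hypothetical Python sketch, assuming the "N / M tests passed, K failed" format shown above (the exact log wording may differ between Coco versions):

```python
import re

def parse_summary(log: str) -> tuple[int, int]:
    """Extract (passed, failed) counts from the final summary line.

    Assumes the 'N / M tests passed, K failed' format shown in this
    guide; the exact wording may vary between Coco versions.
    """
    m = re.search(r"(\d+) / (\d+) tests passed, (\d+) failed", log)
    if not m:
        raise ValueError("no summary line found")
    return int(m.group(1)), int(m.group(3))

log = "INFO - 4 / 4 tests passed, 0 failed"
print(parse_summary(log))  # (4, 0)
```

A nonzero second element (or a raised ValueError when the run crashed before printing a summary) can then fail the CI job.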

If You Want to Try It Out

Use this complete sample:

coco total

state logic:
    supply U256

state actor:
    balance U64

// < deploy TEST.Init()
// >
endpoint deploy Init():
    mutate 0 -> total.Logic.supply

// < enlist TEST.InitActor()
// >
endpoint enlist InitActor():
    mutate 10 -> total.Sender.balance

// < invoke TEST.TstMax(a: 5, b:3)
// > max: 5
endpoint TstMax(a, b U64) -> (max U64):
    max = (max) <- max(a:a, b:b)

function max(a, b U64) -> (max U64):
    if a >= b:
        return (max: a)
    max = b

// < invoke TEST.AddOne()
// > new_balance:11
endpoint dynamic AddOne() -> (new_balance U64):
    mutate balance <- total.Sender.balance:
        balance += 1
        yield new_balance balance

Run it with:

coco nut init total
coco compile
coco test --debug_pisa

Testing with CocoLab

CocoLab is the interactive REPL for exploring logic behavior beyond what inline tests cover. While inline tests are good for automated pass/fail checks, CocoLab lets you deploy a logic, call endpoints in any order, inspect state between calls, and experiment freely. You access it through the Coco CLI with coco lab.

There are two layers: the CLI commands that start or configure a session, and the REPL commands you type once the lab is open.

Starting a Session

The quickest way to get a working session is:

coco lab init

This auto-runs three setup steps for you: it registers a default_user, sets that user as the default sender, and compiles the current module. After it finishes, you're dropped into the REPL ready to deploy and invoke.

For a fully manual session where you control each step yourself, use coco lab start instead. This opens the REPL without any automatic setup, so you'll need to register a user, set the sender, and compile the module yourself (see the walkthrough below).

A Simple Walkthrough

Once the lab is open, here is a typical session broken down step by step.

Register a user and set it as the default sender:

register default_user
set default.sender default_user

register creates a user identity inside the lab session. set default.sender makes that user the implicit caller for all subsequent commands, so you don't have to specify as default_user on every invocation.

Compile the module:

compile total

This loads and compiles the total module into the lab so its endpoints become available for deploy, enlist, and invoke commands.

Deploy the logic:

deploy total.Init()

Runs the deploy endpoint. This initializes logic-level persistent state. In the total example, Init() sets total.Logic.supply to 0.

Enlist the sender as an actor:

enlist total.InitActor()

Runs the enlist endpoint for the current sender. This initializes actor-level state. In the total example, InitActor() sets total.Sender.balance to 10.

Invoke an endpoint:

invoke total.TstMax(a: 5, b: 3)

Calls the TstMax endpoint with the given arguments and prints the returned value. You should see max: 5 in the output.

Invoke an endpoint that mutates actor state:

invoke total.AddOne()

Calls AddOne(), which reads the sender's balance, increments it by 1, and returns the new value. Since InitActor set the balance to 10, you should see new_balance: 11.

Inspect actor state directly:

observe total.Sender.balance

Reads the current value of balance from the sender's actor state on the total logic. This confirms the mutation from AddOne() persisted. The observe command is one of the key advantages of CocoLab over inline tests — you can inspect any piece of logic or actor state at any point during your session.

Exit the lab:

exit

Scripted Sessions

You can define reusable command sequences in coco.nut under [lab.scripts], alongside any [lab.config] connection settings:

[lab.config.default]
url = "http://127.0.0.1:6060"
env = "main"

[lab.scripts]
smoke-test = [
    "register default_user",
    "set default.sender default_user",
    "compile total",
    "deploy total.Init()",
    "enlist total.InitActor()",
    "invoke total.TstMax(a: 5, b: 3)",
    "invoke total.AddOne()",
    "observe total.Sender.balance",
]

Then run it with:

coco lab run smoke-test

Useful flags for coco lab run:

  • --no-exit (-x) — stay in the REPL after the script finishes, so you can continue exploring from the prepared state
  • --suppress (-s) — suppress output while the script runs (useful for long setup scripts where you only care about the final state)

Other Useful REPL Commands

Beyond the core workflow above, CocoLab has several commands that help with debugging and inspection:

  • users — list all registered users in the session
  • logics — list all compiled logics
  • get default.sender — show the current default sender
  • wipe logics — remove all loaded logics (also: wipe users, wipe default.sender)
  • get events.RegisterEvent — query emitted events by name (supports filters like from <logic>, on <address>, with {<topics>})
  • storagekey <slot> — compute storage key hashes (supports idx(...), fld(...), key(...) for arrays, fields, and map keys)
  • invoke <Logic>.<Endpoint>() as <user> — override the sender for a single call
  • invoke <Logic>.<Endpoint>() with alice/write, bob/read — add participants with access levels

Suggested Workflow

  1. Write or update the .coco module
  2. Initialize the module with coco nut init <module-name> if needed
  3. Run coco compile to check for syntax and type errors
  4. Add or update // < and // > test pairs for your endpoints
  5. Run coco test --debug_pisa for automated validation
  6. Use coco lab init or coco lab run <script> when you need to inspect state, test calling sequences, or explore behavior interactively

Next Steps

Once your logic compiles and the local tests pass, continue to the next section to deploy it on the network with the JS SDK.