# Spark service state

Every few seconds, the Spark service publishes its current state. This document serves as a reference for the topic and payload schemas used.

All referenced code snippets use TypeScript interface syntax.

## Spark state events

The main Spark state event is published to the `brewcast/state/<Service ID>` topic. It includes the service state, along with current block settings and values.

```ts
export interface SparkStateEvent {
  key: string; // Service ID
  type: 'Spark.state';
  data: {
    status: SparkStatusDescription;
    blocks: Block[];
    relations: BlockRelation[];
    claims: BlockClaim[];
  } | null;
}
```

`key` is always set to the Service ID (e.g. `spark-one`). This will match the slug in the topic.

`type` is a constant string, used to verify events.

`data` contains a snapshot of service, controller, and block data. When the service shuts down or loses its connection to the eventbus, a message is published where `data` is null.

`data.status` describes the currently connected controller (if any), and whether it is compatible with the service. More on this below.

`data.blocks` lists all blocks on the controller. The interfaces for all block types are documented here.

`data.relations` and `data.claims` contain calculated block metadata. Relations can be used to graph the links between blocks, and claims indicate active control chains.
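A client consuming this topic first needs to verify the `type` field before trusting the payload. The following is a minimal sketch of that check; `parseStateEvent` is an illustrative helper (not part of the Brewblox API), and the interface is restated here in simplified form with block fields left as `unknown`.

```ts
// Simplified restatement of the SparkStateEvent shape described above.
interface SparkStateEvent {
  key: string; // Service ID
  type: 'Spark.state';
  data: {
    status: unknown;
    blocks: unknown[];
    relations: unknown[];
    claims: unknown[];
  } | null;
}

// Hypothetical helper: parse a raw MQTT payload and narrow it to a state event.
function parseStateEvent(payload: string): SparkStateEvent | null {
  const evt = JSON.parse(payload);
  // The constant type field verifies that this is a Spark state event
  if (evt?.type !== 'Spark.state') {
    return null;
  }
  return evt as SparkStateEvent;
}
```

Note that `data` may legitimately be null: a shutdown message still carries a valid `key` and `type`.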

## Spark status

```ts
export interface SparkFirmwareDescription {
  /**
   * Git hash of the built commit in the Firmware repository.
   * Firmware is considered mismatched if the service firmware_version
   * does not match the controller firmware_version.
   */
  firmware_version: string;

  /**
   * Git hash of the built commit in the Protobuf message repository.
   * Firmware is considered incompatible if the service proto_version
   * does not match the controller proto_version.
   */
  proto_version: string;

  /**
   * Date (yyyy-mm-dd) when the firmware repository commit was done.
   */
  firmware_date: string;

  /**
   * Date (yyyy-mm-dd) when the Protobuf repository commit was done.
   */
  proto_date: string;
}

export interface SparkDeviceDescription {
  /**
   * Desired or actual device ID.
   * If the service device ID is empty, this is considered a wildcard.
   */
  device_id: string;
}

export interface SparkServiceDescription {
  /**
   * The unique service ID.
   */
  name: string;

  /**
   * The firmware for which the service was built.
   * This is used to determine compatibility with the controller firmware.
   */
  firmware: SparkFirmwareDescription;

  /**
   * Desired device properties.
   * This is used to determine compatibility with the individual controller.
   */
  device: SparkDeviceDescription;
}

export interface SparkControllerDescription {
  /**
   * System library version.
   * The format will be dependent on the platform.
   */
  system_version: string;

  /**
   * The hardware/software platform used by this controller.
   */
  platform: string;

  /**
   * Stated reason for the most recent controller reset.
   */
  reset_reason: string;

  /**
   * The currently running controller firmware.
   */
  firmware: SparkFirmwareDescription;

  /**
   * Controller device properties.
   * These are specific to the individual controller.
   */
  device: SparkDeviceDescription;
}
```

For the system to function, the service and controller must use the same communication and messaging protocols. The service is built to match a specific firmware version, and checks the actual firmware version during the connection process.

If the expected firmware is incompatible with the actual firmware, the connection process is stopped before blocks can be read or written.
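The rules documented in `SparkFirmwareDescription` can be sketched as follows. This is an illustrative helper, not Brewblox source code: a `proto_version` difference breaks message compatibility, while a `firmware_version` difference alone is only a mismatch.

```ts
// Simplified restatement of the firmware fields relevant to compatibility.
interface FirmwareDescription {
  firmware_version: string; // git hash, firmware repository
  proto_version: string; // git hash, Protobuf message repository
}

type FirmwareError = 'INCOMPATIBLE' | 'MISMATCHED' | null;

// Hypothetical helper comparing service (expected) and controller (actual) firmware.
function checkFirmware(
  service: FirmwareDescription,
  controller: FirmwareDescription,
): FirmwareError {
  if (service.proto_version !== controller.proto_version) {
    // Messaging protocol differs: correctness cannot be guaranteed,
    // and the connection process is stopped.
    return 'INCOMPATIBLE';
  }
  if (service.firmware_version !== controller.firmware_version) {
    // Acceptable difference: the service can still talk to the controller.
    return 'MISMATCHED';
  }
  return null;
}
```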

```ts
export interface SparkStatusDescription {
  /**
   * If false, the service will not automatically
   * discover and connect to a controller.
   *
   * This value is persistent, and stored in the database.
   */
  enabled: boolean;

  /**
   * The configuration values for the service.
   * These are stable during runtime.
   * The expected firmware information is included during the build,
   * and the name and desired device ID are CLI arguments.
   */
  service: SparkServiceDescription;

  /**
   * The configuration values for the controller.
   * These are only available once the controller is connected,
   * and the handshake is performed.
   * They are expected to be constant.
   */
  controller: SparkControllerDescription | null;

  /**
   * The network address of the connected controller.
   * Its format will be dependent on the connection kind.
   * Simulation and TCP addresses will be formatted as `{host}:{port}`,
   * while USB addresses are a path to the TTY device.
   */
  address: string | null;

  /**
   * The transport layer implementation of the active connection.
   */
  connection_kind: 'SIMULATION' | 'USB' | 'TCP' | null;

  /**
   * Before service-to-controller communication can happen,
   * multiple steps must be taken.
   * The state machine will progress linearly,
   * but may revert to DISCONNECTED at any time.
   *
   * - DISCONNECTED: The service is not connected at a transport level.
   *    If enabled, it is continuously trying to discover a valid controller.
   * - CONNECTED: The service is connected at a transport level,
   *    but has not yet received a handshake.
   * - ACKNOWLEDGED: The service has received a handshake.
   *    If the service is compatible with the controller, it will now synchronize.
   *    Otherwise, the process stops here.
   * - SYNCHRONIZED: The connection process is complete,
   *    and block API calls can be made.
   * - UPDATING: The service is still connected to the controller,
   *    but the transport stream has been handed over to the update handler.
   *    Block API calls will immediately return an error.
   */
  connection_status:
    | 'DISCONNECTED'
    | 'CONNECTED'
    | 'ACKNOWLEDGED'
    | 'SYNCHRONIZED'
    | 'UPDATING';

  /**
   * firmware_error is set when the controller firmware description
   * is compared to the service, and there is a mismatch.
   * The error is always cleared if the service becomes disconnected.
   *
   * - INCOMPATIBLE: the firmware expected by the service is different from
   *    the actual controller firmware to a degree that communication
   *    correctness cannot be guaranteed.
   *    The connection process will be stopped.
   * - MISMATCHED: the firmware expected by the service is different from
   *    the actual controller firmware, but the difference is acceptable.
   */
  firmware_error: 'INCOMPATIBLE' | 'MISMATCHED' | null;

  /**
   * identity_error is set when the controller identity is compared to the service,
   * and there is a mismatch.
   * The error is always cleared if the service becomes disconnected.
   *
   * - INCOMPATIBLE: The desired device ID does not match the actual device ID.
   *    This is a hard error: the connection process will be stopped.
   * - WILDCARD_ID: The service does not specify a device ID, and all IDs are valid.
   *    This is a soft error: it is a valid configuration for a system with a single
   *    controller, but will lead to problems if multiple controllers are present.
   */
  identity_error: 'INCOMPATIBLE' | 'WILDCARD_ID' | null;
}
```

Expected and actual firmware properties are both included in the Spark status, along with the current state of the connection process.

First, the service attempts to connect to a controller. This process is described in the Spark connection settings guide.

After the service is connected, the state becomes `CONNECTED`, and the service starts prompting the controller to send a handshake message. This is a plaintext string with firmware and device information. The contents are stored in the `status.controller` field.

Once the handshake is received, the connection state becomes `ACKNOWLEDGED`. If the service is incompatible with the controller, the process stops here. Otherwise, it proceeds to the synchronization step.

During synchronization, the service applies service-side settings to the controller, and loads stored data. Some examples:

- Setting the controller date/time.
- Setting the controller time zone.
- Setting the controller display units (Celsius or Fahrenheit).
- Getting block names from the datastore.
Once this is done, the connection state becomes `SYNCHRONIZED`. The service will now read/write blocks on the controller.
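The handshake steps above form a small linear state machine. The sketch below illustrates this under the assumption that `UPDATING` is only entered when the update handler takes over; `advance` and `canUseBlockApi` are hypothetical helpers, not Brewblox functions.

```ts
type ConnectionStatus =
  | 'DISCONNECTED'
  | 'CONNECTED'
  | 'ACKNOWLEDGED'
  | 'SYNCHRONIZED'
  | 'UPDATING';

// The linear handshake progression. UPDATING is excluded here:
// it is a side branch entered only when a firmware update starts.
const HANDSHAKE_ORDER: ConnectionStatus[] = [
  'DISCONNECTED',
  'CONNECTED',
  'ACKNOWLEDGED',
  'SYNCHRONIZED',
];

// Advance one step in the handshake; SYNCHRONIZED is the terminal state.
// Any state may also revert to DISCONNECTED at any time (not modeled here).
function advance(current: ConnectionStatus): ConnectionStatus {
  const idx = HANDSHAKE_ORDER.indexOf(current);
  if (idx === -1 || idx === HANDSHAKE_ORDER.length - 1) {
    return current;
  }
  return HANDSHAKE_ORDER[idx + 1];
}

// Block API calls are only valid once synchronization is complete.
function canUseBlockApi(status: ConnectionStatus): boolean {
  return status === 'SYNCHRONIZED';
}
```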

## Block relations

```ts
export interface BlockRelation {
  source: string;
  target: string;
  relation: string[];
  claimed?: boolean;
}
```

Relevant links between blocks are analyzed, and published as part of the service state. The relations can be used to map the active control chains. For an example of this, see the relations view on the Spark service page in the UI.

While typically the block that defines the link is considered the relation source, this is not guaranteed. For example, the PID block has a link to its input Setpoint, but for the purposes of the control chain, the Setpoint is considered the source, and the PID the target.
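Because each relation is a directed source/target pair, a client can fold the published list into an adjacency map for graphing. This is a minimal sketch; `toAdjacency` is an illustrative helper, not part of the Brewblox API.

```ts
interface BlockRelation {
  source: string;
  target: string;
  relation: string[];
  claimed?: boolean;
}

// Hypothetical helper: group relations by source block,
// yielding an adjacency list suitable for rendering a graph.
function toAdjacency(relations: BlockRelation[]): Map<string, string[]> {
  const graph = new Map<string, string[]>();
  for (const { source, target } of relations) {
    const targets = graph.get(source) ?? [];
    targets.push(target);
    graph.set(source, targets);
  }
  return graph;
}
```

For the PID example above, the Setpoint would appear as a source with the PID among its targets, regardless of which block defines the link.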

## Claims

```ts
export interface BlockClaim {
  source: string;
  target: string;
  intermediate: string[];
}
```

When one block actively and exclusively controls another block, this is referred to as a claim. Claiming blocks may in turn be claimed by another block (e.g. a Digital Actuator is claimed by a PWM, which is claimed by a PID).

These claims are analyzed, and published as part of the service state. A `BlockClaim` is generated for every combination of claimed block and initial claimer (a claiming block that is not itself claimed).

Given a typical fermentation control scheme with these blocks...

- Heat PID
- Heat PWM
- Heat Actuator
- Cool PID
- Cool PWM
- Cool Actuator
- Spark Pins

...the following `BlockClaim` objects will be generated:

- target=Spark Pins, source=Heat PID, intermediate=[Heat Actuator, Heat PWM]
- target=Heat Actuator, source=Heat PID, intermediate=[Heat PWM]
- target=Heat PWM, source=Heat PID, intermediate=[]
- target=Spark Pins, source=Cool PID, intermediate=[Cool Actuator, Cool PWM]
- target=Cool Actuator, source=Cool PID, intermediate=[Cool PWM]
- target=Cool PWM, source=Cool PID, intermediate=[]
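The derivation can be sketched by walking each claimed block up its chain of claimers until an unclaimed block is reached. This is an illustrative sketch, not Brewblox source, and it assumes a single direct claimer per block (so the shared Spark Pins example is simplified to one chain).

```ts
interface BlockClaim {
  source: string;
  target: string;
  intermediate: string[];
}

// claimers maps each claimed block to the block that directly claims it.
// Hypothetical helper: produce one BlockClaim per claimed block,
// with source set to the initial (unclaimed) claimer.
function buildClaims(claimers: Map<string, string>): BlockClaim[] {
  const claims: BlockClaim[] = [];
  for (const target of claimers.keys()) {
    const intermediate: string[] = [];
    let source = claimers.get(target)!;
    // Walk up the chain; blocks between target and the initial claimer
    // are recorded as intermediates, in order of traversal.
    while (claimers.has(source)) {
      intermediate.push(source);
      source = claimers.get(source)!;
    }
    claims.push({ source, target, intermediate });
  }
  return claims;
}
```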

## Spark patch events

Whenever a single block is changed or removed, a patch event is published. Patch events implicitly modify the last published Spark state event.

Clients are free to ignore patch events, and wait for the next published Spark state event.

Patch events are published to the `brewcast/state/<Service ID>/patch` topic.

```ts
export interface SparkPatchEvent {
  key: string; // Service ID
  type: 'Spark.patch';
  data: {
    changed: Block[];
    deleted: string[];
  };
}
```

`key` is always set to the Service ID (e.g. `spark-one`). This will match the slug in the topic.

`type` is a constant string, used to verify events.

`data.changed` is a list of blocks whose settings were changed since the last state event. Changes to sensor values will not trigger a patch event.

`data.deleted` is a list of block IDs matching blocks that were removed since the last state event.
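A client keeping a local block cache can apply a patch by replacing changed blocks and dropping deleted ones. The sketch below illustrates this; `applyPatch` is a hypothetical helper, and `Block` is reduced to its `id` field (real blocks also carry type and data fields).

```ts
// Simplified block: only the id is needed for patching.
interface Block {
  id: string;
}

interface SparkPatchData {
  changed: Block[];
  deleted: string[];
}

// Hypothetical helper: apply a patch event to a cached block list.
function applyPatch(blocks: Block[], patch: SparkPatchData): Block[] {
  const changedIds = new Set(patch.changed.map((b) => b.id));
  const deletedIds = new Set(patch.deleted);
  return [
    // Keep cached blocks that were neither changed nor deleted
    ...blocks.filter((b) => !changedIds.has(b.id) && !deletedIds.has(b.id)),
    // Insert the updated versions of changed blocks
    ...patch.changed,
  ];
}
```

Clients that prefer simplicity can skip this entirely and wait for the next full state event.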

## Spark update events

During firmware updates, progress is published using state events. This does not apply to firmware updates triggered by `brewblox-ctl flash`.

Update progress events are published to the `brewcast/state/<Service ID>/update` topic.

```ts
export interface SparkUpdateEvent {
  key: string; // Service ID
  type: 'Spark.update';
  data: {
    log: string[];
  };
}
```

`key` is always set to the Service ID (e.g. `spark-one`). This will match the slug in the topic.

`type` is a constant string, used to verify events.

`data.log` contains new progress messages.
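Because each event only carries new messages, a client displaying update progress appends them to a running log. A minimal sketch, with `appendLog` as an illustrative helper:

```ts
interface SparkUpdateEvent {
  key: string; // Service ID
  type: 'Spark.update';
  data: {
    log: string[];
  };
}

// Hypothetical helper: extend the accumulated log with
// the new messages from a single update event.
function appendLog(history: string[], evt: SparkUpdateEvent): string[] {
  return [...history, ...evt.data.log];
}
```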