# Scheduling jobs in Python

In the other tutorials, we've looked at polling an HTTP server, building a Docker container, reading from a serial port, and publishing to the history database.

All these scripts used an interval of a few seconds at most. Sometimes that's far too often. For example: Brewfather requires a minimum interval of 15 minutes.

Simply adding a sleep() call of 15 minutes is an unreliable solution. If your service stops or restarts, the interval is reset, leading to very short or very long gaps between updates.

A more practical solution is to use the schedule package.

# Source code

On your Pi, create a new directory `scheduledscript`. In it, create two files:

script.py

"""
Code example for publishing data to the Brewblox eventbus on a fixed schedule

Dependencies:
- paho-mqtt
- schedule
"""

import json
from random import random
from time import sleep

import schedule
from paho.mqtt import client as mqtt

# 172.17.0.1 is the default IP address for the host running the Docker container
# Change this value if Brewblox is installed on a different computer
HOST = '172.17.0.1'

# 80 is the default port for HTTP, but this can be changed in brewblox env settings.
PORT = 80

# This is a constant value. You never need to change it.
HISTORY_TOPIC = 'brewcast/history'

# The history service is subscribed to all topics starting with 'brewcast/history'
# We can make our topic more specific to help debugging
TOPIC = HISTORY_TOPIC + '/scheduledscript'

# Create a websocket MQTT client
client = mqtt.Client(transport='websockets')
client.ws_set_options(path='/eventbus')


def publish():

    try:
        client.connect_async(host=HOST, port=PORT)
        client.loop_start()

        # https://brewblox.netlify.app/dev/reference/history_events.html
        value = 20 + ((random() - 0.5) * 10)
        message = {
            'key': 'scheduledscript',
            'data': {'value[degC]': value}
        }

        client.publish(TOPIC, json.dumps(message))
        print(f'sent {message}')

    finally:
        client.loop_stop()


# For more examples on how to schedule tasks, see:
# https://github.com/dbader/schedule
schedule.every().minute.at(':05').do(publish)

while True:
    schedule.run_pending()
    sleep(1)

Dockerfile

```dockerfile
FROM python:3.7-slim

COPY script.py /app/script.py

RUN pip3 install paho-mqtt schedule

CMD ["python3", "-u", "/app/script.py"]
```

# Building

Your scheduledscript directory should look like this:

```
.
├── script.py
└── Dockerfile
```

To build the image, run:

```sh
docker build --tag scheduledscript scheduledscript/
```

# Running

To run the built image:

```sh
docker run --rm --tty scheduledscript
```

This is exactly the same as the command in the dockerized script tutorial.

# Testing

It's often useful to listen in on what messages the eventbus actually received.

You can do so using https://mitsuruog.github.io/what-mqtt/. Connect to wss://PI_ADDRESS:HTTPS_PORT/eventbus, and listen to your published topic.

Example address: wss://192.168.2.11:443/eventbus. Example topic: brewcast/history/scheduledscript (default in script.py)

You can also listen to all messages published to the history service by subscribing to brewcast/history/#. Be warned: this gets very spammy if multiple services are publishing data.
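Whichever client you use, the messages on brewcast/history/scheduledscript should be JSON objects in the history event format: a key that identifies the publisher, and a data object whose field names can embed a unit in square brackets. A quick check of an example payload (the 21.3 value is made up):

```python
import json

# Example payload, shaped like the messages published by script.py
payload = '{"key": "scheduledscript", "data": {"value[degC]": 21.3}}'

message = json.loads(payload)

# 'key' identifies the publishing service in the history database
assert message['key'] == 'scheduledscript'

# Field names can include a unit in square brackets
assert 'value[degC]' in message['data']

print(message['data']['value[degC]'])
```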