The file packages/api/src/jobs/index.js is the entry point to a standalone Node application that runs independently of the API server. In a LiteFarm deployment, this "jobs scheduler" runs as a native Node process (no Docker container). It monitors a local Redis-based queue for incoming certification export jobs. These jobs are initiated by the user in the frontend, causing the API to extract the necessary data and create the queue entries.
The scheduler takes job data from the queue and generates several Excel spreadsheets in the format needed by certifying bodies. These spreadsheets, along with relevant documents that the user has uploaded to storage buckets, are packaged into a single .zip file, and the user receives an email with a link to download this archive.
Running the export server locally
For testing the documents generated by the export server, it may be useful to run the server on your local machine. This can be accomplished with some tweaks to configuration and code. Below are instructions for running the export server using local instances of MinIO + Redis installed with Homebrew.
For the TL;DR version of this, I recommend PR #1929; note, however, that it assumes you will be using the real LiteFarm DigitalOcean Spaces credentials (instead of MinIO).
Install and Configure Redis
The job scheduler (set up using Bull) relies on a Redis database to hold and manage the job queue. The easiest way to install Redis locally is through Homebrew. Full instructions are in the Redis installation docs, but in brief, you should only need:
brew install redis
To start the Redis service:
brew services start redis
To check if the service is running:
brew services info redis
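Beyond brew services, you can confirm that the server actually answers (a quick check, run before any password is set):
redis-cli ping   # should reply PONG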
Connecting to Redis
Once installed and running, you can interact with your Redis database using redis-cli or with a GUI client. I have been using the graphical client RedisInsight, which is put out by Redis Enterprise (but is free!).
The only Redis configuration necessary for the export server is adding a password ("test") to the default Redis instance, which can be done in RedisInsight using the edit icon on the database's configuration screen.
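If you prefer to skip the GUI for this step, the same can be done with redis-cli; a minimal sketch, assuming the password "test" used throughout this guide:
redis-cli CONFIG SET requirepass test
redis-cli -a test ping   # should answer PONG once the password is in place
Note that CONFIG SET only lasts until the Redis service restarts; add requirepass to your redis.conf (or run CONFIG REWRITE) if you want it to persist.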
Generally, you won’t need to work with the Redis database, but you can sometimes get informative messages from the queue about why jobs fail; the keys of type HASH are the ones containing that information.
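For example, with redis-cli (Bull namespaces its keys with a bull: prefix; the exact queue and job names depend on the scheduler code):
redis-cli -a test KEYS 'bull:*'
redis-cli -a test HGETALL 'bull:<queue name>:<job id>'   # failed jobs typically include failedReason and stacktrace fields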
Install and Configure MinIO
MinIO will be our free & local AWS S3/DigitalOcean Spaces replacement.
You can install it using Homebrew as well, following MinIO's install guide. This tutorial, linked by Artūrs, is also very helpful: Connect Node.js to MinIO with TLS using AWS S3.
In brief, to install:
brew install minio/stable/minio
Next, create a folder in your user directory (or wherever you prefer) to hold the MinIO data, e.g. ~/data.
Create the environment variable file, leaving the server URL commented out. You can save this file anywhere, because you will point to it when starting up the server. I happened to put mine in the same directory as the MinIO data, e.g. ~/data/.minio:
# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD sets the root account for the MinIO server.
# This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment.
# Omit to use the default values 'minioadmin:minioadmin'.
# MinIO recommends setting non-default values as a best practice, regardless of environment
MINIO_ROOT_USER=myminioadmin
MINIO_ROOT_PASSWORD=minio-secret-key-change-me

# MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.
MINIO_VOLUMES="mnt/data"

# MINIO_SERVER_URL sets the hostname of the local machine for use with the MinIO Server
# MinIO assumes your network control plane can correctly resolve this hostname to the local machine
#MINIO_SERVER_URL="http://minio.example.net"

Set MINIO_VOLUMES to the data folder you created above (e.g. the full path to ~/data).
Then point to your environment file when starting up (from within ~/data, or whatever folder you previously created for this):
export MINIO_CONFIG_ENV_FILE=~/data/.minio
minio server --console-address :9090
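To confirm the server came up, you can hit MinIO's liveness health endpoint on the S3 port (9000 by default):
curl -I http://localhost:9000/minio/health/live   # expect an HTTP 200 response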
Just like AWS S3, MinIO lets you set up a bucket, generate access keys, and configure access rules from a graphical console… and that console can be accessed right at http://localhost:9090 by logging in with the user and password set in the file above. Very cool!
In the MinIO console:
1. Create a bucket (you will put its name into the API .env file).
2. Set its Access Policy to “public” (under Buckets > Bucket Name).
3. Create an Access Key and record the key and secret key for use in the next step.
Connecting MinIO to the LiteFarm API
For running the export server, the only necessary changes to your API .env file are adding your bucket name and the MinIO endpoint.
In packages/api/.env you will have to add one new environment variable:
# The default MinIO port
MINIO_ENDPOINT=http://localhost:9000
And change the value of two environment variables that should already exist in your .env:
# Set both of these to your MinIO bucket name
PRIVATE_BUCKET_NAME=<MinIO bucket name here>
PUBLIC_BUCKET_NAME=<MinIO bucket name here>
The S3 configurations in digitalOceanSpaces.js (access key, secret access key) are not used by the export server, which instead spawns a Node.js child process that runs the aws-cli. (Note: you may have to install the aws-cli first.)
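If you don't already have it, the aws-cli can also be installed with Homebrew (this is the standard formula name; adjust if your setup differs):
brew install awscli
aws --version   # confirm the install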
Set your credentials with your MinIO access key + secret directly in the terminal using:
aws configure
And check them with:
aws configure list
Make sure that the region name is either removed from ~/.aws/config or configured to match in your MinIO admin panel.
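Once configured, it is worth confirming that the keys actually work against your local MinIO before involving the export server. A quick sketch, assuming the default MinIO S3 port (9000) and the bucket you created earlier:
aws --endpoint-url http://localhost:9000 s3 ls                                 # list all buckets
aws --endpoint-url http://localhost:9000 s3 ls s3://<MinIO bucket name here>   # empty output is fine for a new bucket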
Set up the folder structure
Add an exports/ directory to LiteFarm packages/api (it is already gitignored).
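From the repository root, that is just:
mkdir -p packages/api/exports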
Code branch
Some hardcoded DigitalOcean Spaces URLs need to be refactored out of both the frontend and the backend to make document upload + download (via the email link) work.
These changes have already been done on this branch on GitHub: https://github.com/LiteFarmOrg/LiteFarm/tree/minio
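To use it, fetch and check out the branch; this sketch assumes the LiteFarmOrg repository is one of your remotes (replace <remote> with its name, e.g. origin or upstream):
git fetch <remote> minio
git checkout -b minio <remote>/minio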
Update the frontend .env (for download link only)
So that the email link actually leads to a successful download, you will want to add two variables to your frontend .env file.
In packages/webapp/.env (these are both new variables):
VITE_DEV_BUCKET_NAME=<MinIO bucket name here>
VITE_DEV_ENDPOINT=localhost:9000
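Since the email link depends on these values, you can sanity-check the bucket's public access policy once a document has been uploaded; a sketch assuming MinIO's default path-style URLs (the object key below is just a placeholder):
curl -I "http://localhost:9000/<MinIO bucket name here>/<object key>"   # expect HTTP 200 for a public object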
(Finally!) Running the export server
Have the normal LiteFarm backend already started in a separate terminal window, then run the export server in packages/api using:
npm run scheduler
Make sure that your frontend is also running at the same time, as some parts of the document export will reference it.
And that's it :)