...

Next create a folder in your user directory (or wherever you prefer) to hold the MinIO data: ~/data.
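
For example, from a terminal:

Code Block
# Create the data directory MinIO will store objects in
mkdir -p ~/data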

Create the environment variable file, leaving the server URL commented out. You can save this file anywhere because you will point to it when starting up the server. I happened to put mine in the same directory as the MinIO data, e.g. at ~/data/.minio

Code Block
languagenone
# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD sets the root account for the MinIO server.
# This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment.
# Omit to use the default values 'minioadmin:minioadmin'.
# MinIO recommends setting non-default values as a best practice, regardless of environment

MINIO_ROOT_USER=myminioadmin
MINIO_ROOT_PASSWORD=minio-secret-key-change-me

# MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.

MINIO_VOLUMES="mnt/data"

# MINIO_SERVER_URL sets the hostname of the local machine for use with the MinIO Server
# MinIO assumes your network control plane can correctly resolve this hostname to the local machine

# MINIO_SERVER_URL="http://minio.example.net"

Then point to your environment file when starting up (from within ~/data or whatever folder you have previously created for this):
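
One way this can look (a sketch only, assuming the env file was saved to ~/data/.minio and that the console should listen on port 9090, matching the URLs used later on this page):

Code Block
# Tell MinIO where the environment file lives, then start the server;
# MINIO_VOLUMES from that file determines where objects are stored
export MINIO_CONFIG_ENV_FILE=~/data/.minio
minio server --console-address ":9090"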

...

Just like AWS S3, in MinIO you can set up a bucket, generate access keys, and configure access rules from the graphical client… but the client can be accessed right at http://localhost:9090 by logging in with the user and password set in the file above. Very cool!

In the MinIO console (a command-line alternative is sketched after this list):

  1. Create a bucket and record its name. (You will put this name into both the API & webapp .env files at a later point.)

  2. Set its Access Policy to “public” (under Buckets > Bucket Name)

  3. Then create an Access Key (screenshot below) and record the key/secret key for use in the next step
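
If you prefer the command line, the bucket and access policy steps can also be done with the MinIO client (mc). This is only a sketch with placeholder names; the console flow above is what this guide assumes:

Code Block
# Register the local server under an alias, using the root credentials from the env file
mc alias set localminio http://localhost:9000 myminioadmin minio-secret-key-change-me

# Create the bucket and make it publicly readable
# (older mc releases use "mc policy set public" instead of "mc anonymous set public")
mc mb localminio/<MinIO bucket name here>
mc anonymous set public localminio/<MinIO bucket name here>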

...

Connecting MinIO to the LiteFarm API

For running the export server, the only necessary changes to your API .env file are adding your bucket name and the MinIO endpoint:

In packages/api/.env you will have to add one new environment variable:

Code Block
# The MinIO endpoint (9000 is the default MinIO API port)
MINIO_ENDPOINT=http://localhost:9000

And change the value of two environment variables that might already exist in your .env (otherwise, please add them):

Code Block
# Set both of these to your MinIO bucket name
PRIVATE_BUCKET_NAME=<MinIO bucket name here>
PUBLIC_BUCKET_NAME=<MinIO bucket name here>

The S3 configurations in digitalOceanSpaces.js (access key, secret access key) are not used by the export server, which instead spawns a Node.js child process that runs the aws-cli. (Note: you may have to install the aws-cli first to complete the next step.)
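
For a sense of what such a call looks like, here is an illustrative aws-cli invocation against a local MinIO endpoint (a sketch only, not LiteFarm's actual command; the bucket and file names are placeholders):

Code Block
# --endpoint-url points the CLI at MinIO instead of AWS or DigitalOcean Spaces
aws --endpoint-url http://localhost:9000 s3 cp ./export.zip s3://<MinIO bucket name here>/export.zip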

Set your aws-cli credentials with your MinIO access key + secret directly in the terminal using

...
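
For reference, setting aws-cli credentials from the terminal typically looks something like this (a sketch; substitute the access key and secret recorded from the MinIO console):

Code Block
# Writes to ~/.aws/credentials under the default profile
aws configure set aws_access_key_id <MinIO access key here>
aws configure set aws_secret_access_key <MinIO secret key here>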

Some hardcoded Digital Ocean Spaces URLs do need to be refactored out of both the frontend and the backend to make document upload + download (upon email link) work.

These changes have already been made on this branch on GitHub: https://github.com/LiteFarmOrg/LiteFarm/tree/minio

Update (April 2023): As of https://github.com/LiteFarmOrg/LiteFarm/pull/2515, these code changes are now merged into integration and live.

Update the frontend .env (for download link only)

So that the email link actually leads to a successful download, you will want to add two variables to your frontend .env file:

In packages/webapp/.env (these are both new variables):

Code Block
VITE_DEV_BUCKET_NAME=<MinIO bucket name here>
VITE_DEV_ENDPOINT=localhost:9000
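
Because the bucket's access policy was set to public, you can sanity-check that an uploaded object is reachable at that endpoint (a sketch; the object name is a placeholder):

Code Block
# Expect an HTTP 200 response if the object exists and the bucket policy is public
curl -I http://localhost:9000/<MinIO bucket name here>/<uploaded file name here>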

(Finally!) Running the export server

Have the normal LiteFarm backend already running in a separate terminal window, then run the export server in packages/api using

...