Installation

System Installation

This guide follows the real deployment flow used in the deployment folder: source code is stored on GitHub, GitHub Actions packages the release, and the server receives it over SSH. This approach is useful when you want deployment to happen after a normal push instead of a manual upload every time.

Understand these six ideas first

  • GitHub repository: the place that stores your project source code
  • GitHub Actions: the automation that builds and deploys after code is pushed
  • server: the machine that runs the real website
  • .env: backend configuration such as domain, database, mail, cache, and queue
  • .env.admin: frontend configuration for the admin panel
  • SSH key: the credential pair that lets GitHub connect to the server securely without typing a password

What does this installation flow look like?

  1. You push code to the dev or main branch.
  2. GitHub Actions runs the CI/CD workflow.
  3. The workflow creates a compressed artifact with the code and .github/deployment.
  4. GitHub uploads that artifact to the server over SSH.
  5. The server creates a new release folder, links storage, .env, and .env.admin, then runs the install steps.
  6. When everything succeeds, the current symlink moves to the new release and the server keeps the five newest releases.
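At its core, step 6 is an atomic symlink switch. A minimal sketch of the idea, using temporary stand-in paths (the real logic lives in .github/deployment/deploy.sh):

```shell
# Hypothetical illustration of release activation; the real steps live in
# deploy.sh. A temp dir stands in for /var/www/my-project.
base=$(mktemp -d)
mkdir -p "$base/releases/1" "$base/releases/2" "$base/storage"

# Link shared state into the new release before activating it.
ln -s "$base/storage" "$base/releases/2/storage"

# Activate: repoint "current" at the new release in one step, so the web
# server never sees a half-installed release.
ln -sfn "$base/releases/2" "$base/current"

readlink "$base/current"
```

Because the web server only ever serves through current, a failed install leaves the old release active.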

Step 1: Prepare what you need

Before you begin, make sure you have:

  • access to the GitHub repository
  • permission to create repository Secrets and Variables
  • SSH access to the server
  • the website domain and the admin domain
  • database details
  • mail, Redis, and file storage details if the project uses them

On the server, make sure these tools are installed first:

  • PHP in the version expected by the workflow, currently 8.3
  • Composer
  • Node.js and pnpm
  • the acl package so setfacl is available
  • a web server such as nginx or apache
  • a background process manager such as supervisor

Important detail

In the current deployment scripts, pnpm build steps run directly on the server. That applies to the main frontend build and, when an admin directory exists, to the admin build as well. The server therefore needs working Node.js and pnpm.

Step 2: Upload the project to GitHub

If the project does not have a repository yet:

  1. Create a new repository on GitHub.
  2. Upload the project source code into that repository.
  3. Create at least two working branches, usually dev for staging and main for production.

If the repository already exists, verify:

  • dev is the branch used for testing or staging
  • main is the branch used for production
  • the .github folder is present in source control

Step 3: Copy the workflow and deployment scripts into the project

The current project already includes working samples under deployment/build-script. Copy them into the application repository:

  1. Copy deployment/build-script/workflows/ci-cd.yml into .github/workflows/ci-cd.yml.
  2. Copy the full deployment/build-script/deployment folder into .github/deployment/.

The minimum structure should look like this:

.github/
  workflows/
    ci-cd.yml
  deployment/
    prepare.sh
    deploy.sh
    hooks/
      before-activation.sh
      after-activation.sh
      flush-opcache.sh
      set-file-permissions.sh

Step 4: Review the workflow before using it

The sample workflow runs on pushes to dev and main. Review these points before the first deployment:

  • PHP_VERSION is currently 8.3
  • the sample deploy-dev job still contains the placeholder /var/www/[domain_folder]
  • the sample deploy-main job reads DOMAIN_FOLDER

In practice, this means you should update the workflow first:

  • replace the deploy-dev placeholder with a real path
  • or standardize both environments to use repository variables

Simple example:

  • if the project lives in /var/www/my-project
  • then base_directory must be /var/www/my-project
  • do not point it to /var/www/my-project/current

prepare.sh checks this and will stop if you point to the current folder directly.
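The intent of that guard can be sketched like this (a hypothetical version, not copied from prepare.sh):

```shell
# Sketch of the base_directory sanity check described above: refuse paths
# that point at the "current" symlink instead of the base directory.
check_base_directory() {
  case "${1%/}" in
    */current) echo "base_directory must not point at current/" >&2; return 1 ;;
    *)         return 0 ;;
  esac
}

check_base_directory /var/www/my-project
check_base_directory /var/www/my-project/current || echo "rejected"
```

Pointing the workflow at current/ would make the deployment write into the release it is trying to replace, which is why the script refuses it.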

Step 5: Create a deploy user on the server

Use a separate account for automated deployment when possible, for example deploy or githubconnector. Avoid using root unless you absolutely need it.

That user should:

  • have access to the project folder
  • be able to run php, composer, and pnpm
  • belong to the same group as the web server user, commonly www-data or nginx

Example:

sudo usermod -aG www-data deploy

Step 6: Create an SSH key for GitHub deployment

You need a key pair:

  • the private key goes into GitHub Secrets
  • the public key goes into ~/.ssh/authorized_keys for the deploy user on the server

Example command:

ssh-keygen -t ed25519 -C "github-actions-deploy"

Then:

  1. copy the full private key content into the GitHub secret SSH_PRIVATE_KEY
  2. add the public key to the deploy user’s authorized_keys file on the server

Common mistake

Do not paste the public key into SSH_PRIVATE_KEY. The deployment preparation script checks for this and will stop if the value looks like a public key.
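A check like that can be imagined as a simple header test: OpenSSH private keys start with a "BEGIN ... PRIVATE KEY" line, while public keys start with a key type such as ssh-ed25519. A hedged sketch (the real script may test differently):

```shell
# Hypothetical private-vs-public key check based on the OpenSSH file formats.
looks_like_private_key() {
  head -n 1 "$1" | grep -q "BEGIN.*PRIVATE KEY"
}

tmp=$(mktemp -d)
printf '%s\n' 'ssh-ed25519 AAAAC3Nza... github-actions-deploy' > "$tmp/key.pub"
printf '%s\n' '-----BEGIN OPENSSH PRIVATE KEY-----' > "$tmp/key"

looks_like_private_key "$tmp/key"     && echo "looks like a private key"
looks_like_private_key "$tmp/key.pub" || echo "public key pasted by mistake"
```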

Step 7: Generate known hosts

The known hosts entry tells GitHub Actions which server identity to trust during the SSH connection.

Generate it like this:

ssh-keyscan -p 22 -H your-server.com

If the server uses another port, replace 22 with the real one. Store the full output in the GitHub secret SSH_KNOWN_HOSTS.

Step 8: Add GitHub Secrets and Variables

In the repository, go to Settings -> Secrets and variables -> Actions.

Create these Secrets:

  • REMOTE_USER
  • REMOTE_HOST
  • REMOTE_PORT
  • SSH_PRIVATE_KEY
  • SSH_KNOWN_HOSTS

Create these Variables if you follow the sample workflow:

  • DOMAIN_FOLDER

That variable is used to build the base path, for example:

/var/www/${DOMAIN_FOLDER}

Step 9: Prepare the base directory on the server

With the current release-based deployment flow, each project should have one base directory, for example:

/var/www/my-project

Create the basic structure first:

sudo mkdir -p /var/www/my-project/config
sudo mkdir -p /var/www/my-project/storage
sudo chown -R deploy:www-data /var/www/my-project
sudo chmod -R 2775 /var/www/my-project

After the first deployment, the structure will look more like this:

/var/www/my-project
  .env
  .env.admin
  config/
    setting.php
  modules_statuses.json
  storage/
  releases/
    1/
    2/
  current -> /var/www/my-project/releases/2

If you are migrating from an older layout that uses shared/storage and shared/.env, the deployment script can detect that and continue using it.
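That detection can be pictured as a simple fallback: prefer the shared/ paths when they exist, otherwise use the flat base-directory layout shown above. A hypothetical sketch:

```shell
# Sketch of legacy-layout detection (hypothetical; the real logic lives in
# the deployment script). A temp dir stands in for the base directory.
base=$(mktemp -d)
mkdir -p "$base/shared/storage"
touch "$base/shared/.env"

if [ -d "$base/shared/storage" ] && [ -f "$base/shared/.env" ]; then
  storage_path="$base/shared/storage"   # older shared/ layout detected
  env_path="$base/shared/.env"
else
  storage_path="$base/storage"          # current flat layout
  env_path="$base/.env"
fi

echo "using $env_path"
```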

Step 10: Create .env on the server

The .env file must exist in the base directory and must not be empty. If it is empty, deploy.sh will stop.

Example location:

/var/www/my-project/.env
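You can reproduce the non-empty check by hand before pushing; `[ -s file ]` is true only when the file exists and has a size greater than zero, which is the condition deploy.sh enforces (the path below is an example):

```shell
# Verify that .env exists and is non-empty, as deploy.sh requires.
env_file=$(mktemp)            # stands in for /var/www/my-project/.env
echo "APP_ENV=production" > "$env_file"

if [ -s "$env_file" ]; then
  echo ".env looks usable"
else
  echo ".env is missing or empty" >&2
fi
```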

At minimum, you usually need to fill in:

  • application settings: APP_NAME, APP_ENV, APP_KEY, APP_URL, APP_DEBUG
  • domain and session settings: APP_DOMAIN, ASSET_URL, SESSION_DOMAIN, SANCTUM_STATEFUL_DOMAINS
  • locale and timezone: DEFAULT_LOCALE, TIMEZONE
  • database: DB_CONNECTION, DB_HOST, DB_PORT, DB_DATABASE, DB_USERNAME, DB_PASSWORD
  • cache, queue, and session: CACHE_DRIVER, QUEUE_CONNECTION, SESSION_DRIVER, REDIS_*
  • internal auth keys: ISPA_API_AUTH_KEY, ISPA_ADMIN_AUTH_KEY
  • mail: MAIL_* if the project sends mail
  • file storage: AWS_* if the project uses object storage

The variables that most often affect sign-in are:

  • APP_URL
  • SESSION_DOMAIN
  • SANCTUM_STATEFUL_DOMAINS
  • ISPA_ADMIN_AUTH_KEY

If admin runs on a separate subdomain, these values must match the real domains exactly. Wrong values here often cause cookie issues, login failures, or broken API calls.

Step 11: Create .env.admin on the server

This file is for the admin frontend and must also not be empty.

Example location:

/var/www/my-project/.env.admin

The most common variables are:

  • VITE_DOMAIN: the admin domain
  • VITE_BASE_URL: the backend API URL used by admin
  • VITE_PROTOCOL: usually https
  • VITE_NODE_ENV: the runtime environment
  • VITE_DEFAULT_LOCALE: the default admin language
  • VITE_APP_ADMIN_AUTH_KEY: the admin auth key used by frontend
  • VITE_SECURE_COOKIE: should be enabled under HTTPS

Two variables are especially important:

  • VITE_BUILD_VERSION: the deployment script updates this automatically during build
  • VITE_APP_ADMIN_AUTH_KEY: it must stay compatible with the backend admin auth key
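The automatic VITE_BUILD_VERSION bump can be pictured as an in-place rewrite of that one line during the build. The exact mechanism in the deployment script may differ; this sketch shows the idea with sed and a timestamp-based version:

```shell
# Hypothetical sketch of the VITE_BUILD_VERSION update done during the
# admin build; the versioning scheme here is an assumption.
admin_env=$(mktemp)           # stands in for /var/www/my-project/.env.admin
echo "VITE_BUILD_VERSION=1" > "$admin_env"

new_version=$(date +%Y%m%d%H%M%S)
sed -i "s/^VITE_BUILD_VERSION=.*/VITE_BUILD_VERSION=${new_version}/" "$admin_env"

cat "$admin_env"
```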

Step 12: Prepare config/setting.php

If the project uses config/setting.php for persistent project settings, place it here:

/var/www/my-project/config/setting.php

If it does not exist during the first deployment:

  • the script tries to reuse it from the previous release if available
  • otherwise it copies the file from the current release artifact

This is how the system keeps that file stable across deployments.
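The fallback order described above can be sketched like this (hypothetical paths; the real logic lives in the deployment scripts):

```shell
# Sketch of the setting.php reuse logic: keep the base copy if present,
# else reuse the previous release's copy, else take the file shipped in
# the new release artifact.
base=$(mktemp -d)
mkdir -p "$base/config" "$base/releases/1/config" "$base/releases/2/config"
echo "<?php return ['from' => 'artifact'];" > "$base/releases/2/config/setting.php"

if [ ! -f "$base/config/setting.php" ]; then
  if [ -f "$base/releases/1/config/setting.php" ]; then
    cp "$base/releases/1/config/setting.php" "$base/config/setting.php"
  else
    cp "$base/releases/2/config/setting.php" "$base/config/setting.php"
  fi
fi

grep from "$base/config/setting.php"
```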

Step 13: Configure file and directory permissions

This part matters if you want uploads, logs, cache, and queue workers to run without write errors.

According to deployment/build-script/deployment/hooks/set-file-permissions.sh, the system expects:

  • the deploy user and web server user to share a group
  • setfacl to be installed
  • writable directories to use 0775

Useful checks:

sudo apt install acl
id deploy
id www-data

If the deploy user is not in the web server group:

sudo usermod -aG www-data deploy

The hook also checks config/filesystems.php. The local and public disks should use 0775 for public directories. If they do not, the deployment can stop to prevent future upload failures.
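The 0775 expectation itself is easy to verify by hand. A minimal sketch (setfacl usage is omitted here so the example runs without acl installed; the real hook applies ACLs in addition to the mode bits):

```shell
# Sketch of the writable-directory modes the hook expects: 0775 means owner
# and group can write, others can only read and traverse.
storage=$(mktemp -d)          # stands in for the project's storage/ directory
mkdir -p "$storage/logs" "$storage/app/public"
chmod -R 0775 "$storage"

stat -c '%a %n' "$storage/logs"
```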

Step 14: Configure the web server

Your domain should point to the public directory of the active release:

/var/www/my-project/current/public

The deployment folder already contains several .conf examples for nginx, including patterns for:

  • the main website domain
  • admin or frontend domains
  • serving files from /storage
  • HTTPS configuration

The main rule is simple: always point the PHP site to current/public, not to a specific release folder.
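Following that rule, a minimal nginx server block could look like this. This is a sketch only; the .conf samples in the deployment folder are the authoritative reference, and the PHP-FPM socket path is an assumption:

```nginx
# Hypothetical nginx site for the main domain; server_name and the PHP-FPM
# socket are examples. Note that root points at current/public, never at a
# specific release folder.
server {
    listen 80;
    server_name my-project.example.com;
    root /var/www/my-project/current/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        # $realpath_root resolves the "current" symlink, so PHP-FPM and
        # OPcache see the real release path after a switch.
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}
```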

Step 15: Configure queue and schedule workers

For stable CMS operation, background workers are usually required. The deployment folder already includes example queue-*.conf and schedule-*.conf files for supervisor.

What each one does:

  • queue: processes background jobs such as email, sync, and other heavy tasks
  • schedule: runs recurring tasks

In the examples:

  • queue uses artisan queue:work
  • schedule uses artisan schedule:work

After copying the config into supervisor, you normally run:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status

If the project uses a dedicated queue such as gpt_caller, it should have its own worker just like the sample files.
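Modeled on the queue-*.conf samples, a supervisor program for the queue worker might look like this. The program name, paths, user, and flags below are examples, not copied from the samples:

```ini
; Hypothetical supervisor program for the queue worker. Adjust the program
; name, command path, user, and log path to match your project.
[program:my-project-queue]
command=php /var/www/my-project/current/artisan queue:work --sleep=3 --tries=3
user=deploy
numprocs=1
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stdout_logfile=/var/www/my-project/storage/logs/queue-worker.log
```

After each deployment the worker keeps running against the old code until restarted, so the deployment hooks or a manual `supervisorctl restart` should pick up the new release.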

Step 16: Push code and run the first deployment

When the setup above is ready:

  1. commit .github/workflows/ci-cd.yml and .github/deployment/*
  2. push to dev or main
  3. open the Actions tab on GitHub and watch the workflow

If everything is configured correctly, the server will automatically:

  • upload the artifact
  • create a new release directory
  • link storage, .env, .env.admin, and setting.php
  • install Composer dependencies
  • run migrations
  • run module migrations
  • sync CMS permissions and translations
  • build admin with pnpm
  • switch current to the new release
  • flush OPCache
  • keep only the five newest releases
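The "keep only the five newest releases" cleanup can be sketched like this (hypothetical; the real logic lives in the deployment scripts). Because releases are numbered folders, sorting numerically and removing everything except the last five prunes the old ones:

```shell
# Sketch of release pruning: keep the five newest numbered release folders.
releases=$(mktemp -d)         # stands in for /var/www/my-project/releases
for i in 1 2 3 4 5 6 7; do mkdir "$releases/$i"; done

# Sort numerically, drop the last five lines (the newest), remove the rest.
ls "$releases" | sort -n | head -n -5 | while read -r old; do
  rm -rf "$releases/${old:?}"
done

ls "$releases" | sort -n
```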

A detail that can look unusual

After the admin build finishes, the current script deletes the admin directory from the release. That is normal for this deployment setup and does not automatically mean the deployment failed.

Step 17: Check the system after deployment

After the first successful run, verify:

  1. the public website opens correctly
  2. the admin panel opens and allows sign-in
  3. you can create a record or upload a file
  4. the main dashboard and key modules work
  5. storage/logs does not show new critical errors
  6. supervisorctl status shows queue and schedule processes running

Common problems and quick explanations

GitHub cannot connect to the server

The most common causes are:

  • wrong REMOTE_HOST or REMOTE_PORT
  • wrong SSH_PRIVATE_KEY
  • wrong or missing SSH_KNOWN_HOSTS

The workflow says .env or .env.admin is empty

The deployment script stops if these files exist but contain no real values. Fill them in on the server and run the deployment again.

The workflow cannot find Composer

Composer is not installed on the server, or the deploy user cannot access it in the shell environment.

The workflow fails on file permissions

The usual reasons are:

  • acl is not installed
  • the deploy user is not in the same group as the web server user
  • the project directory or storage is not writable enough

The workflow cannot connect to the database

The before-activation.sh hook checks database connectivity before running migrations. Review DB_HOST, DB_PORT, DB_DATABASE, DB_USERNAME, and DB_PASSWORD.

The workflow fails while flushing OPCache

That step uses APP_URL. If APP_URL is wrong, points to the wrong protocol, or does not resolve to the active site, the OPCache flush can fail.

Helpful handoff checklist for non-technical owners

When handing the system over, keep a short checklist:

  • where the GitHub repository lives
  • who can view Actions
  • which user is used for SSH deployment
  • where .env lives on the server
  • where .env.admin lives on the server
  • how to check whether queue and schedule are running
  • how to identify the latest release under releases

That way, even someone who does not work with code every day can still follow the right places to inspect the system.