Laravel Cloud Hosting: The Complete Production Setup Guide
PHP 8.3, Composer, queue workers, and Redis — the complete Laravel production infrastructure.
Laravel is the most popular PHP framework for a reason — it makes complex application patterns approachable and ships with everything you need for modern web development. Production deployment is where many Laravel developers hit friction for the first time. Queues, scheduled tasks, WebSockets, database migrations, and Redis dependencies all need to be running correctly and simultaneously. This guide covers how to deploy Laravel reliably on cloud infrastructure.
What Makes Laravel Deployment Different
A basic PHP application deploys simply: upload files, PHP executes them on each request. Laravel adds complexity because a production Laravel application is rarely just a web server:
Queues: Background jobs processed by php artisan queue:work. This is a persistent process that must stay running independently of web requests. If it dies, queued jobs accumulate without processing.
Scheduler: Laravel's task scheduler runs via a cron job: * * * * * php artisan schedule:run. Scheduled tasks (sending emails, cleaning databases, generating reports) won't execute without this.
WebSockets: Applications using Laravel Echo and Pusher or Laravel WebSockets run a separate WebSocket server process.
Redis: Session storage, queue backend, and cache driver. Must be available before your app starts.
Database migrations: Schema changes need to run before new code deploys. Order matters — deploying code that expects a new column before running the migration that creates it causes 500 errors.
Getting all of this running correctly in production is the real challenge of Laravel deployment.
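The migration-ordering rule above is easiest to see in code. A minimal migration sketch, assuming a hypothetical nullable `bio` column on a `users` table: ship and run this migration before deploying any code that reads the column.

```php
<?php
// database/migrations/2024_01_01_000000_add_bio_to_users_table.php
// Hypothetical column for illustration — run this *before* the code that uses it.
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            // Nullable, so containers still running old code keep working mid-rollout
            $table->text('bio')->nullable();
        });
    }

    public function down(): void
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn('bio');
        });
    }
};
```

Making new columns nullable (or giving them defaults) is what lets old and new code coexist during the brief window when both containers are serving traffic.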
Preparing Your Laravel Application
Environment Configuration
Laravel's .env file handles local development. In production, set all values as environment variables through your hosting platform — never commit .env to your repository.
Required production environment variables:
APP_NAME="My App"
APP_ENV=production
APP_KEY=base64:your-32-char-key-here
APP_DEBUG=false
APP_URL=https://myapp.com
DB_CONNECTION=mysql
DB_HOST=internal-db-host
DB_PORT=3306
DB_DATABASE=myapp_production
DB_USERNAME=myapp_user
DB_PASSWORD=secure-password-here
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
SESSION_DRIVER=redis
REDIS_HOST=internal-redis-host
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=smtp.mailgun.org
MAIL_PORT=587
MAIL_USERNAME=postmaster@mg.myapp.com
MAIL_PASSWORD=your-mailgun-key
When your database and Redis run on the same platform as your Laravel app, use internal hostnames for DB_HOST and REDIS_HOST. This routes traffic over the private network instead of the public internet: lower latency, no bandwidth charges, and no public exposure for your data stores.
The APP_KEY
APP_KEY is used for all encryption in Laravel: encrypted cookies, sessions, encrypted model attributes. Generate it with:
php artisan key:generate --show
Copy the output (starts with base64:) and store it as your APP_KEY environment variable. Never regenerate this key on a production application — existing encrypted data becomes unreadable.
Optimizing for Production
Run these commands as part of every deployment:
# Cache configuration (eliminates .env file parsing on every request)
php artisan config:cache
# Cache routes (eliminates route file parsing on every request)
php artisan route:cache
# Cache views (pre-compiles Blade templates)
php artisan view:cache
# Autoloader optimization (faster class loading)
composer install --optimize-autoloader --no-dev
The config and route cache commands are the most impactful — they reduce request processing time significantly by eliminating repeated filesystem reads. The --no-dev flag on composer removes development-only packages from production.
Important: if you're caching config, environment variables must be set before running config:cache. The cache file bakes in your current env values. Changes to environment variables require running config:cache again.
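A consequence worth spelling out: once config:cache runs, Laravel stops loading .env at runtime, so env() returns null everywhere except inside the config/ files themselves. Keep env() calls confined to config files and read values through config() in application code. The PAYMENT_API_KEY variable below is hypothetical, for illustration only.

```php
<?php
// config/services.php — the only place env() should be called.
// After `php artisan config:cache`, the .env file is no longer loaded,
// and env() returns null anywhere outside config files.
return [
    'payment' => [
        'key' => env('PAYMENT_API_KEY'), // hypothetical variable
    ],
];

// Anywhere in application code, read the cached value instead:
// $key = config('services.payment.key');
```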
Running Migrations Safely
Database migrations should run before the new version of your application code starts serving traffic:
php artisan migrate --force
The --force flag is required in production (Laravel prompts for confirmation in production environments by default, which blocks automated deployments).
Deployment order:
1. New code is built into a container
2. Migration command runs against the production database
3. New container starts serving traffic
4. Old container is stopped
If the migration fails, the deployment stops and the old container continues running. This is the correct behavior — never let new code run against an unprepared database schema.
Laravel Queue Worker Configuration
The queue worker needs to run as a separate persistent process:
php artisan queue:work redis \
--sleep=3 \
--tries=3 \
--timeout=90 \
--max-time=3600 \
--queue=default,notifications,emails
Key flags:
--timeout=90: Jobs running longer than 90 seconds are killed. Set this based on your longest expected job. Without a timeout, stuck jobs block the worker indefinitely.
--tries=3: Failed jobs are retried up to 3 times before being moved to the failed jobs table.
--max-time=3600: The worker restarts after 3600 seconds (1 hour). This ensures memory leaks in long-running workers don't accumulate indefinitely.
--queue=default,notifications,emails: Processes queues in priority order. Jobs on the notifications queue process after the default queue drains.
Deploy the queue worker as a separate service on your cloud platform, pointing at the same application code. It doesn't need to handle HTTP traffic — just set the start command to the queue:work command above.
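The command-line flags set worker-wide defaults; individual job classes can override them. A sketch with a hypothetical report job that legitimately needs more than the 90-second default:

```php
<?php
// app/Jobs/GenerateMonthlyReport.php — hypothetical job for illustration.
// Per-job properties take precedence over the worker's --tries and --timeout flags.
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class GenerateMonthlyReport implements ShouldQueue
{
    use Queueable, InteractsWithQueue;

    public $tries = 1;      // don't retry — a half-built report shouldn't rerun blindly
    public $timeout = 600;  // this job may run for up to ten minutes

    public function handle(): void
    {
        // ... build and store the report ...
    }
}
```

If you raise a job's timeout this way, make sure your queue connection's retry_after value (in config/queue.php) stays larger than the timeout, or the job may be released and processed twice.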
Laravel Horizon for Queue Monitoring
If you're using Redis queues, Laravel Horizon provides a dashboard for queue health:
composer require laravel/horizon
php artisan horizon:install
php artisan horizon:publish
Start command: php artisan horizon
Horizon gives you real-time visibility into:
- Jobs processed per minute
- Queue throughput
- Failed job counts and stack traces
- Job wait times per queue
- Worker pool size and utilization
For production applications where queue processing is business-critical, Horizon is essential. You need to know whether your queue workers are keeping up with demand before your users notice.
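When Horizon runs, it supervises its own worker pool, replacing the standalone queue:work service. Workers are configured in config/horizon.php rather than via command-line flags; the values below are an illustrative sketch, not recommendations.

```php
<?php
// config/horizon.php (excerpt) — per-environment supervisor configuration.
// Numbers are illustrative; tune them to your workload.
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection'   => 'redis',
            'queue'        => ['default', 'notifications', 'emails'],
            'balance'      => 'auto', // shift workers toward the busiest queues
            'minProcesses' => 1,
            'maxProcesses' => 10,
            'tries'        => 3,
            'timeout'      => 90,
        ],
    ],
],
```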
Laravel Scheduler Setup
The scheduler requires a cron entry that runs every minute:
* * * * * cd /app && php artisan schedule:run >> /dev/null 2>&1
On a container platform, this is typically configured as:
- A separate cron-based service running on your platform
- A startup command that launches a cron daemon alongside your app
- Some platforms support "cron jobs" as a first-class service type
Alternative: if your platform doesn't support cron natively, run the scheduler as a persistent process:
php artisan schedule:work
This runs the scheduler in the foreground, checking for due tasks every minute. Less efficient than a real cron job but simpler to configure.
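Either way, the cron entry only triggers the scheduler — the tasks themselves are defined in code. A sketch using the Schedule facade in routes/console.php (Laravel 11+; in earlier versions the same calls live in app/Console/Kernel.php). The reports:generate and emails:digest commands are hypothetical.

```php
<?php
// routes/console.php — scheduled task definitions (command names are hypothetical)
use Illuminate\Support\Facades\Schedule;

Schedule::command('reports:generate')->dailyAt('02:00');
Schedule::command('queue:prune-failed --hours=48')->daily();
Schedule::command('emails:digest')->weeklyOn(1, '08:00'); // Mondays at 08:00
```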
Dockerfile for Laravel
If your platform supports Docker deployments:
FROM php:8.3-fpm-alpine
# Install system dependencies
RUN apk add --no-cache \
nginx \
supervisor \
nodejs \
npm \
curl
# Install PHP extensions
RUN docker-php-ext-install \
pdo_mysql \
opcache \
bcmath \
pcntl
# Install Redis PHP extension (PECL compilation needs build tools on Alpine)
RUN apk add --no-cache --virtual .build-deps $PHPIZE_DEPS \
    && pecl install redis \
    && docker-php-ext-enable redis \
    && apk del .build-deps
# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
WORKDIR /app
# Copy composer files and install dependencies
COPY composer.json composer.lock ./
# --no-scripts: artisan isn't copied yet, and Laravel's post-install scripts call it
RUN composer install --optimize-autoloader --no-dev --no-scripts
# Copy application files, then run the Composer scripts that need them
COPY . .
RUN composer dump-autoload --optimize && php artisan package:discover --ansi
# Build assets
RUN npm ci && npm run build
# Set permissions
RUN chown -R www-data:www-data /app/storage /app/bootstrap/cache
# Copy configuration files
COPY docker/nginx.conf /etc/nginx/nginx.conf
COPY docker/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY docker/php.ini /usr/local/etc/php/conf.d/production.ini
EXPOSE 80
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
Supervisord config to run Nginx, PHP-FPM, and queue worker together:
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
[program:nginx]
command=nginx -g "daemon off;"
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
[program:php-fpm]
command=php-fpm -F
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
[program:queue-worker]
command=php /app/artisan queue:work redis --sleep=3 --tries=3 --timeout=90 --max-time=3600
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
Storage and File Uploads
Laravel's storage directory needs to persist between deployments. Container filesystems are ephemeral — files written to the container disappear when it restarts.
The correct approach for production:
Option 1: S3-compatible object storage
// config/filesystems.php
'default' => env('FILESYSTEM_DISK', 's3'),
's3' => [
'driver' => 's3',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION'),
'bucket' => env('AWS_BUCKET'),
'url' => env('AWS_URL'),
'endpoint' => env('AWS_ENDPOINT'), // for non-AWS S3 providers
],
Store uploaded files in S3 or a compatible service (Cloudflare R2, Backblaze B2, DigitalOcean Spaces). These services generate permanent URLs that survive container restarts.
Option 2: Persistent volume mount
If your platform supports persistent volumes, mount one at /app/storage. The volume persists independently of the container. This is simpler to configure but doesn't work well with horizontal scaling — all containers must mount the same volume.
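With the s3 disk set as the default, application code doesn't change — the same Storage calls work against local disk in development and object storage in production. A sketch of the common operations (the avatar upload is hypothetical):

```php
<?php
// In a controller: stream an upload straight to the default (s3) disk.
// Returns the stored path, e.g. "avatars/aBcD1234.jpg".
use Illuminate\Support\Facades\Storage;

$path = $request->file('avatar')->store('avatars');

// For private files, generate a short-lived signed URL
// (supported by S3-compatible disks):
$url = Storage::temporaryUrl($path, now()->addMinutes(15));
```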
Redis for Sessions and Cache
Running PHP sessions in Redis instead of the filesystem has two advantages: sessions survive container restarts, and multiple app containers can share session state (enabling horizontal scaling).
SESSION_DRIVER=redis
CACHE_DRIVER=redis
REDIS_HOST=your-internal-redis-host
Redis on the same internal network as your Laravel app adds negligible latency to session reads. On platforms where Redis is co-located, session operations complete in under 1ms.
Health Check Endpoint
// routes/web.php
Route::get('/health', function () {
$checks = [
'database' => false,
'redis' => false,
'queue' => false,
];
try {
DB::connection()->getPdo();
$checks['database'] = true;
} catch (Exception $e) {}
try {
Redis::ping();
$checks['redis'] = true;
} catch (Exception $e) {}
try {
// Confirms the queue backend responds; it cannot confirm a worker process is alive
Queue::connection()->size();
$checks['queue'] = true;
} catch (Exception $e) {}
$healthy = !in_array(false, $checks, true);
return response()->json([
'status' => $healthy ? 'healthy' : 'degraded',
'checks' => $checks,
], $healthy ? 200 : 503);
});
Cloud platforms use this endpoint to verify your application is running correctly before routing traffic to a new container. Returning 503 when the database is unavailable prevents traffic from hitting a broken deployment.
The Production Checklist
Before going live:
- [ ] APP_DEBUG=false — never expose stack traces to users
- [ ] APP_KEY set and backed up somewhere safe
- [ ] php artisan config:cache runs on every deployment
- [ ] php artisan route:cache runs on every deployment
- [ ] php artisan migrate --force runs before new code starts serving
- [ ] Queue worker running as a separate persistent service
- [ ] Scheduler configured (cron or schedule:work)
- [ ] Sessions stored in Redis, not filesystem
- [ ] File uploads going to object storage, not local disk
- [ ] Health check endpoint returning 200 for all service checks
- [ ] Logs going to stdout/stderr (not log files)
Laravel in production is reliable when these pieces are in place. The complexity is real, but it's manageable complexity — each piece has a clear purpose and a clear configuration.