Connecting a Rails Application to a Read Replica
Feb 12, 2025

Introduction
When working with high-traffic Rails applications, splitting read and write operations between a primary database and read replicas can improve performance and scalability. In our case, the application originally connected to two databases through ApplicationRecord: the primary database and a logging database. Our goal was to split the logging database connection into separate write and read (replica) connections.
To achieve this, we first simulated a replication setup using Docker and then configured our Rails application to connect to the read replica appropriately.
Simulating Replication with Docker
To test replication locally, we created a Docker Compose setup with a primary PostgreSQL instance and a read replica.
Docker Compose Configuration
version: '3.8'

x-postgres-common: &postgres-common
  image: postgres:14-alpine
  user: postgres
  restart: always
  healthcheck:
    test: 'pg_isready -U user --dbname=logs'
    interval: 10s
    timeout: 5s
    retries: 5

services:
  postgres_primary:
    <<: *postgres-common
    ports:
      - 15432:5432
    environment:
      POSTGRES_USER: user
      POSTGRES_DB: logs
      POSTGRES_PASSWORD: password
      POSTGRES_HOST_AUTH_METHOD: "scram-sha-256\nhost replication all 0.0.0.0/0 md5"
      POSTGRES_INITDB_ARGS: "--auth-host=scram-sha-256"
    command: |
      postgres
      -c wal_level=replica
      -c hot_standby=on
      -c max_wal_senders=10
      -c max_replication_slots=10
      -c hot_standby_feedback=on
    volumes:
      - ./00_init.sql:/docker-entrypoint-initdb.d/00_init.sql

  postgres_replica:
    <<: *postgres-common
    ports:
      - 15433:5432
    environment:
      PGUSER: replicator
      PGPASSWORD: replicator_password
    command: |
      bash -c "
      until pg_basebackup --pgdata=/var/lib/postgresql/data -R --slot=replication_slot --host=postgres_primary --port=5432
      do
        echo 'Waiting for primary to connect...'
        sleep 1s
      done
      echo 'Backup done, starting replica...'
      chmod 0700 /var/lib/postgresql/data
      postgres
      "
    depends_on:
      - postgres_primary
This setup ensures that the read replica (postgres_replica) follows the primary database (postgres_primary). We adjusted ports to avoid conflicts with a locally running PostgreSQL instance.
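The primary container also mounts a 00_init.sql script that is not shown here. Based on the replicator credentials and the replication_slot name referenced in the compose file, a minimal sketch of that script could look like this:

-- 00_init.sql (a sketch; the role and slot names are taken from the compose file above)
CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replicator_password';
SELECT pg_create_physical_replication_slot('replication_slot');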
The configuration was inspired by this GitHub repository and this Medium article.
Rails Database Configuration
We modified the database.yml file to include separate read and write connections for the logging database.
config/database.yml
default: &default
  adapter: postgresql
  encoding: utf8
  min_messages: warning
  pool: 5
  timeout: 5000

development:
  primary:
    <<: *default
    database: fakturaonline_development
  audit_log:
    <<: *default
    url: <%= ENV['PRIMARY_LOG_URL'] %>
    migrations_paths: db/logs_migrate
  audit_log_read:
    <<: *default
    url: <%= ENV['READ_LOG_URL'] %>
    replica: true

test:
  primary:
    <<: *default
    database: fakturaonline_test
    url: <%= ENV['DATABASE_URL'] %>
  audit_log:
    <<: *default
    database: fakturaonline_test_logs
    url: <%= ENV['DATABASE_URL'] %>
    migrations_paths: db/logs_migrate
  audit_log_read:
    <<: *default
    database: fakturaonline_test_logs
    url: <%= ENV['DATABASE_URL'] %>
    replica: true

staging:
  primary:
    <<: *default
    url: <%= ENV['DATABASE_URL'] %>
  audit_log:
    <<: *default
    url: <%= ENV.fetch('LOGS_DATABASE_URL', ENV.fetch('DATABASE_URL', nil)) %>
    migrations_paths: db/logs_migrate
  audit_log_read:
    <<: *default
    url: <%= ENV.fetch('LOGS_DATABASE_READ_URL', ENV.fetch('DATABASE_URL', nil)) %>
    replica: true

production:
  primary:
    <<: *default
    pool: <%= ENV.fetch('DB_POOL', ENV.fetch('RAILS_MAX_THREADS', 8)) %>
    url: <%= ENV['DATABASE_URL'] %>
  audit_log:
    <<: *default
    pool: <%= ENV.fetch('DB_POOL', ENV.fetch('RAILS_MAX_THREADS', 8)) %>
    url: <%= ENV['LOGS_DATABASE_URL'] %>
    migrations_paths: db/logs_migrate
  audit_log_read:
    <<: *default
    pool: <%= ENV.fetch('DB_POOL', ENV.fetch('RAILS_MAX_THREADS', 8)) %>
    url: <%= ENV.fetch('LOGS_DATABASE_READ_URL', ENV.fetch('DATABASE_URL', nil)) %>
    replica: true
For local testing with Docker, we used environment variables to point to the correct databases:
development:
  primary:
    <<: *default
    database: fakturaonline_development
  audit_log:
    <<: *default
    url: <%= ENV['PRIMARY_LOG_URL'] %>
    migrations_paths: db/logs_migrate
  audit_log_read:
    <<: *default
    url: <%= ENV['READ_LOG_URL'] %>
    replica: true
Environment Variables
PRIMARY_LOG_URL="postgresql://user:password@127.0.0.1:15432/fakturaonline_logs"
READ_LOG_URL="postgresql://user:password@127.0.0.1:15433/fakturaonline_logs"
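With these variables set, one way to bring the cluster up and check that the replica is actually streaming is to query pg_stat_replication on the primary (a sketch; the container, user, and database names follow the compose file above):

docker compose up -d
docker compose exec postgres_primary psql -U user -d logs -c "SELECT client_addr, state FROM pg_stat_replication;"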
Configuring Rails Models for Read Replicas
We updated our ActiveRecord models to use the new read and write connections.
ApplicationRecord
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to database: { writing: :primary, reading: :primary }
end
For the audit logs, we directed write queries to the primary audit log database and read queries to the read replica:
class AuditLogRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to database: { writing: :audit_log, reading: :audit_log_read }
end
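Concrete audit log models then simply inherit from AuditLogRecord and pick up both connections. A hypothetical example (the AuditLogEntry model and its columns are illustrative, not taken from our codebase):

# Hypothetical model inheriting the audit_log / audit_log_read connections
class AuditLogEntry < AuditLogRecord
  self.table_name = 'audit_log_entries'
end

# Writes go to audit_log; reads can be sent to the replica explicitly:
AuditLogEntry.create!(action: 'invoice.created')
AuditLogEntry.connected_to(role: :reading) do
  AuditLogEntry.where(action: 'invoice.created').count
end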
Configuring Rails to Automatically Select Databases
To further optimize database connections, we configured an initializer to manage database selection based on request type:
config/initializers/database_selector.rb
unless Rails.env.test?
  Rails.application.configure do
    config.active_record.database_selector = { delay: 2.seconds }
    config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver
    config.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session
  end
end
With this in place, Rails routes requests automatically: GET and HEAD requests can be served by the read replica, while all other requests go to the primary. The 2-second delay keeps a session's reads on the primary for two seconds after a write, so recently written records are still visible.
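The middleware only runs for HTTP requests, so code outside the request cycle (rake tasks, background jobs) has to pick a role explicitly. A short sketch (the Invoice model name is illustrative):

# Force the reading role for a block of queries
ActiveRecord::Base.connected_to(role: :reading) do
  # Everything in this block uses the reading connections;
  # attempting a write here raises ActiveRecord::ReadOnlyError.
  Invoice.count
end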
Handling Issues with Scenic Gem and Materialized Views
We use the Scenic gem for materialized views. However, when trying to refresh the view, we encountered this error:
Write query attempted while in readonly mode: REFRESH MATERIALIZED VIEW "subscription_overviews";
Since refresh_materialized_view is a write operation, we needed to explicitly switch to the writing role:
def self.refresh
  ApplicationRecord.connected_to(role: :writing) do
    Scenic.database.refresh_materialized_view(
      table_name,
      concurrently: false,
      cascade: true
    )
  end
end
This ensures that REFRESH MATERIALIZED VIEW runs against the primary database, avoiding issues with read replicas.
Verifying Read Replica Connections
To confirm that read queries were hitting the replica, we ran the following query:
result = AuditLogRecord.connection.execute(<<~SQL)
  SELECT pg_sleep(10),
         current_database() AS db,
         inet_server_addr() AS host,
         inet_server_port() AS port
SQL
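Because AuditLogRecord.connection defaults to the writing role, comparing both roles explicitly makes the check more convincing. A sketch along these lines (the output formatting is illustrative):

# Compare which server each role is connected to, assuming the
# audit_log / audit_log_read configuration shown above.
[:writing, :reading].each do |role|
  AuditLogRecord.connected_to(role: role) do
    row = AuditLogRecord.connection.select_one(
      'SELECT current_database() AS db, inet_server_addr() AS host, inet_server_port() AS port'
    )
    puts "#{role}: #{row}"
  end
end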
We also monitored active connections in TablePlus to verify that queries were being executed on the correct database instance.
Conclusion
By simulating replication in Docker, configuring Rails to use read replicas, and handling Scenic materialized views, we successfully optimized our logging database for scalability and performance. Using read replicas allows us to distribute read queries efficiently, improving the overall responsiveness of our Rails application.