# Installation & Setup

## Install the gem
Add GoodPipeline to your Gemfile:
```ruby
gem "good_pipeline"
```

Then install:

```shell
bundle install
```

## Run the install generator
The install generator creates the database migration:
```shell
bin/rails generate good_pipeline:install
bin/rails db:migrate
```

This creates four tables: `good_pipeline_pipelines`, `good_pipeline_steps`, `good_pipeline_dependencies`, and `good_pipeline_chains`.
## Configure GoodJob
GoodPipeline requires GoodJob to preserve job records so it can read terminal failure metadata:
```ruby
# config/initializers/good_job.rb
GoodJob.preserve_job_records = true
```

GoodPipeline raises `GoodPipeline::ConfigurationError` at boot if this is not set.
## Configure queue names (optional)
GoodPipeline routes its internal jobs to dedicated queues by default. You can override them globally:
```ruby
# config/initializers/good_pipeline.rb
GoodPipeline.coordination_queue_name = "pipeline_coordination" # StepFinishedJob, PipelineReconciliationJob
GoodPipeline.callback_queue_name = "pipeline_callbacks"        # PipelineCallbackJob
```

The defaults are `"good_pipeline_coordination"` and `"good_pipeline_callbacks"`. Per-pipeline overrides are also available via the class DSL; see Defining Pipelines.
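As a sketch of the per-pipeline form, a class might override its queues like this. The DSL method names below (`coordination_queue`, `callback_queue`) are assumptions for illustration; see Defining Pipelines for the authoritative reference.

```ruby
class ReportsPipeline < GoodPipeline::Pipeline
  # Assumed DSL methods: route this pipeline's internal jobs to
  # dedicated queues instead of the global defaults.
  coordination_queue "reports_coordination"
  callback_queue "reports_callbacks"

  def configure(report_id:)
    run :build, BuildReportJob, with: { report_id: report_id }
  end
end
```

This is useful when one pipeline's coordination traffic should be isolated from the rest, for example to give it its own worker pool.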
## Mount the dashboard (optional)
```ruby
# config/routes.rb
mount GoodPipeline::Engine => "/good_pipeline"
```

See the Web Dashboard page for details.
## Your first pipeline

Define a pipeline by subclassing `GoodPipeline::Pipeline` and implementing `configure`:
```ruby
class DataIngestionPipeline < GoodPipeline::Pipeline
  description "Fetches, transforms, and loads data"

  def configure(source_id:)
    run :fetch, FetchJob, with: { source_id: source_id }
    run :transform, TransformJob, with: { source_id: source_id }, after: :fetch
    run :load, LoadJob, with: { source_id: source_id }, after: :transform
  end
end
```

Run it:

```ruby
DataIngestionPipeline.run(source_id: 42)
```

This enqueues `:fetch` immediately. When it succeeds, `:transform` is enqueued; when that succeeds, `:load` is enqueued. If any step fails, the pipeline halts by default.
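Dependencies need not form a straight line. As a sketch of a fan-out/fan-in shape, assuming `after:` also accepts an array of step names (see Defining Pipelines for the supported DAG patterns):

```ruby
class EnrichmentPipeline < GoodPipeline::Pipeline
  def configure(record_id:)
    run :fetch, FetchJob, with: { record_id: record_id }

    # Fan out: both steps depend only on :fetch, so they can run in parallel.
    run :geocode, GeocodeJob, with: { record_id: record_id }, after: :fetch
    run :classify, ClassifyJob, with: { record_id: record_id }, after: :fetch

    # Fan in: assumes after: accepts an array, so :merge waits for both.
    run :merge, MergeJob, with: { record_id: record_id }, after: [:geocode, :classify]
  end
end
```

Here `:merge` would only be enqueued once both `:geocode` and `:classify` have succeeded.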
## Next steps
- Defining Pipelines — full DSL reference and DAG patterns
- Conditional Branching — take different paths at runtime
- Failure Strategies — control what happens when steps fail
- Pipeline Chaining — wire pipelines together
- Monitoring — inspect pipeline and step state