Change Management
This document describes how changes to the WiserReview platform are planned, approved, implemented, and verified. Every deployment, from routine bug fixes to emergency security patches, follows a controlled, traceable process.
| Version | Effective Date | Owner |
|---|---|---|
| 1.0 | March 2026 | Security Team |
1. Change Classification
All changes are classified before proceeding. Classification determines the required approval level and review process. When in doubt, classify at a higher level.
| Classification | Definition | Examples |
|---|---|---|
| Standard | Routine, low-risk changes with well-understood impact and proven implementation patterns | Bug fixes, UI changes, dependency patch updates, copy changes, performance optimizations |
| Significant | Changes with broader impact, new functionality, or structural modifications | New features, API contract changes, database schema changes, new integrations, authentication changes, infrastructure configuration changes |
| Emergency | Urgent changes required to restore service, remediate active security vulnerabilities, or address critical incidents | P1/P2 incident hotfixes, security vulnerability patches, credential rotation due to compromise, Cloudflare WAF rule emergency updates |
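The approval requirements implied by this table could be encoded as a simple lookup. This is an illustrative sketch only; the function and field names are assumptions, not part of the platform:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovalPolicy:
    """Approval requirements for one change classification (illustrative)."""
    reviewer: str
    staging_required: bool
    verbal_authorization: bool

# Hypothetical encoding of the Section 1 classifications.
POLICIES = {
    "standard": ApprovalPolicy(reviewer="any developer",
                               staging_required=False,
                               verbal_authorization=False),
    "significant": ApprovalPolicy(reviewer="Engineering Lead",
                                  staging_required=True,
                                  verbal_authorization=False),
    "emergency": ApprovalPolicy(reviewer="Engineering Lead",
                                staging_required=False,
                                verbal_authorization=True),
}

def policy_for(classification: str) -> ApprovalPolicy:
    # "When in doubt, classify at a higher level": unknown labels are
    # rejected rather than silently treated as Standard.
    try:
        return POLICIES[classification.lower()]
    except KeyError:
        raise ValueError(f"Unknown classification: {classification!r}")
```

Rejecting unknown labels keeps the default conservative, matching the rule that ambiguity escalates rather than downgrades.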
2. Change Request and Approval
All changes are managed through GitHub Pull Requests. This provides a complete audit trail: what changed, who approved it, and when it was deployed.
Standard Changes
| Step | Description |
|---|---|
| PR creation | Developer creates PR with a clear description of the change and its purpose |
| Code review | At least one other developer must review and approve the PR |
| Merge | Approved PR merged; triggers CI/CD pipeline automatically |
| Monitoring | Post-deployment monitoring for errors and performance regression |
Significant Changes
| Step | Description |
|---|---|
| PR creation | Developer creates PR with detailed description: what is changing, why, what the impact is, and how it was tested |
| Code review | Engineering Lead reviews and approves; security-relevant changes (auth, data access) also reviewed by Security Officer |
| Staging verification | Change deployed to staging environment and verified before production merge |
| Merge and deploy | Approved PR merged; CI/CD pipeline deploys to production |
| Monitoring | Extended post-deployment monitoring; alerts watched closely for 24 hours |
Emergency Changes
| Step | Description |
|---|---|
| Authorization | Verbal authorization from Engineering Lead (and Security Officer for P1 security incidents) |
| Implementation | Developer implements fix with focus on correctness and minimal blast radius |
| Expedited review | At minimum, a second pair of eyes reviews the change, even if the review is abbreviated |
| Deploy | Change deployed via CI/CD pipeline (same pipeline as normal deployments) |
| Retroactive docs | Full PR and documentation completed within 24 hours of deployment |
| Post-incident review | Emergency change included in post-incident review if triggered by an incident |
3. CI/CD Pipeline
All production deployments go through the GitHub Actions CI/CD pipeline; no changes are applied to production servers directly. Credentials used by the pipeline are stored in encrypted secrets, never in source code.
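A minimal GitHub Actions workflow following this pattern might look like the sketch below. The workflow, registry, and secret names are assumptions for illustration, not the platform's actual configuration:

```yaml
name: deploy-production
on:
  push:
    branches: [main]   # merging an approved PR triggers the pipeline

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Azure Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.ACR_REGISTRY }}   # stored as encrypted secrets,
          username: ${{ secrets.ACR_USERNAME }}   # masked in build logs
          password: ${{ secrets.ACR_PASSWORD }}
      - name: Build and push image
        run: |
          docker build -t ${{ secrets.ACR_REGISTRY }}/wiserreview:${{ github.sha }} .
          docker push ${{ secrets.ACR_REGISTRY }}/wiserreview:${{ github.sha }}
```

Because the credentials are registered as secrets, GitHub Actions masks them if they ever appear in log output.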
Docker Multi-Stage Builds
All Docker images use multi-stage builds to minimize the production attack surface:
- Build stage: compiles code, installs all dependencies, runs build tools
- Production stage: copies only runtime artifacts and production dependencies into a clean minimal base image
- Result: production images contain no source code, build tools, source maps, test frameworks, or development packages
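As an illustration, a multi-stage Dockerfile following this pattern might look like the following (a Node.js sketch; base images, paths, and the entrypoint are assumptions):

```dockerfile
# Build stage: full toolchain, all dependencies, build tools
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci                       # installs all dependencies, including dev
COPY . .
RUN npm run build                # compiles to /app/dist

# Production stage: clean minimal base, runtime artifacts only
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev            # production dependencies only
COPY --from=build /app/dist ./dist
USER node                        # drop root privileges
CMD ["node", "dist/server.js"]
```

Only the final stage is shipped, so the resulting image carries no source code, dev dependencies, or build tooling.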
4. Security in the Pipeline
| Control | Implementation |
|---|---|
| Secrets management | All credentials stored in GitHub's encrypted secrets vault: never in source code, Dockerfiles, or build logs |
| No plaintext credentials | GitHub Actions automatically masks registered secrets in build logs |
| Authenticated container registry | Docker images pushed to and pulled from Azure Container Registry: no public image access |
| Multi-stage Docker builds | Production images contain only runtime dependencies: dev tools, test code, and build artifacts excluded |
| No direct production access | The CI/CD pipeline is the only path to production: no SSH access, no manual file changes, no hotfixes outside the pipeline |
| Code review as security gate | All code changes reviewed before merge: a second pair of eyes on every change catches security issues before production |
Security Pipeline Roadmap
- Q3 2026: Automated dependency vulnerability scanning (npm audit, Snyk) integrated as a CI/CD gate
- Q3 2026: Docker image vulnerability scanning before push to Azure Container Registry
- 2027: Static application security testing (SAST) integrated into PR checks
5. Environment Separation
| Environment | Purpose | Access |
|---|---|---|
| Development | Local developer machines; feature development | Individual developer only |
| Staging | Integration testing; pre-production verification | Engineering team |
| Production | Live customer-facing services | CI/CD pipeline only; no direct human access to containers |
- Production credentials are stored only in GitHub encrypted secrets vault and injected only into production deployments
- Staging uses separate database instances with anonymized or synthetic data, not production customer data
- Developers cannot access production database credentials directly
6. Rollback Procedures
| Method | When to Use | Time |
|---|---|---|
| Azure App Service deployment slot swap | Instant rollback by swapping the production slot back to the previous version | ~5 minutes |
| Redeploy previous Docker image | Trigger pipeline to deploy the previous image tag from Azure Container Registry | ~15–20 minutes |
| Full rebuild from git tag | Rollbacks requiring source code changes; build from previous tagged release | ~20–30 minutes |
Database Rollback
- MongoDB Atlas point-in-time recovery can restore data to any point within the backup window
- Database schema changes are designed to be reversible where possible
- For irreversible schema changes, data is backed up before migration and staged carefully through the approval process
- Database rollback requires Security Officer authorization due to potential data implications
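The choice between the three rollback methods above can be sketched as simple decision logic. This is an illustrative helper, not an actual platform tool; the function and parameter names are assumptions:

```python
def choose_rollback_method(needs_source_change: bool,
                           slot_has_previous_version: bool) -> str:
    """Pick the fastest viable rollback path from Section 6 (illustrative)."""
    if needs_source_change:
        # Only a rebuild can incorporate new source code (~20-30 minutes).
        return "full rebuild from git tag"
    if slot_has_previous_version:
        # Fastest path: swap the App Service deployment slot (~5 minutes).
        return "deployment slot swap"
    # Otherwise redeploy the prior image tag from the registry (~15-20 minutes).
    return "redeploy previous Docker image"
```

The ordering encodes the table's trade-off: prefer the fastest method that can actually reach the desired state.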
7. Post-Deployment Verification
| Check | Method | Timeframe |
|---|---|---|
| Health checks | All service health-check endpoints return healthy status | Immediately after deployment |
| Error rate baseline | Sentry error rates return to pre-deployment baseline | First 15–30 minutes |
| Latency monitoring | API response times remain within normal bounds (below 5s alert threshold) | First 15–30 minutes |
| Functional verification | Engineering Lead or developer manually verifies key user flows affected by the change | Within 30 minutes |
For Significant changes, the engineering team maintains heightened monitoring for the first 24 hours after deployment.
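The checks in the table above can be rolled up into a single pass/fail gate, sketched below. The function, parameters, and the 10% error-rate tolerance are assumptions for illustration; only the 5-second latency threshold comes from the table:

```python
def deployment_healthy(health_ok: bool,
                       error_rate: float,
                       baseline_error_rate: float,
                       p95_latency_s: float,
                       latency_threshold_s: float = 5.0,
                       error_tolerance: float = 1.1) -> bool:
    """Illustrative roll-up of the Section 7 post-deployment checks.

    error_tolerance lets the error rate sit slightly above baseline
    (10% here, an assumed value) before the deployment is flagged.
    """
    if not health_ok:
        return False                      # health-check endpoints must pass first
    if error_rate > baseline_error_rate * error_tolerance:
        return False                      # Sentry error rate above baseline
    return p95_latency_s < latency_threshold_s
```

A gate like this could feed an automated rollback decision, though the document's process relies on human monitoring for these checks.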
8. Emergency Changes
For P1 and P2 incidents where the full change management process would cause unacceptable harm, an expedited path is available.
Authorization
- P2 incidents: Engineering Lead authorizes
- P1 incidents (including security breaches): Security Officer authorizes
Key Constraint
Emergency changes do not bypass the CI/CD pipeline. All emergency deployments go through the same GitHub Actions pipeline as normal deployments.
Expedited Process
- Engineering Lead and available senior developer identify the fix
- Abbreviated peer review: at minimum, a verbal walkthrough of the change
- Deploy via normal CI/CD pipeline
- Verify health checks and error rates immediately post-deployment
- Full PR documentation and post-deployment review completed within 24 hours
- Emergency change included in post-incident review