45.10. Migration Strategies: From Python to Rust
Warning
Don’t rewrite everything. The biggest mistake teams make is the “Big Bang Rewrite”. Stop. Do not rewrite your 500k-line Flask app in Rust in one go. Use the Strangler Fig Pattern.
45.10.1. The Strangler Fig Pattern
The Strangler Fig is a vine that grows around a tree, eventually replacing it. In software, this means wrapping your Legacy System (Python) with a new Proxy (Rust).
Phase 1: The Rust Gateway
Place a Rust axum proxy in front of your FastAPI service.
Initially, it just forwards traffic.
use axum::{
body::Body,
extract::State,
http::{Request, Uri},
response::Response,
Router,
routing::any,
};
use hyper_util::client::legacy::{connect::HttpConnector, Client};
use hyper_util::rt::TokioExecutor;
type HttpClient = Client<HttpConnector, Body>;
#[derive(Clone)]
struct AppState {
client: HttpClient,
python_backend: String,
}
async fn proxy_handler(
State(state): State<AppState>,
mut req: Request<Body>,
) -> Response<Body> {
// Rewrite URI to Python backend
let path = req.uri().path();
let query = req.uri().query().map(|q| format!("?{}", q)).unwrap_or_default();
let uri = format!("{}{}{}", state.python_backend, path, query);
*req.uri_mut() = uri.parse::<Uri>().unwrap();
    // Forward to Python; wrap the returned hyper body back into an axum Body
    state.client.request(req).await.unwrap().map(Body::new)
}
#[tokio::main]
async fn main() {
let client = Client::builder(TokioExecutor::new()).build_http();
let state = AppState {
client,
python_backend: "http://localhost:8000".to_string(),
};
let app = Router::new()
.route("/*path", any(proxy_handler))
.with_state(state);
let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
axum::serve(listener, app).await.unwrap();
}
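For reference, a minimal sketch of the proxy crate's dependencies; the version numbers are assumptions, so pin whatever your workspace already uses:
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
hyper-util = { version = "0.1", features = ["client-legacy", "http1", "tokio"] }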
Phase 2: Strangling Endpoints
Identify the slowest endpoint (e.g., /embedding).
Re-implement only that endpoint in Rust.
Update the Proxy to serve /embedding locally, and forward everything else.
async fn proxy_handler(
State(state): State<AppState>,
req: Request<Body>,
) -> Response<Body> {
let path = req.uri().path();
// Strangle: Handle /embedding in Rust
if path == "/embedding" || path.starts_with("/embedding/") {
return rust_embedding_handler(req).await;
}
// Everything else goes to Python
forward_to_python(state, req).await
}
async fn rust_embedding_handler(req: Request<Body>) -> Response<Body> {
// Parse JSON body
let body = axum::body::to_bytes(req.into_body(), usize::MAX).await.unwrap();
let payload: EmbeddingRequest = serde_json::from_slice(&body).unwrap();
// Run Rust embedding model (e.g., fastembed)
let embeddings = compute_embeddings(&payload.texts);
// Return JSON
let response = EmbeddingResponse { embeddings };
Response::builder()
.header("content-type", "application/json")
.body(Body::from(serde_json::to_vec(&response).unwrap()))
.unwrap()
}
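The EmbeddingRequest and EmbeddingResponse types above are plain serde structs. A minimal sketch, assuming the embeddings come back as Vec<Vec<f32>>:
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct EmbeddingRequest {
    texts: Vec<String>,
}

#[derive(Serialize)]
struct EmbeddingResponse {
    embeddings: Vec<Vec<f32>>,
}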
Phase 3: The Library Extraction
Move shared logic (business rules, validation) into a Rust Common Crate (my-core).
Expose this to Python via PyO3.
Now both the Legacy Python App and the New Rust App share the exact same logic.
/monorepo
├── crates/
│ ├── my-core/ # Shared business logic
│ │ ├── Cargo.toml
│ │ └── src/lib.rs
│ ├── py-bindings/ # PyO3 wrapper for Python
│ │ ├── Cargo.toml
│ │ └── src/lib.rs
│ └── server/ # New Rust API server
│ ├── Cargo.toml
│ └── src/main.rs
├── python-app/ # Legacy Python application
│ ├── app/
│ └── requirements.txt
└── Cargo.toml # Workspace root
crates/my-core/src/lib.rs:
#[derive(Debug)]
pub enum ValidationError {
    Empty,
    MissingAtSign,
}

pub struct UserSignals {
    pub ip_country: String,
    pub billing_country: String,
    pub session_duration_seconds: u64,
    pub failed_payment_attempts: u32,
}

/// Validates an email address (simplified; full RFC 5322 parsing is out of scope)
pub fn validate_email(email: &str) -> Result<(), ValidationError> {
if email.is_empty() {
return Err(ValidationError::Empty);
}
if !email.contains('@') {
return Err(ValidationError::MissingAtSign);
}
// More validation...
Ok(())
}
/// Computes user fraud score based on behavior signals
pub fn compute_fraud_score(signals: &UserSignals) -> f64 {
let mut score = 0.0;
if signals.ip_country != signals.billing_country {
score += 0.3;
}
if signals.session_duration_seconds < 5 {
score += 0.2;
}
if signals.failed_payment_attempts > 2 {
score += 0.4;
}
score.min(1.0)
}
crates/py-bindings/src/lib.rs:
use pyo3::prelude::*;
use my_core::{validate_email, compute_fraud_score, UserSignals};
#[pyfunction]
fn py_validate_email(email: &str) -> PyResult<bool> {
match validate_email(email) {
Ok(()) => Ok(true),
Err(_) => Ok(false),
}
}
#[pyfunction]
fn py_compute_fraud_score(
ip_country: &str,
billing_country: &str,
session_duration: u64,
failed_payments: u32,
) -> f64 {
let signals = UserSignals {
ip_country: ip_country.to_string(),
billing_country: billing_country.to_string(),
session_duration_seconds: session_duration,
failed_payment_attempts: failed_payments,
};
compute_fraud_score(&signals)
}
#[pymodule]
fn my_core_py(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_function(wrap_pyfunction!(py_validate_email, m)?)?;
m.add_function(wrap_pyfunction!(py_compute_fraud_score, m)?)?;
Ok(())
}
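From Python, the compiled module imports like any other package (the module name my_core_py comes from the #[pymodule] function above):
from my_core_py import py_validate_email, py_compute_fraud_score

assert py_validate_email("user@example.com")

# Country mismatch (+0.3) plus a 3-second session (+0.2) => 0.5
score = py_compute_fraud_score("US", "DE", 3, 1)
print(f"Fraud score: {score}")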
45.10.2. Identifying Candidates for Rewrite
Don’t guess. Measure.
Use py-spy to find CPU hogs.
# Install py-spy
pip install py-spy
# Record profile for 60 seconds
py-spy record -o profile.svg --duration 60 --pid $(pgrep -f "uvicorn")
# Top functions (live view)
py-spy top --pid $(pgrep -f "uvicorn")
Example py-spy Output Analysis
%Own %Total OwnTime TotalTime Function (filename:line)
45.2% 45.2% 4.52s 4.52s json.loads (json/__init__.py:346)
23.1% 23.1% 2.31s 2.31s pd.DataFrame.apply (pandas/core/frame.py:8740)
12.5% 67.7% 1.25s 6.77s process_batch (app/handlers.py:142)
8.3% 8.3% 0.83s 0.83s re.match (re.py:188)
Analysis:
- json.loads (45%): Replace with orjson (a Rust-based JSON parser). Instant 10x win.
- DataFrame.apply (23%): Replace with Polars (a Rust DataFrame library). 100x win.
- re.match (8%): Replace with the regex crate. 5x win.
Migration Priority Matrix
| Component | Python Time | Rust Time | Effort | ROI | Priority |
|---|---|---|---|---|---|
| JSON Parsing | 4.52s | 0.05s | Low (drop-in) | 90x | P0 |
| DataFrame ETL | 2.31s | 0.02s | Medium | 115x | P0 |
| Regex Matching | 0.83s | 0.15s | Low | 5x | P1 |
| HTTP Handling | 0.41s | 0.08s | High | 5x | P2 |
| ORM Queries | 0.38s | N/A | Very High | 1x | Skip |
Good Candidates:
- Serialization: json.loads / pandas.read_csv. Rust is 100x faster.
- Loops: for x in giant_list:. Rust vectorization wins.
- String Processing: tokenization, regex. Rust is efficient.
- Async Orchestration: calling 5 APIs in parallel. Tokio tasks are far cheaper than asyncio coroutines (see the sketch below).
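To make the orchestration point concrete, here is a minimal Tokio fan-out sketch; call_api is a hypothetical stand-in for a real HTTP call:
use std::time::Duration;

// Hypothetical stand-in for a call to a downstream API
async fn call_api(name: &str) -> String {
    tokio::time::sleep(Duration::from_millis(50)).await;
    format!("{name}: ok")
}

#[tokio::main]
async fn main() {
    // All three calls run concurrently: ~50ms wall time instead of ~150ms
    let (users, billing, fraud) = tokio::join!(
        call_api("users"),
        call_api("billing"),
        call_api("fraud"),
    );
    println!("{users} {billing} {fraud}");
}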
Bad Candidates:
- Orchestration Logic: Airflow DAGs. Python is fine.
- Data Viz: Matplotlib is fine.
- One-off Scripts: Don’t use Rust for ad-hoc analysis.
- ORM-heavy code: The DB is the bottleneck, not Python.
45.10.3. The FFI Boundary: Zero-Copy with Arrow
Passing data between Python and Rust is expensive if you copy it.
Use Arrow (via pyarrow and arrow-rs).
The C Data Interface
Arrow defines a C ABI for sharing arrays between languages without copying.
use arrow::array::{make_array, Array, Float32Array};
use arrow::ffi::{FFI_ArrowArray, FFI_ArrowSchema};
use pyo3::prelude::*;
use pyo3::ffi::Py_uintptr_t;
#[pyfunction]
fn process_arrow_array(
py: Python,
array_ptr: Py_uintptr_t,
schema_ptr: Py_uintptr_t,
) -> PyResult<f64> {
    // Import from the C pointers (zero-copy): take ownership of the exported
    // array struct, then rebuild an Arrow array over the same buffers
    let array = unsafe {
        let ffi_array = std::ptr::read(array_ptr as *const FFI_ArrowArray);
        let ffi_schema = &*(schema_ptr as *const FFI_ArrowSchema);
        let data = arrow::ffi::from_ffi(ffi_array, ffi_schema).unwrap();
        make_array(data)
    };
    // Downcast to the concrete array type
    let float_array = array.as_any().downcast_ref::<Float32Array>().unwrap();
// Compute sum (Pure Rust, no GIL)
let sum: f64 = py.allow_threads(|| {
float_array.values().iter().map(|&x| x as f64).sum()
});
Ok(sum)
}
Python side:
import pyarrow as pa
from pyarrow.cffi import ffi
from my_rust_lib import process_arrow_array

# Create a PyArrow array
arr = pa.array([1.0, 2.0, 3.0, 4.0], type=pa.float32())

# Allocate C Data Interface structs and export the array into them
c_array = ffi.new("struct ArrowArray*")
c_schema = ffi.new("struct ArrowSchema*")
array_ptr = int(ffi.cast("uintptr_t", c_array))
schema_ptr = int(ffi.cast("uintptr_t", c_schema))
arr._export_to_c(array_ptr, schema_ptr)

# Call Rust (zero-copy: only pointers cross the boundary)
result = process_arrow_array(array_ptr, schema_ptr)
print(f"Sum: {result}")  # Sum: 10.0
If you copy a 1GB vector across the boundary instead, the serialization cost outweighs any compute gain.
Polars DataFrame Passing
For DataFrames, use Polars which is already Rust-native:
import polars as pl
from my_rust_lib import process_dataframe_rust
df = pl.DataFrame({
"id": range(1_000_000),
"value": [float(i) * 1.5 for i in range(1_000_000)]
})
# Polars uses Arrow under the hood
# The Rust side receives it as arrow::RecordBatch
result = process_dataframe_rust(df)
use polars::prelude::*;
use pyo3_polars::PyDataFrame;
#[pyfunction]
fn process_dataframe_rust(df: PyDataFrame) -> PyResult<f64> {
let df: DataFrame = df.into();
let sum = df.column("value")
.unwrap()
.f64()
.unwrap()
.sum()
.unwrap_or(0.0);
Ok(sum)
}
45.10.4. The PyO3 Object Lifecycle
Understanding the Python<'_> lifetime is critical.
When you write fn foo(py: Python, obj: PyObject), you are holding the GIL.
The GIL Pool
Python manages memory with Reference Counting. Rust manages memory with Ownership. PyO3 bridges them.
fn massive_allocation(py: Python) {
let list = PyList::empty(py);
for i in 0..1_000_000 {
        // None of these objects are freed until the function returns: each
        // iteration's GIL-bound reference stays alive in PyO3's pool
list.append(i).unwrap();
}
}
// GIL is released here, Python can now garbage collect
Fix: Use Python::allow_threads to release GIL during long Rust computations.
fn heavy_compute(py: Python, input: Vec<f32>) -> f32 {
// Release GIL. Do pure Rust math.
// Other Python threads can run during this time
let result = py.allow_threads(move || {
input.iter().sum()
});
    // The GIL is re-acquired automatically once allow_threads returns
    result
}
Memory Management Best Practices
use pyo3::prelude::*;
#[pyfunction]
fn process_large_data(py: Python, data: Vec<f64>) -> PyResult<Vec<f64>> {
// BAD: Holding GIL during compute
// let result: Vec<f64> = data.iter().map(|x| x * 2.0).collect();
// GOOD: Release GIL for compute
let result = py.allow_threads(move || {
data.into_iter().map(|x| x * 2.0).collect::<Vec<_>>()
});
Ok(result)
}
#[pyfunction]
fn streaming_process(py: Python, callback: PyObject) -> PyResult<()> {
    for i in 0..1000 {
        // Call the Python callback (the GIL is held here)
        callback.call1(py, (i,))?;
        // Release the GIL for the heavy Rust work
        py.allow_threads(|| {
            std::thread::sleep(std::time::Duration::from_millis(10));
        });
    }
    Ok(())
}
45.10.5. Handling Panic Across Boundaries
If a Rust panic unwinds across an FFI boundary, the process aborts (SIGABRT). The entire Python process dies instantly.
This is unacceptable in production.
Always catch the unwind.
use std::panic;
use pyo3::prelude::*;
use pyo3::exceptions::PyRuntimeError;
#[pyfunction]
fn safe_function() -> PyResult<String> {
let result = panic::catch_unwind(|| {
// Risky code that might panic
let data = vec![1, 2, 3];
data[10] // This would panic!
});
match result {
Ok(val) => Ok(format!("Success: {}", val)),
Err(e) => {
// Convert panic to Python exception
let msg = if let Some(s) = e.downcast_ref::<&str>() {
s.to_string()
} else if let Some(s) = e.downcast_ref::<String>() {
s.clone()
} else {
"Unknown panic".to_string()
};
Err(PyRuntimeError::new_err(format!("Rust panicked: {}", msg)))
}
}
}
PyO3 does this automatically for you in #[pyfunction], but not in extern "C" callbacks or when using raw FFI.
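In raw FFI you must guard the boundary yourself. A minimal sketch, with a hypothetical callback that a C library might invoke:
use std::panic;

// A callback handed to a C library: it must never unwind into C
extern "C" fn on_event(value: i32) -> i32 {
    let result = panic::catch_unwind(|| {
        // Handler logic that might panic (here: division by zero)
        100 / value
    });
    match result {
        Ok(v) => v,
        Err(_) => {
            eprintln!("panic caught at FFI boundary");
            -1 // Sentinel error value for the C side
        }
    }
}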
Custom Panic Hook for Debugging
use std::panic;
pub fn install_panic_hook() {
panic::set_hook(Box::new(|panic_info| {
let location = panic_info.location().map(|l| {
format!("{}:{}:{}", l.file(), l.line(), l.column())
}).unwrap_or_else(|| "unknown".to_string());
        let message = if let Some(s) = panic_info.payload().downcast_ref::<&str>() {
            s.to_string()
        } else if let Some(s) = panic_info.payload().downcast_ref::<String>() {
            s.clone()
        } else {
            "Unknown panic".to_string()
        };
eprintln!("RUST PANIC at {}: {}", location, message);
// Log to external system
// send_to_sentry(location, message);
}));
}
45.10.6. The “Extension Type” Pattern
Instead of rewriting functions, define new Types. Python sees a Class. Rust sees a Struct.
use pyo3::prelude::*;
use std::collections::VecDeque;
#[pyclass]
struct MovingAverage {
window_size: usize,
values: VecDeque<f32>,
sum: f32,
}
#[pymethods]
impl MovingAverage {
#[new]
fn new(window_size: usize) -> Self {
MovingAverage {
window_size,
values: VecDeque::with_capacity(window_size),
sum: 0.0,
}
}
fn update(&mut self, value: f32) -> f32 {
self.values.push_back(value);
self.sum += value;
if self.values.len() > self.window_size {
let old = self.values.pop_front().unwrap();
self.sum -= old;
}
self.sum / self.values.len() as f32
}
fn reset(&mut self) {
self.values.clear();
self.sum = 0.0;
}
#[getter]
fn current_average(&self) -> f32 {
if self.values.is_empty() {
0.0
} else {
self.sum / self.values.len() as f32
}
}
#[getter]
fn count(&self) -> usize {
self.values.len()
}
}
Python usage:
from my_rust_lib import MovingAverage
ma = MovingAverage(100)
for x in data_stream:
avg = ma.update(x)
print(f"Current average: {avg}")
print(f"Final count: {ma.count}")
ma.reset()
This is roughly 50x faster than an equivalent pure-Python class built on collections.deque (see the baseline sketch after this list) because:
- No Python object allocation per update
- No GIL contention
- Cache-friendly memory layout
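For comparison, the pure-Python baseline being replaced might look like this (a hypothetical equivalent that allocates a Python float object on every update):
from collections import deque

class PyMovingAverage:
    def __init__(self, window_size):
        self.values = deque(maxlen=window_size)
        self.total = 0.0

    def update(self, value):
        if len(self.values) == self.values.maxlen:
            self.total -= self.values[0]  # Oldest value is evicted on append
        self.values.append(value)
        self.total += value
        return self.total / len(self.values)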
45.10.7. Team Transformation: Training Python Engineers
You cannot hire 10 Rust experts overnight. You must train your Python team.
The 8-Week Curriculum
Week 1-2: Ownership Fundamentals
- The Borrow Checker is your friend
- Ownership, Borrowing, and Lifetimes
- Lab: Convert a Python class to a Rust struct (see the sketch after this curriculum)
Week 3-4: Pattern Matching & Enums
- Option<T> replaces None checks
- Result<T, E> replaces try/except
- Lab: Error handling without exceptions
Week 5-6: Structs & Traits
- Composition over Inheritance
- Implementing traits (Debug, Clone, Serialize)
- Lab: Design a data processing pipeline
Week 7-8: Async Rust
- Tokio vs Asyncio mental model
- Channels and message passing
- Lab: Build a simple HTTP service
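As a flavor of the Week 1-2 lab, the kind of conversion trainees practice (a hypothetical User example):
// Python:
//   class User:
//       def __init__(self, name, age):
//           self.name = name
//           self.age = age
//       def greeting(self):
//           return f"Hello, {self.name}"

struct User {
    name: String,
    age: u32,
}

impl User {
    fn new(name: String, age: u32) -> Self {
        User { name, age }
    }

    fn greeting(&self) -> String {
        format!("Hello, {}", self.name)
    }
}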
Objection Handling Script
| Developer Says | Lead Responds |
|---|---|
| “I’m fighting the compiler!” | “The compiler is stopping you from shipping a bug that would wake you up at 3AM. Thank it.” |
| “Prototyping is slow.” | “True. Prototype in Python. Rewrite the hot path in Rust when specs stabilize.” |
| “We don’t have time to learn.” | “Invest 2 weeks now, save 2 hours/week forever in debugging memory issues.” |
| “Python is fast enough.” | “Show them the py-spy profile. Numbers don’t lie.” |
| “What about async/await?” | “The async/await surface syntax is nearly identical; the runtime model differences are exactly what Weeks 7-8 cover.” |
45.10.8. The Hybrid Repository (Monorepo)
Do not split Python and Rust into different Git repos. You need them to sync.
Directory Structure
/my-repo
├── .github/
│ └── workflows/
│ ├── python-ci.yml
│ ├── rust-ci.yml
│ └── integration.yml
├── crates/
│ ├── my-core/ # Pure Rust logic
│ │ ├── Cargo.toml
│ │ └── src/
│ │ ├── lib.rs
│ │ ├── validation.rs
│ │ └── scoring.rs
│ ├── py-bindings/ # PyO3 bindings
│ │ ├── Cargo.toml
│ │ ├── pyproject.toml # Maturin config
│ │ └── src/lib.rs
│ └── server/ # Rust microservice
│ ├── Cargo.toml
│ └── src/main.rs
├── python-app/
│ ├── src/
│ │ └── my_app/
│ ├── tests/
│ ├── pyproject.toml
│ └── requirements.txt
├── tests/
│ └── integration/ # Cross-language tests
│ ├── test_rust_python.py
│ └── test_api_parity.py
├── Cargo.toml # Workspace root
├── Makefile
└── docker-compose.yml
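The Makefile glues the two toolchains together. A minimal sketch; the targets and paths are assumptions to adapt:
.PHONY: build develop test

# Build everything: the Rust workspace plus the Python wheel
build:
	cargo build --release --workspace
	cd crates/py-bindings && maturin build --release

# Install the bindings into the active virtualenv for fast local iteration
develop:
	cd crates/py-bindings && maturin develop

# Run both native test suites plus the cross-language integration tests
test:
	cargo test --workspace
	pytest python-app/tests tests/integration -v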
CI Pipeline Configuration
.github/workflows/integration.yml:
name: Integration Tests
on:
push:
branches: [main]
pull_request:
jobs:
build-rust:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- name: Build Rust Crates
run: cargo build --release --workspace
- name: Upload Rust Artifacts
uses: actions/upload-artifact@v4
with:
name: rust-binaries
path: target/release/
build-python-wheel:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install Maturin
run: pip install maturin
- name: Build Wheel
run: |
cd crates/py-bindings
          maturin build --release --out dist
- name: Upload Wheel
uses: actions/upload-artifact@v4
with:
name: python-wheel
          path: crates/py-bindings/dist/*.whl
integration-tests:
needs: [build-rust, build-python-wheel]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Download Wheel
uses: actions/download-artifact@v4
with:
name: python-wheel
path: ./wheels
- name: Install Dependencies
run: |
pip install ./wheels/*.whl
pip install -e ./python-app
pip install pytest
- name: Run Integration Tests
run: pytest tests/integration/ -v
45.10.9. Metric-Driven Success
Define success before you start.
| Metric | Python Baseline | Rust Target | Actual Result |
|---|---|---|---|
| P50 Latency | 120ms | 15ms | 8ms |
| P99 Latency | 450ms | 50ms | 38ms |
| Max Concurrency | 200 | 5,000 | 8,000 |
| RAM Usage (Idle) | 4GB | 500MB | 380MB |
| RAM Usage (Peak) | 12GB | 2GB | 1.8GB |
| Docker Image Size | 3.2GB | 50MB | 45MB |
| Cold Start Time | 8.0s | 0.1s | 0.05s |
| CPU @ 1000 RPS | 85% | 15% | 12% |
If you don’t hit these numbers, debug:
- Too many .clone() calls?
- Holding the GIL during compute?
- Not using allow_threads?
- Wrong data structure (HashMap vs BTreeMap)?
Benchmarking Setup
use criterion::{criterion_group, criterion_main, Criterion, BenchmarkId};
fn benchmark_processing(c: &mut Criterion) {
let mut group = c.benchmark_group("data_processing");
for size in [1_000, 10_000, 100_000, 1_000_000] {
let data: Vec<f64> = (0..size).map(|i| i as f64).collect();
group.bench_with_input(
BenchmarkId::new("rust", size),
&data,
|b, data| b.iter(|| process_rust(data))
);
}
group.finish();
}
criterion_group!(benches, benchmark_processing);
criterion_main!(benches);
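Run the suite with cargo bench. Criterion writes HTML reports under target/criterion/, so you can track each data size for regressions as the migration proceeds.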
45.10.10. Case Study: The AdTech Incremental Rewrite
Company: AdTech startup doing real-time bidding.
Problem: Flask server blowing the 200ms bid timeout budget.
Solution: Incremental Strangler Fig migration over 6 months.
Phase 1: Drop-in Replacements (Week 1-2)
# Before
import json
data = json.loads(raw_bytes)
# After (Rust-based, 10x faster)
import orjson
data = orjson.loads(raw_bytes)
Impact: 20% latency reduction.
Phase 2: Hot Path Extraction (Month 1-2)
Identified via py-spy that feature extraction was 60% of latency.
# Before: Pandas
features = df.apply(lambda row: extract_features(row), axis=1)
# After: Polars (Rust)
import polars as pl
features = df.select([
pl.col("user_id"),
pl.col("bid_price").log().alias("log_price"),
pl.col("timestamp").str.to_datetime().alias("parsed_time"),
])
Impact: 50% latency reduction.
Phase 3: Full Service Replacement (Month 3-6)
Replaced Flask with Axum, calling Polars directly.
async fn bid_handler(
State(state): State<AppState>,
Json(request): Json<BidRequest>,
) -> Json<BidResponse> {
// Feature extraction in Polars
let features = extract_features(&request, &state.feature_store);
// Model inference
let bid = state.model.predict(&features);
Json(BidResponse {
bid_price: bid,
bidder_id: state.bidder_id.clone(),
})
}
Final Impact:
- Latency: 200ms → 15ms (13x improvement)
- Throughput: 500 RPS → 15,000 RPS (30x improvement)
- Server count: 20 → 2 (90% cost reduction)
45.10.11. Fallback Safety: Shadow Mode
When you ship the new Rust version, keep the Python version running as a fallback.
async fn proxy_with_fallback(
State(state): State<AppState>,
req: Request<Body>,
) -> Response<Body> {
    // Try Rust first. NOTE: Request<Body> is not Clone; in practice, buffer
    // the body (e.g. into Bytes) so the request can be rebuilt for the fallback.
    let rust_result = tokio::time::timeout(
        std::time::Duration::from_millis(50),
        rust_handler(req.clone())
    ).await;
match rust_result {
Ok(Ok(response)) => {
// Log success
metrics::counter!("rust_success").increment(1);
response
}
Ok(Err(e)) => {
// Rust returned error, fallback
tracing::warn!("Rust failed: {}, falling back", e);
metrics::counter!("rust_error_fallback").increment(1);
python_handler(req).await
}
Err(_) => {
// Timeout, fallback
tracing::warn!("Rust timeout, falling back");
metrics::counter!("rust_timeout_fallback").increment(1);
python_handler(req).await
}
}
}
Shadow Comparison Mode
Run both, compare results, log differences:
async fn shadow_compare(req: Request<Body>) -> Response<Body> {
    // clone_request is a hypothetical helper that buffers the body so the
    // same request can be replayed against both backends
    let req_clone = clone_request(&req);
    // Run both in parallel
    let (rust_result, python_result) = tokio::join!(
        rust_handler(req),
        python_handler(req_clone)
    );
    // Compare asynchronously (assumes the handlers return pre-buffered
    // bodies, e.g. Bytes, so they are cheap to clone and compare)
    let rust_body = rust_result.body.clone();
    let python_body = python_result.body.clone();
    tokio::spawn(async move {
        if rust_body != python_body {
            tracing::error!(
                "MISMATCH: rust={:?}, python={:?}",
                rust_body,
                python_body
            );
        }
    });
// Return Python (trusted) result
python_result
}
Once diffs == 0 for a week, switch to returning the Rust result. Once diffs == 0 for a month, delete the Python path.
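The cutover itself can be a gradual percentage rollout rather than a hard switch. A minimal sketch, assuming a rollout_percent value (0-100) fed from your config system and the rand crate:
use rand::Rng;

// Decide per-request whether to serve the Rust result or the Python fallback.
// rollout_percent is a hypothetical runtime config value between 0 and 100.
fn serve_rust(rollout_percent: u8) -> bool {
    rand::thread_rng().gen_range(0u8..100) < rollout_percent
}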
45.10.12. Final Workflow: The “Rust-First” Policy
Once you migrate 50% of your codebase, flip the default. New services must be written in Rust unless:
- It is a UI (Streamlit/Gradio).
- It uses a library that only exists in Python (e.g., specialized research code).
- It is a throwaway script (< 100 lines, used once).
- The team lacks Rust expertise for that specific domain.
Policy Document Template
# Engineering Standards: Language Selection
## Default: Rust
New microservices, data pipelines, and performance-critical components
MUST be implemented in Rust unless an exception applies.
## Exceptions (Require Tech Lead Approval)
1. **UI/Visualization**: Streamlit, Gradio, Dash → Python OK
2. **ML Training**: PyTorch, TensorFlow → Python OK
3. **Prototyping**: < 1 week project → Python OK
4. **Library Lock-in**: Dependency only exists in Python → Python OK
## Hybrid Components
- Business logic: Rust crate
- Python bindings: PyO3/Maturin
- Integration: Both languages share the same logic
## Review Process
1. Propose language in Design Doc
2. If not Rust, justify exception
3. Tech Lead approval required for exceptions
This policy stops technical debt from accumulating again.
[End of Section 45.10]