TradeWatch

A Production Support Monitoring & Trade Analytics Pipeline

Tech Stack: Python · MySQL · Bash · Linux · cron · Git · regex · python-dotenv

Overview

TradeWatch is an end-to-end production support simulation based on the tooling used by technology teams in financial services. The system generates realistic trade data and microservice application logs, persists structured records to a relational database, parses and analyzes log output, and orchestrates the entire workflow through scheduled shell scripts. I designed it to mirror the operational patterns of a real production environment with health checks, log rotation, automated reporting, and least-privilege database access, rather than to be a toy project that exercises one skill in isolation.

Problem & Motivation

Production support and DevOps roles in financial services require a working understanding of multiple stacks: Linux for the operating environment, SQL for the data layer, Python for analysis and automation, and shell scripting for orchestration. Existing tutorials tend to silo these skills. I wanted a single project that exercised all of them in a realistic context — one where a query failure, a credential leak, or a misconfigured cron job would have the same consequences they would on a real trading floor. The result is TradeWatch: a self-contained system that you could plausibly drop into a production support runbook.

Architecture

The project is organized into three layers:

Data layer. A normalized MySQL schema with five tables: instruments, traders, trades, log_events, and daily_summary. Trades reference instruments and traders through foreign key constraints. The total_value column on trades is a generated stored column computed from quantity and price. Indexes on trade_date, status, trader_id, log_level, and timestamp support the analytical queries in the reporting layer. A dedicated tradewatch_user MySQL account holds all application privileges, scoped to the tradewatch database alone, following the principle of least privilege rather than relying on root.
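
The generated stored column can be sketched as follows, using SQLite (3.31+) from the standard library as a stand-in for MySQL; the real schema uses MySQL's very similar `GENERATED ALWAYS AS (quantity * price) STORED` syntax, and the table and column names here follow the write-up while the exact DDL is illustrative:

```python
import sqlite3

# SQLite stands in for MySQL here; the generated-column syntax is nearly
# identical. The database, not application code, computes total_value.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instruments (id INTEGER PRIMARY KEY, symbol TEXT NOT NULL);
CREATE TABLE trades (
    id            INTEGER PRIMARY KEY,
    instrument_id INTEGER NOT NULL REFERENCES instruments(id),
    quantity      INTEGER NOT NULL,
    price         REAL NOT NULL,
    total_value   REAL GENERATED ALWAYS AS (quantity * price) STORED
);
CREATE INDEX idx_trades_instrument ON trades(instrument_id);
""")
conn.execute("INSERT INTO instruments (id, symbol) VALUES (1, 'AAPL')")
conn.execute(
    "INSERT INTO trades (instrument_id, quantity, price) VALUES (?, ?, ?)",
    (1, 100, 192.50),
)
total = conn.execute("SELECT total_value FROM trades").fetchone()[0]
print(total)  # 19250.0
```

Keeping the computation in the schema means every writer, including ad hoc inserts, gets a consistent total_value for free.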

Application layer. Four Python modules built around a single db_config module that loads credentials from a .env file via python-dotenv. The trade generator simulates incoming orders with realistic price ranges per instrument and a weighted status distribution that reflects how most trades succeed, but a small percentage fail or cancel. The log generator emits timestamped entries across six microservices (trade-engine, order-gateway, risk-service, market-data-feed, settlement-service, auth-service) at four severity levels. The log analyzer parses these files with a compiled regex pattern, batches inserts into MySQL with transactional rollback, and produces a formatted summary. The trade reporter queries the database for daily volume, top traders, sector breakdowns, and failure rates, then writes the results to stdout and to a timestamped report file.
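
The log analyzer's parsing step can be sketched like this; the exact log format (timestamp, bracketed service name, level, message) is an assumption based on the services and severity levels described above, not the project's literal pattern:

```python
import re

# Compiled once at module load, then applied to every line of the log file.
# The format here is illustrative.
LOG_PATTERN = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<service>[\w-]+)\] "
    r"(?P<level>DEBUG|INFO|WARNING|ERROR) "
    r"(?P<message>.*)$"
)

line = "2026-05-06 09:14:22 [trade-engine] ERROR Order rejected: insufficient margin"
match = LOG_PATTERN.match(line)
event = match.groupdict()
print(event["service"], event["level"])  # trade-engine ERROR
```

Named groups keep the downstream insert code readable: the batch insert can bind `event["ts"]`, `event["service"]`, and so on by name rather than by positional index.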

Automation layer. Four shell scripts coordinated by cron. The health check script verifies that MySQL is reachable, the database is accessible under the application user, and disk and memory thresholds are within bounds. The disk monitor scans all mounted partitions against a configurable threshold. The log rotator gzips and archives any log files older than seven days. The pipeline orchestrator activates the virtual environment, runs the four Python scripts in sequence, and tees the combined output to a timestamped pipeline log. Cron schedules run the full pipeline every weekday morning, the health check every thirty minutes during business hours, the disk monitor hourly, and the log rotator weekly.
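
The log rotator's core can be sketched in a few lines of shell; the directory paths below are illustrative defaults, not the project's actual configuration:

```shell
#!/usr/bin/env bash
# Sketch of the log rotator: gzip and archive log files older than seven
# days. LOG_DIR and ARCHIVE_DIR here are assumed paths for illustration.
set -euo pipefail

LOG_DIR="${LOG_DIR:-./logs}"
ARCHIVE_DIR="${ARCHIVE_DIR:-$LOG_DIR/archive}"
mkdir -p "$ARCHIVE_DIR"

# -mtime +7 selects files last modified more than seven days ago.
find "$LOG_DIR" -maxdepth 1 -name '*.log' -mtime +7 -exec gzip {} \;
find "$LOG_DIR" -maxdepth 1 -name '*.log.gz' -exec mv {} "$ARCHIVE_DIR/" \;
```

Running it from cron weekly keeps the log directory bounded without ever touching the current day's files.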

Engineering Decisions Worth Noting

Secrets management from day one. Credentials live in a .env file that is never committed to version control. A .env.example file documents the required variables for anyone cloning the repository. The connection module fails loudly with a helpful error message if credentials are missing, rather than emitting a cryptic stack trace. This pattern was deliberately chosen because it matches what real-world deployments require, even if the project itself runs locally.
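
The fail-loud check can be sketched as below. In the project, python-dotenv's load_dotenv() populates os.environ from the .env file first; the variable names here are assumptions for illustration:

```python
import os

# Assumed variable names; the real .env.example documents the actual set.
REQUIRED = ("DB_HOST", "DB_USER", "DB_PASSWORD", "DB_NAME")

def get_db_config() -> dict:
    """Return connection settings, or fail with an actionable message."""
    missing = [k for k in REQUIRED if not os.environ.get(k)]
    if missing:
        raise RuntimeError(
            f"Missing database credentials: {', '.join(missing)}. "
            "Copy .env.example to .env and fill in the values."
        )
    # DB_HOST -> "host", DB_USER -> "user", etc.
    return {k.lower().removeprefix("db_"): os.environ[k] for k in REQUIRED}
```

The error names exactly which variables are absent and how to fix them, which is far kinder to the next person cloning the repository than a KeyError buried in a stack trace.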

Reproducibility through virtual environments. All Python dependencies are pinned in requirements.txt. The orchestration script activates the virtual environment automatically before invoking any Python code, so the cron job runs against the same interpreter and library versions as interactive use. Cloning the repository and running pip install -r requirements.txt is enough to reproduce the environment exactly.

Parameterized queries throughout. Every SQL statement that incorporates runtime data uses parameter substitution rather than string concatenation. This eliminates SQL injection as a class of vulnerability and is a habit worth establishing on any project that talks to a database.
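
The pattern looks like this; SQLite from the standard library stands in for MySQL here (with mysql-connector-python the placeholder is %s rather than ?), and the table is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traders (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO traders (name) VALUES (?)", ("O'Brien",))

# The driver binds the value; quotes in the input cannot break out of the
# string literal, so a name like "O'Brien" is handled safely.
name = "O'Brien"
row = conn.execute("SELECT id FROM traders WHERE name = ?", (name,)).fetchone()
print(row)  # (1,)

# By contrast, f"... WHERE name = '{name}'" would be a syntax error here
# and an injection vector in general.
```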

Resilient error handling. Database operations are wrapped in try/except blocks with explicit transaction rollback on failure. The pipeline orchestrator uses set -e so any failure halts the run rather than silently continuing through a broken stage. Log parsing tracks how many lines failed to match the expected format, which surfaces parser drift when log formats change.
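
The insert-with-rollback pattern can be sketched as follows, again with SQLite standing in for MySQL; the point is that a failed batch leaves the table untouched rather than half-written:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE log_events (id INTEGER PRIMARY KEY, level TEXT NOT NULL)"
)

rows = [("INFO",), ("ERROR",), (None,)]  # last row violates NOT NULL
try:
    conn.executemany("INSERT INTO log_events (level) VALUES (?)", rows)
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # undo the partial batch rather than persist half of it

count = conn.execute("SELECT COUNT(*) FROM log_events").fetchone()[0]
print(count)  # 0
```

Without the rollback, the first two rows would linger in an open transaction and the database would hold a partial, misleading batch.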

Sample Output

The trade reporter produces a summary that looks something like this:

=================================================================
  TRADEWATCH TRADE ANALYTICS REPORT
  Period: 2026-04-06 to 2026-05-06
  Generated: 2026-05-06 09:14:22
=================================================================

  OVERVIEW
  Total trades                    1,247
  Total volume          $47,832,915.40
  Avg trade value           $38,358.23
  Buys / Sells               624 /   623

  STATUS BREAKDOWN
  Executed                        1,089
  Pending                            42
  Cancelled                          71
  Failed                             45
  Failure rate                     3.6%

  TOP INSTRUMENTS BY VOLUME
  AAPL     Apple Inc.                  148    $5,142,820.00
  TSLA     Tesla Inc.                  142    $4,891,330.00

The log analyzer produces an ASCII bar chart of which services are generating the most errors, which is exactly the kind of signal a production support analyst would use to identify a misbehaving service before users notice.
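
A chart like that takes only a few lines to produce; the counts below are made up for illustration:

```python
# Error counts per service (illustrative values, not real output).
error_counts = {
    "trade-engine": 31,
    "order-gateway": 12,
    "risk-service": 7,
}

width = max(len(s) for s in error_counts)
peak = max(error_counts.values())
for service, n in sorted(error_counts.items(), key=lambda kv: -kv[1]):
    bar = "#" * round(n / peak * 40)  # scale the longest bar to 40 chars
    print(f"{service:<{width}}  {bar} {n}")
```

Sorting by count puts the noisiest service on top, which is usually the first place to look during an incident.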

What I Learned

Three lessons stood out from building this project. The first is that engineering hygiene (virtual environments, gitignored secrets, dedicated database users, parameterized queries) is not ceremony. Each of these prevented a concrete problem during development. The MySQL setup, for instance, initially failed with a 1698 access-denied error because Ubuntu's default install uses socket auth for root; creating a dedicated application user fixed the problem and produced better architecture as a side effect.

The second is that error handling separates a demo from production-ready code. Every script in TradeWatch had to handle missing credentials, unreachable databases, malformed log lines, and failed inserts. The discipline of writing the failure path before the happy path made the code substantially more reliable.

The third is that wiring layers together teaches you more than studying any one in isolation. Reading the MySQL documentation does not prepare you for handling a connection that times out mid-batch. Reading shell scripting tutorials does not prepare you for activating a Python virtual environment from cron. The interactions between layers are where the real engineering lives.

Next Steps

Future iterations would containerize the system with Docker for portability, add a Prometheus exporter and Grafana dashboard for visual monitoring, integrate Slack or email alerting when the health check fails, build a CI pipeline with GitHub Actions to run linting and tests on every commit, and add a small Flask or FastAPI layer to expose the analytics as a queryable HTTP API. Each of these extensions would map cleanly onto a real production environment without changing the core architecture.

Source code available on GitHub: github.com/huicodes/tradewatch

© 2026 Henry Ike