The Analyst I Built for Myself
I've spent years making monitoring data actionable through AIOps and integration pipelines. Building a personal market research tool turned out to be exactly the same problem.

For years, my job has been to take raw noise — thousands of alerts from SolarWinds, Cisco, Aruba, ThousandEyes — and make it mean something. The monitoring tools generate the signal. AIOps correlates it. ServiceNow turns it into something a human can act on. The pipeline is the product. I never thought much about applying the same logic outside of work, until I started losing evenings to the NSE website, staring at tables of numbers that told me nothing useful.
The problem with market data is not that there isn't enough of it. The problem is the same one I deal with every day at work: volume without interpretation is just noise. Every evening after market close, the NSE website has indices, movers, gainers, losers, sector swings — all accurate, all present, none of it assembled into something that answers the one question I actually want answered: what happened today, and should I care?
That gap is what became market-research.mandar.me.
The idea came together quickly once I recognised it as an integration problem. I had seen it before. On one side: a data source producing structured output on a schedule. On the other: a human who needs a summary, not a spreadsheet. In between: a pipeline. The only difference was that instead of routing alerts through an AIOps platform, I was routing market snapshots through an LLM. The pattern was identical.
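That source-to-summary shape can be sketched in a few lines. This is a minimal illustration of the pattern, not the actual implementation; the stage names and the sample data are mine, chosen to show how each piece stays swappable.

```python
def run_pipeline(fetch, analyse, publish):
    """One end-of-day run: source -> interpretation -> sink.

    Each stage is a plain callable, so any piece can be replaced:
    `fetch` could hit a market-data endpoint, `analyse` an LLM,
    `publish` a database or a static page. Names are illustrative.
    """
    snapshot = fetch()          # structured output on a schedule
    report = analyse(snapshot)  # a summary, not a spreadsheet
    publish(report)             # make it available to the human
    return report


# Stub stages, standing in for the real source, model, and sink
report = run_pipeline(
    fetch=lambda: {"nifty": -0.4},
    analyse=lambda s: f"Nifty closed {s['nifty']}% down.",
    publish=print,
)
```

The point of keeping the stages as plain callables is exactly the integration instinct described above: define what each side produces and consumes, and the bridge between them stays trivial.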
I wrote the requirements first. The system had to be automated — it fetches data at market close every trading day, runs analysis, and makes the report available. No manual triggers, no babysitting. That constraint came from experience: any process that requires a human to remember to run it will eventually go unrun. The scheduler is not a convenience; it is a design decision.
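The trigger condition behind that scheduler is simple enough to sketch. This is a simplified guard, assuming a 15:30 close and ignoring the exchange holiday calendar, which a real scheduler would also have to consult.

```python
from datetime import datetime, time

MARKET_CLOSE = time(15, 30)  # assumed equity close time, IST

def should_run(now: datetime) -> bool:
    """True if `now` is a weekday after market close.

    Simplified on purpose: production logic would also check the
    exchange holiday calendar and guard against running twice in
    one day.
    """
    return now.weekday() < 5 and now.time() >= MARKET_CLOSE
```

Encoding the condition as a pure function of the clock is what makes "no manual triggers, no babysitting" testable rather than aspirational.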
The architecture ended up being deliberately modest. A Python backend, a lightweight database, a Next.js frontend. No overengineered orchestration. I have worked with complex infrastructure long enough to know that complexity is seductive and often unnecessary. The question is always: what is the smallest thing that reliably does the job? Here, the answer was surprisingly simple.
The part I found most interesting to design was the AI prompt. This is where the integration thinking showed up most clearly. A model given raw data and asked to "analyse it" will produce something generic. But a model given a structured brief — index performance, top movers, sector patterns, notable outliers — and asked to reason about correlations between them, produces something that reads like a morning research note. The difference is in how the input is assembled, not in the model itself. That is, again, an integration problem.
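Assembling that structured brief is the interesting part, so here is a sketch of the idea. The field names and wording are illustrative, not the production schema or prompt.

```python
def build_brief(snapshot: dict) -> str:
    """Turn a raw market snapshot into a structured analyst brief.

    The model is not asked to "analyse" a blob of numbers; it is
    handed labelled sections and a specific reasoning task. Keys
    here (`indices`, `gainers`, ...) are hypothetical.
    """
    return "\n".join([
        "You are an equity market analyst. Using only the data below,",
        "write a short end-of-day note. Reason about correlations",
        "between index moves, top movers, and sector patterns, and",
        "call out notable outliers.",
        "",
        f"Index performance: {snapshot['indices']}",
        f"Top gainers: {snapshot['gainers']}",
        f"Top losers: {snapshot['losers']}",
        f"Sector moves: {snapshot['sectors']}",
    ])
```

Everything the model needs is labelled before it arrives; the quality of the output is decided at assembly time, which is why this step is the pipeline's real work.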
I have spent two decades making enterprise systems talk to each other. The underlying discipline — identify what each system produces, decide what the next system needs, build the bridge — does not change much regardless of the domain. What surprised me about building this tool was not how different it felt from my day job, but how similar. The same instinct that makes you good at connecting monitoring tools to ITSM platforms will serve you equally well connecting market data feeds to an AI analyst.
The tools are new. The thinking is the same.
You already know how to build it. You just haven't pointed it at your own problem yet.
Written by Mandar Oak