What did I do in August, September


I am still deciding what I should write here.

Yeah, I forgot to write in August, my bad. Sometimes I forget. But I’m back now.

I’ll also add my thoughts on using AI. Be patient, my friends; this will be a bit longer than usual.


August

I optimised an API. In short, it was about pre-calculation and storage instead of running a huge join each time. This reduced the API response time from 23 seconds to 7 seconds. It’s still a join, but a much smaller one. :P

I spent quite a bit of time on this because it included backfilling data. The backfilling command took a long time to run; I didn’t optimise it because it was a one-time thing.
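The shape of the fix, sketched with an illustrative sqlite schema (table and column names are made up, not the project’s):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, amount INTEGER);
    INSERT INTO events VALUES (1, 10), (1, 20), (2, 5);
    -- Pre-aggregated table the API reads instead of running the big
    -- join/aggregation on every request.
    CREATE TABLE events_summary (user_id INTEGER PRIMARY KEY, total INTEGER);
""")

def backfill():
    # One-off backfill: compute the expensive aggregation once, store it.
    conn.execute("""
        INSERT INTO events_summary (user_id, total)
        SELECT user_id, SUM(amount) FROM events GROUP BY user_id
    """)

backfill()
# The API endpoint now does a cheap lookup (or a much smaller join).
total = conn.execute(
    "SELECT total FROM events_summary WHERE user_id = 1"
).fetchone()[0]
```

In real code the summary table would also be kept fresh on writes, but the backfill-then-read-small shape is the core of it.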

Then I resolved a race condition issue caused by a webhook call. select_for_update worked well. Essentially, I just locked the row I was updating/creating. The problem was that the webhook calls were arriving milliseconds apart, creating two rows with the same data.

We could have added a unique_together constraint at the DB level, but I didn’t want to do that. I’m not that familiar with that part of the system and didn’t want to risk changes at the database level.

What else did I do… hmm.

Yeah, some CI/CD integrations and a bunch of other things. Refactored code, made it modular, deleted a lot of unused code, investigated production issues.

Most of my time went into testing the optimisation. We had no unit tests, and we still don’t. I know that needs to be fixed, but migrations fail every time Django tests run on a fresh DB. I tried fixing it but couldn’t: there’s a migration file that doesn’t align with the actual DB schema. :(
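One workaround I know of, if the broken history can be bypassed rather than repaired: Django’s MIGRATION_MODULES setting accepts None for an app label, which disables that app’s migrations and lets the test runner create tables directly from the current model definitions. A hypothetical test-settings fragment (app labels are made up):

```python
# settings/test.py (hypothetical path): build the test DB straight from
# the current models, skipping the broken migration history entirely.
# Mapping an app label to None in MIGRATION_MODULES disables its
# migrations; the test runner then creates tables from model state.
MIGRATION_MODULES = {
    "core": None,     # app labels here are made up
    "billing": None,
}
```

This only helps if the tests don’t depend on data migrations; the mismatched migration file still needs fixing eventually.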


September

Remember how I said I’d been working on a reporting feature with data export? Well, it ran out of memory (OOM) when selecting a large date range, and then it went OOM again when writing to Excel files.

Testing was brutal. Each large-scale test took 30–40 minutes. Every iteration felt like pulling teeth.

So I rolled out chunking:

  • Fetch data in chunks.

  • Write each chunk to CSV.

  • Run in-memory aggregations.

  • Write final aggregations to file.

  • Re-process the CSV chunk by chunk and export to XLSX.

But I still get OOM errors at different stages. Chunking solved DB-fetching OOMs, but file-writing OOMs persist. I think I’ll have to split the XLSX files into chunks too. And while doing this, I must ensure no unnecessary data stays in memory.
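Splitting the output itself could look something like this sketch, which caps the rows per part file so no single output has to be built entirely in memory (CSV stands in for XLSX here; the limit and helper are illustrative):

```python
import csv, io

def write_in_parts(rows, max_rows_per_file, open_part):
    # Cap rows per part file so no single output file (CSV here, XLSX in
    # the real feature) has to be held or assembled whole in memory.
    part, count, writer = 0, 0, None
    for row in rows:
        if writer is None or count >= max_rows_per_file:
            writer = csv.writer(open_part(part))  # open_part(n) -> file handle
            part, count = part + 1, 0
        writer.writerow(row)
        count += 1
    return part  # number of part files written

parts = {}
def open_part(n):
    parts[n] = io.StringIO()
    return parts[n]

n_parts = write_in_parts([[i] for i in range(5)], 2, open_part)
```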

I also built some new APIs and made changes to existing features, like ingesting new data from a webhook.

Right now, I’m working on caching for another project. It currently has no cache, and a lot of queries hit the DB unnecessarily. Cache invalidation requirements led us to build a custom cache layer because we want invalidation triggered by changes in DB tables.
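A minimal sketch of the idea, assuming table-scoped cache keys (in the real system the invalidate hook would be wired to DB writes, e.g. via model signals or triggers; all names here are hypothetical):

```python
import collections

class TableCache:
    # Table-scoped cache: each cached result is filed under the DB table
    # it derives from, so a write to that table can drop exactly those keys.
    def __init__(self):
        self._store = collections.defaultdict(dict)

    def get_or_compute(self, table, key, compute):
        cache = self._store[table]
        if key not in cache:
            cache[key] = compute()     # the DB is hit only on a cache miss
        return cache[key]

    def invalidate(self, table):
        # Call on any change to `table` (e.g. from a post_save handler).
        self._store.pop(table, None)

cache = TableCache()
calls = []
def expensive_query():
    calls.append(1)                    # stands in for a real DB query
    return 42

first = cache.get_or_compute("orders", "monthly_total", expensive_query)
second = cache.get_or_compute("orders", "monthly_total", expensive_query)
cache.invalidate("orders")
third = cache.get_or_compute("orders", "monthly_total", expensive_query)
```

Scoping invalidation by table is coarse but safe: a write never leaves a stale result behind, at the cost of some unnecessary recomputation.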

We also observed a lot of APIs making N+1 queries to the DB. We’re working on fixing them. We’re using django-silk to monitor all this behavior. Once your codebase grows large, observability becomes essential. Otherwise, things go haywire and you only notice at the last moment. You have to be proactive.
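For illustration, here is the query-count difference an N+1 fix targets, shown with plain sqlite3 (the schema is made up; in Django the joined version is roughly what select_related produces):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO book VALUES (1, 1, 't1'), (2, 1, 't2'), (3, 2, 't3');
""")

# N+1 shape: one query for the books, then one more per book for its author.
books = conn.execute("SELECT author_id, title FROM book").fetchall()
queries = 1
for author_id, _title in books:
    conn.execute("SELECT name FROM author WHERE id = ?", (author_id,)).fetchone()
    queries += 1

# Fixed shape: a single join fetches the same data in one round trip.
joined = conn.execute("""
    SELECT book.title, author.name
    FROM book JOIN author ON author.id = book.author_id
    ORDER BY book.id
""").fetchall()
```

With three books the N+1 version already costs four queries; with thousands of rows that gap is what a tool like django-silk makes visible.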


Thoughts on AI

I use multiple LLMs to write and fix code. One thing I’ve learned: you need tests. Without them, things become very difficult. The time from dev to “dev complete” has decreased, but the bottleneck has now moved to code reviews. After deploying for testing, multiple edge-case bugs show up, creating open loops and taking time to resolve.

Do I believe the cycle has become faster overall? Yes. Do I have data to prove it? Well, if I did, I’d be writing a paper on it. :P

AI does need supervision to drive it toward the correct answers or goals. Sometimes it starts “chasing its tail”, which isn’t great for us. There’s no single process for getting the best answers. Tech problems are usually open-ended and full of trade-offs, and AI can’t decide which trade-offs are acceptable.

I believe in developer liberty. Devs should be allowed to use whatever tools make them more productive and make their lives easier.

Context engineering is important, but sometimes magic happens, so you need to be open to that too. Make sure you provide the right amount of context, not too much and not too little. Otherwise, the LLM may consider factors that don’t really affect functionality. Be clear about your goal, but not too rigid about the approach.

There’s no “magic prompt.” Most of it is you talking to the LLM and working toward a solution. Treat it as your co-programmer, but don’t surrender your decision-making. Maintain your critical thinking skills. Don’t be fooled by the LLM; cross-check things whenever you’re suspicious.

If you have a very specific task, then no context is needed. For example: “Please extract the highlighted sentence from this screenshot.”

Some prompts that sometimes help: “please help,” “please fix this or my boss will fire me,” “my career depends on solving this issue,” “the deadline is closing in on me,” etc.

I don’t know what else to say. There’s no secret sauce. I don’t use autocomplete tools or AI editors like Cursor. Does that put me at a disadvantage? I don’t think so. I’m fine with the chat interface because I need to converse while working on a problem. Also, somehow talking to LLMs doesn’t count as social interaction; it doesn’t tire my mind.

I use all the LLMs I can, gather their code and approaches, and then choose whichever feels best to me.

I’ll wrap it up now. I think that’s all.

We will meet later.

Cheers.

A comment from a reader:

How are you handling the test migrations? We handle this by using the DB schema and dump we get from the DB; migrations are maintained in a SQL file, so they are copied and hence kept updated.

Vivek's Tryst with Tech

builder of APIs, reducer of latencies, optimiser of code, fixer of issues
Backend Engineer | On my way to become 10x engineer