
The Use-Case Manifesto

Ok, rant incoming. Be advised that this post may contain toxic levels of digression and rambling.

It seems very distant now, but only because the immense weight we've insisted on giving artificial intelligence has accelerated the passage of time for those of us who have chosen (or been forced) to follow the matter more or less closely.

Not so long ago, as I say, I attended an AWS event related to SAP. One of those gatherings where those who offer the technology meet those who are supposed to implement it for real clients with equally real budgets. Amazon Bedrock had just been born, the service had just been enabled in the AWS SDK for SAP ABAP, and we were in the midst of an incipient collective epiphany. We sensed thousands of possibilities, like phosphenes: lights you think you see but can't quite focus on. A bunch of shiny things we knew we wanted to grab, but for which we had no clear purpose.

The interesting part came in the side conversations. I slipped into a few exchanges between AWS people and developers from clients and partners, the mortar that binds the hyperscaler's technology to the solutions that actually get deployed. And what I found was an unexpected honesty: nobody had a clear use case. Both sides, provider and client, acknowledged between the lines that they were there looking for a problem to fit a solution they already had. The event itself was an attempt to pool that absence, to collectively find a justification for a need that existed only because we had imposed it on ourselves.

Doesn't that sound strange? Not just because they were looking for answers among ABAP developers (if anyone has grazed happily and passively inside SAP's walled garden, it's precisely its programmer base, though that seems to be changing), but because of that inversion of terms. Seeking the application for the technology, rather than the other way around. We've suffered hype cycles triggered by the emergence of specific technologies before, but this one has been, and continues to be, especially intense.

Starting with what shines...

I can understand an engineer falling in love with a technology. I can understand it because I'm an engineer who tries to understand things, break them, put them back together, and then understand why they no longer work as expected. The simple act of playing with something you initially struggle to understand is empowering. I can understand, as I say, the part of us that is detached from strategies and KPIs feeling very attracted to shiny things. But engineers reach a point where the intellectual exercise must transcend into something more. That's when the use case turns the game, the conjuring trick, into something that can actually make someone's life easier.

...but without losing sight of what for

Not every scientific discovery finds an application, and those we knew about for years before managing to make them useful rarely had an entire industry (I'd even say an entire economy) behind them fueling the hype. It's very hard to stop, survey the landscape, and bring calm analysis to the current state of AI. The speculative nature inherent in the economy reaches its peak here.

I don't deny the usefulness of this technology, but it's evident that a bubble has formed around it. There are other angles to the problem that concern me: sustainability, the disruption of labor relations, the inequality it may deepen. I won't expand on them here because they deserve separate treatment, but I do want to make clear that the problem isn't only economic. What's truly worrying is AI's orthogonality to every front we already have open: the ease with which it permeates and magnifies problems that already existed.

But this is the zeitgeist. We know that the actors driving this enterprise are not the most reputable, that the global political landscape is hardly ideal for setting limits, and that we've lived through this before. We're probably living through the months before the bursting of a bubble we'll tell our grandchildren about. But most likely, we'll still have to go to work tomorrow. Escapism is for billionaires with rockets; down here, most of us will have to clock in on time, right up until the eve of the apocalypse.

And if what you do for a living is engineering, and on top of that you have to apply AI to real use cases, it's important to know where we stand and, as far as possible, how to minimize the damage.

Crafting your own tools

Resistance is an option if there's an alternative, but there won't be consensus on stopping the use of AI, at least not overnight. There's too much money at stake, and admitting that the emperor has no clothes is going to cost a fortune. The solution, while we wait for the collective penny to drop, is to accept that this is a material we'll have to craft our tools from, as long as it's the right material for the job.

What has worked for me is simple, though written down it feels almost trivial against a mainstream that prefers to deny the obvious: know the technology well before applying it. Test, play, understand its limitations and its costs at every level. If you don't test and understand, you can't evaluate, and if you don't evaluate well, you won't apply well.
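To make that less abstract, here's a minimal sketch of what "test before you apply" can look like: a tiny harness that runs a model over cases whose answers you already know and reports accuracy alongside an estimated cost. Everything in it is an assumption for illustration; `call_model` is a placeholder for whatever provider SDK you actually use, and the cost figure is invented.

```python
# Minimal evaluation harness: run a model over cases with known answers
# and measure accuracy and estimated cost before trusting it with real work.

from dataclasses import dataclass


@dataclass
class Case:
    prompt: str
    expected: str  # the answer you could produce manually


def call_model(prompt: str) -> str:
    # Placeholder so the sketch runs end to end; swap in a real SDK call.
    return "42"


COST_PER_CALL = 0.002  # assumed flat cost in dollars; measure your own


def evaluate(cases: list[Case]) -> None:
    hits = 0
    for case in cases:
        answer = call_model(case.prompt)
        ok = answer.strip().lower() == case.expected.strip().lower()
        hits += ok
        print(f"{'OK  ' if ok else 'MISS'} {case.prompt!r} -> {answer!r}")
    print(f"accuracy: {hits}/{len(cases)}, "
          f"estimated cost: ${COST_PER_CALL * len(cases):.3f}")


if __name__ == "__main__":
    evaluate([
        Case(prompt="What is 6 * 7? Answer with the number only.", expected="42"),
        Case(prompt="Capital of France? One word.", expected="Paris"),
    ])
```

The point isn't the numbers; it's forcing yourself to look at the failures and the bill before the technology looks at your problem.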

I've also learned to assume nothing. We're doing this backwards, seeking utility for a technology almost by imperative, when we usually identify the need before looking for the remedy. The best applications aren't always the most obvious ones, and lateral thinking, along with the happy coincidences that emerge from the most trivial experimentation, is perfect for finding them.

And perhaps most importantly: don't forget that using AI is just another form of automation. We can give it whatever fancy name we like, but regardless of how you reach the final result, that's still what it is. And you should only automate what you already know well: what you could do manually, whose every step you understand, and whose result you could predict knowing the variables at play. Otherwise, what you're automating is your own ignorance.
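As a sketch of the idea (the task and names here are hypothetical, and `classify_with_model` stands in for a real SDK call): if you can't state in code what a valid result looks like, you don't understand the task well enough to automate it.

```python
# Using AI is still automation: before letting the model produce a result,
# you should be able to state what a valid result looks like.

ALLOWED_LABELS = {"billing", "access", "general"}  # hypothetical task


def classify_with_model(ticket: str) -> str:
    # Placeholder for a real model call via your provider's SDK.
    return "billing"


def classify(ticket: str) -> str:
    predicted = classify_with_model(ticket)
    # The check you can only write because you know the task and its variables.
    if predicted not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label {predicted!r}; route to a human")
    return predicted


print(classify("I was charged twice for my subscription"))  # -> billing
```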

There's one last point that keeps nagging at me, perhaps the most uncomfortable one: being aware of the residue you generate. The English term is more than expressive: slop. That slurry of content, code, or decisions you produce when you let the machine work without real supervision. Every interaction with these systems leaves a trace, and taking ownership of that trace—reviewing it, correcting it, discarding it when it's worthless—is a way of staying honest with the craft. It's not just a matter of technical quality; it's a way of not adding to the noise that's already drowning us.
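One concrete way to take ownership of that trace, sketched below under assumed names and fields: keep a local log of every interaction, including whether a human actually signed off on the output, so the residue can be reviewed or discarded instead of quietly piling up.

```python
# Keep the trace: append every model interaction to a local log so it
# can be reviewed, corrected, or discarded instead of quietly accumulating.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("ai_trace.jsonl")  # hypothetical location


def record(prompt: str, output: str, accepted: bool, note: str = "") -> None:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "accepted": accepted,  # did a human actually review and keep this?
        "note": note,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")


record("draft a commit message", "Fix race in retry loop", accepted=True)
```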