Friday, September 9, 2022

Application Framework Stabilization


It's a classic dilemma. The product owner needs something added or changed: a new requirement lands in an already packed schedule. How do you implement it, with which algorithms, on which data cubes, and how elaborate will the resulting contraption have to be? Can you get away with minimal effort, and what should the lead developers take on first once they are free?


A typical example: an almost finished application needs new data-driven functionality beyond what was planned at the start. Say, the personal account must now show the dynamics of a product's average price over three months instead of one, and this is a critical requirement. The database was designed, partitioned and optimized for other queries, so a straightforward SQL query responds in 5 seconds instead of the 50 ms expected of an interactive workstation (the aggregation can be quite involved). Or the data from an external IS now arrives in much more complex JSON and everything starts to sag badly. Or partners suddenly can't make their releases on time, so you urgently need to support the format they have now and switch to the new one only once they manage to finish it.
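One way out of the three-month-average scenario is to precompute the heavy aggregation offline and let the user-facing endpoint read a tiny cache instead of running the query live. A minimal sketch, assuming daily price rows; the function and field names here are illustrative, not from any real product:

```python
from collections import defaultdict
from datetime import date, timedelta

def build_price_cache(rows, window_days=90):
    """rows: iterable of (product_id, day, price) tuples at daily granularity.
    Returns {(product_id, day): trailing average price over window_days}.
    A nightly job would write this out to a small cache table."""
    by_product = defaultdict(list)
    for product_id, day, price in rows:
        by_product[product_id].append((day, price))

    cache = {}
    for product_id, series in by_product.items():
        series.sort()  # chronological order
        for i, (day, _) in enumerate(series):
            lo = day - timedelta(days=window_days)
            # prices within the trailing ~3-month window, up to this day
            window = [p for d, p in series[: i + 1] if d > lo]
            cache[(product_id, day)] = sum(window) / len(window)
    return cache
```

The personal-account endpoint then does a single indexed lookup into the cache table, comfortably within a 50 ms budget, while the real fix (if still needed) waits for a proper release.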

You have a release plan, and yet now you must bolt on another cache layer, change table structures, swap parsing technologies, and so on; all of this erodes the stability of the application framework. Crutches and duct tape applied in a frantic rush are, as a rule, done in the first way that comes to hand, then buried under new cultural layers and become an integral part of the product architecture.


Using the data science stack as a dam in the data flow makes it fairly easy to:


localize and isolate the point of change (see scenario 2) and minimize the impact of the changed requirements on the architecture of the base product;

conduct quick local research on candidate solutions, comparing their resource consumption and performance, and choose the minimum sufficient one;

buy significant time for a systematic solution of the issue (if it remains relevant) within the next software release.

Development (dev) and data science in the enterprise

No one except the developers and the product owner knows in detail how the product actually works, how it should work, which operations are business-relevant and which are purely technological. That means they hold all the cards; the monitoring service certainly can't match that level of detail. Adding business information about each transaction to the business log, including:


user information;

information on the basket / personal account;

the content of responses from third-party systems;

the content of responses from the database;

and so on, gives you a complete picture of every process taking place in the software, directly in the production environment, and that is worth a lot. In effect, you get software monitoring and full-fledged business intelligence in one package.
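Such a business log is easiest to consume downstream if each transaction emits one structured line. A minimal sketch; the field names (user_id, basket, and so on) mirror the list above but are illustrative, to be adapted to your product's domain:

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("business")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_business_event(operation, user_id, basket=None,
                       external_response=None, db_response=None):
    """Emit one JSON line per business transaction and return the event dict."""
    event = {
        "ts_ms": int(time.time() * 1000),   # millisecond timestamp
        "event_id": str(uuid.uuid4()),       # unique id for cross-referencing
        "operation": operation,
        "user_id": user_id,
        "basket": basket,
        "external_response": external_response,
        "db_response": db_response,
    }
    logger.info(json.dumps(event, ensure_ascii=False))
    return event

event = log_business_event("checkout", user_id=42,
                           basket={"items": 3, "total": 99.90})
```

Because every line is valid JSON with a fixed schema, the same log feeds both operational monitoring and BI queries without a separate export pipeline.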


The funny thing is that with this approach you additionally get monitoring of the corporate services and microservices the software depends on. Detailed logging of calls to external systems gives a complete breakdown of both response times and errors, and can also serve as the basis for claims against those responsible for these external ISs. It is hard to argue when every move is recorded and accounted for to the millisecond and the conversation is framed in terms of financial losses.
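Capturing that millisecond-level evidence can be done with a small wrapper around every external call. A hedged sketch; the decorator, the partner system name, and the stand-in fetch function are all hypothetical:

```python
import functools
import time

call_log = []  # in production these records would go to the business log

def track_external_call(system_name):
    """Decorator: record latency and outcome of each call to an external IS."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            except Exception:
                status = "error"
                raise
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                call_log.append({"system": system_name,
                                 "status": status,
                                 "elapsed_ms": round(elapsed_ms, 3)})
        return wrapper
    return decorator

@track_external_call("partner-api")
def fetch_partner_prices():
    time.sleep(0.01)  # stand-in for a real HTTP call to the partner IS
    return {"prices": []}

fetch_partner_prices()
```

Aggregating these records by system and status gives exactly the response-time and error breakdown described above, with timestamps precise enough to support a claims discussion.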
