It's a classic dilemma. The product owner needs to add or change something: a new requirement lands in an already packed schedule. How do you implement it, with what algorithms and on what building blocks, and how convoluted will the resulting contraption have to be? Can you get away with minimal effort, what should you aim for, and when will the lead developers finally be free to do it properly?
A typical example: an almost finished application now needs extra data-driven functionality beyond what was planned at the start. Say, the personal account must now show the dynamics of the average price of a product over three months instead of one, and this is a critical requirement. The database is already designed, partitioned and optimized for other queries, so a straightforward SQL query responds in 5 seconds instead of the 50 ms expected of an automated workstation (AWP), and the aggregation itself can be genuinely complex. Or the data from an external information system starts arriving as much more complex JSON and everything slows to a crawl. Or the partners suddenly can't make their release dates, and you urgently need to support the format they have now, switching to the new one only once they manage to finish it.
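To make the first scenario concrete, here is a minimal sketch of the idea: the heavy aggregation runs offline in pandas and the result lands in a small side table that the UI can read in tens of milliseconds, while the base schema stays untouched. All table and column names (prices, observed_at, price_dynamics_3m) are invented for illustration and are not from the original system.

```python
# A hedged sketch, assuming a hypothetical `prices` table
# (product_id, observed_at, price) in PostgreSQL.
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("postgresql://app:app@db/appdb")

def precompute_rolling_avg() -> None:
    """Run the slow aggregation offline and cache the result
    in a small side table that the UI reads directly."""
    df = pd.read_sql(
        "SELECT product_id, observed_at::date AS day, price FROM prices",
        engine,
    )
    # Collapse raw observations to one average price per product per day.
    daily = (
        df.groupby(["product_id", "day"], as_index=False)["price"].mean()
          .sort_values(["product_id", "day"])
    )
    # 90-day (roughly 3-month) rolling mean, computed per product.
    daily["avg_3m"] = (
        daily.groupby("product_id")["price"]
             .transform(lambda s: s.rolling(90, min_periods=1).mean())
    )
    # Overwrite the cache table on each run; the base schema is untouched.
    daily.to_sql("price_dynamics_3m", engine, if_exists="replace", index=False)
```

Run on a schedule (cron, Airflow, whatever is already at hand), this turns the 5-second interactive query into a simple lookup against a tiny precomputed table.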
You have a release plan, and now you have to bolt on another cache layer, change table structures, swap parsing technology, and so on; all of this hits the stability of the application's core hard. Patches and duct tape, applied in a frantic rush by whatever means come to hand first, then get buried under new cultural layers and become an integral part of the product's architecture.
Using the data science stack as a dam in the data flow makes it fairly easy to:
localize and isolate the point of change (see scenario 2) and minimize the impact of the changed requirements on the architecture of the base product (a sketch of such an adapter follows this list);
run quick local experiments on candidate solutions, measure their resource consumption and performance, and pick the minimal sufficient one;
buy a significant amount of time for a systematic solution of the issue (if it is still relevant by then) within one of the next software releases.
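As an illustration of the first point, here is a hedged sketch of such a dam for the partner-format scenario: one small adapter absorbs the difference between the format the partner sends today and the promised new one, so the base product only ever sees the internal shape. All field names, including the marker used to tell the formats apart, are assumptions made up for the example.

```python
# A minimal sketch of localizing the point of change: every partner
# format quirk is absorbed in this one function. Field names are
# illustrative, not from the original article.
from typing import Any

def normalize_order(payload: dict[str, Any]) -> dict[str, Any]:
    """Convert either the partner's current (legacy) format or the
    promised new one into a single internal representation."""
    if "orderId" in payload:  # assumed marker of the new format
        return {
            "id": payload["orderId"],
            "total": payload["amount"]["value"],
            "currency": payload["amount"]["currency"],
        }
    # Legacy flat format the partner still sends today.
    return {
        "id": payload["order_id"],
        "total": payload["total"],
        "currency": payload.get("currency", "RUB"),
    }
```

When the partner finally ships the new format, the only code that changes is this adapter; the rest of the product, and its release plan, never notice.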