The linked article can probably be best summed up by this line in the middle of their page: "Please don’t put non-technical managers in charge of software developers." Having a disconnect between management and workers is always a problem. You ALWAYS need somebody in the middle who can speak both the management language and the worker language. In construction you often have a "working foreman" who is both a construction worker who knows the crew's needs and a member of management who can talk about costs and planning. In software it is best to have a project manager who can coordinate schedule milestones for the team and also coordinate business release dates and corporate deadlines. In both cases, these people must be smart enough not to confuse estimates with final delivery dates.
I think its benefits are overrated. Agile methods work really well in small teams and often in theory, but once the projects, and especially the environment in which a project lives, get larger, it gets really complicated really quickly and the axioms under which the project needs to run start to collapse.
That is true if you are looking from today's perspective.
That is less true if you look at it from a historical perspective. I cannot imagine any modern large software project being implemented using the methodologies of the 1970s. I am absolutely certain you could not create modern infrastructure using methodologies from the 1950s and 1960s.
Trying to get what are now called agile practices accepted by management was a big thing. It was transformative to management.
Before about 1995, there was a huge disconnect between what the software development groups wanted to do and what business executives wanted to do. In many (but not all) of the big established businesses, the corporate leadership had roots in older styles of development.
Many business executives in the 1980s and early 1990s had learned software development from the patterns of the 1950s and 1960s. During that era computer time was far more expensive than human time, so it was cheaper to have a system of expert programmers who carefully built the entire design on paper, and only after the paper-based design was completely worked out did they go about encoding the software for the machine.
Of course, systems were much simpler back then. The full designs for major operating systems were small books of 200-300 pages. Those few hundred pages documented every function, its parameters and results, and the precise tasks the function would perform.
Since that is what the management was familiar with, that was the process they used. Specify a design, have experts build every detail of the design, re-approve the design, then have the grunts encode the design. It works for building architecture and manufacturing very nicely.
The major methods that management was willing to accept were the full-blown 'waterfall' method and iterative mini-waterfall methods. There were prototypes (basically two waterfalls), spiral development (lots of little waterfalls) and the Big Design Up Front mega-waterfall. Some had more iterations than others, but the methodology was basically the same. Management stated what they wanted, experts built the blueprints that specified everything, management approved the blueprints, then the grunts implemented it.
The ideas covered in the linked-to book and several others were good ideas that were practiced at some places, but the concept that you could continuously change direction mid-project was fought hard by some management groups.
Many of the early ideas have seen modification, but overall each of them persists.

Pair programming was thought to give instant feedback, but it is expensive. In businesses where the risk is high and human costs are relatively low, pair programming makes a lot of sense. In government work in particular, where paperwork and verification plans are often 4x the work of the actual coding, having two or more people write the code together works well. For games a much shorter buddy check can also suffice.

Automated unit tests are a part of agile, where the tests can detect errors within minutes of the error being introduced. When the cost of writing a module is 100%, the cost of writing the module plus unit tests for it is roughly 180%. The benefit of unit tests comes on the long tail of maintenance and support, when new code and modifications can be instantly regressed against an entire code base (a minimal sketch of such a test follows below). For most games on an annual or bi-annual release cycle the cost of unit tests for game code usually does not quite reach the point of cost effectiveness; it is cheaper to buy a bunch of QA hours than to pay the extra roughly 80% to co-develop automated tests. For persistent systems like game servers and engines unit tests can make sense, and for small libraries they nearly always make strong business sense, but for game code the practice usually doesn't pay off. Some libraries have a long development tail and are shared company-wide, where bugs can impact many teams, so unit tests for shared corporate libraries make a lot of economic sense.

Scrum meetings, 2-3 week sprints, acceptance tests, owner signoff of specific features as they are completed: they all grew out of a common movement.
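As a rough illustration of that regression benefit, here is a minimal sketch of such a unit test in Python using the standard unittest module. The apply_damage function and its rules are hypothetical stand-ins for a small shared-library routine, not anything from the article; the point is only that once tests like these exist, any later change to the module can be checked in seconds rather than waiting on a manual QA pass.

```python
import unittest

def apply_damage(health, damage, armor=0):
    """Hypothetical shared-library routine: reduce health by damage,
    mitigated by armor, never dropping below zero."""
    if damage < 0 or armor < 0:
        raise ValueError("damage and armor must be non-negative")
    effective = max(damage - armor, 0)
    return max(health - effective, 0)

class ApplyDamageTests(unittest.TestCase):
    def test_armor_reduces_damage(self):
        self.assertEqual(apply_damage(100, 30, armor=10), 80)

    def test_health_never_goes_negative(self):
        self.assertEqual(apply_damage(5, 30), 0)

    def test_negative_damage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_damage(100, -5)

if __name__ == "__main__":
    unittest.main()  # any regression in apply_damage fails here within seconds
```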
It took about ten years or so for most industries to convert. Some industries are still slowly moving over.
The idea that programmers could be given a vague direction and a software API would naturally emerge can feel very risky to a business-oriented mind that is used to specifying policy and procedure and having the peons follow it. When management teams are used to building a perfect blueprint to verify the requirements, moving to a system that specifies only the verification requirements, and not the blueprints, feels risky at first. Blueprints encode the implied details that were not specified. It can take a while to learn how to specify all the important details and to leave the unimportant details unspecified. Many management teams still resist the change.
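To make that contrast concrete, here is a minimal sketch, again in Python, of what specifying only the verification requirements can look like: an executable acceptance check that pins down what the finished feature must do while saying nothing about how it is built. The price_order interface, the SAVE10 coupon, and the discount rule are all hypothetical examples, not anything from the article.

```python
from decimal import Decimal

def acceptance_order_discount(price_order):
    """Acceptance check: price_order(items, coupon) is whatever the team
    ends up building; only the externally visible behaviour is specified."""
    # Orders over 100.00 with the SAVE10 coupon get 10% off.
    assert price_order([Decimal("60"), Decimal("50")], "SAVE10") == Decimal("99.00")
    # Orders at or under 100.00 get no discount, coupon or not.
    assert price_order([Decimal("40")], "SAVE10") == Decimal("40")
    # Unknown coupons never change the total.
    assert price_order([Decimal("200")], "BOGUS") == Decimal("200")

if __name__ == "__main__":
    # Any implementation the developers choose can be dropped in here.
    def naive_pricing(items, coupon):
        total = sum(items, Decimal("0"))
        if coupon == "SAVE10" and total > Decimal("100"):
            total *= Decimal("0.9")
        return total

    acceptance_order_discount(naive_pricing)
    print("acceptance check passed")
```

The check says nothing about classes, databases, or module layout; those are the "unimportant details" left to the team, while the important details are the behaviours the business actually needs verified.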