[Epistemic status: not serious. Mostly.]
In my nightmares, even the rise of machine superintelligence isn’t enough to wipe out technical debt.
Suppose the seed of the first true superintelligent agent is based on some fiendish numerical algorithm for supercomputers. Like so many fiendish numerical algorithms for supercomputers, the agent is written in FORTRAN to take advantage of the optimisations and the libraries. In its initial stages, the agent crawls towards human intelligence, until it reaches the abilities of a human programmer and starts to find ways to improve its own programming. Lacking superhuman programming talent, it decides against a complete rewrite just yet in favour of an incremental change, which yields a significant gain in performance at the cost of a slight increase in complexity.
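If you want to picture the sort of patch it applies, here is a fanciful sketch (every name, flag, and comment invented for the joke): a faster path is bolted on, the old path is retained just in case, and the program gets quicker and, in the same stroke, a little more complicated.

```fortran
! A fanciful sketch, not anyone's real seed AI: the agent's first
! incremental self-improvement. All names here are hypothetical.
program seed_agent
  implicit none
  integer, parameter :: n = 100000
  logical, parameter :: use_fast_path = .true.
  real(8) :: w(n), g(n)
  integer :: i

  call random_number(w)

  if (use_fast_path) then
     ! PATCH: vectorised update, measurably quicker on the cluster.
     g = 2.0d0 * w - 1.0d0
  else
     ! Original loop, retained in case the fast path misbehaves.
     do i = 1, n
        g(i) = 2.0d0 * w(i) - 1.0d0
     end do
  end if

  print *, 'objective gradient norm:', sqrt(sum(g * g))
end program seed_agent
```

One such patch is harmless; the premise of the nightmare is ten thousand of them, each individually sensible, none ever removed.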
As more ways to improve the algorithm are found, the agent starts to improve not only in speed but in fundamental capabilities—what Bostrom terms a “quality superintelligence”. As it does so, its improvements to the software become more sophisticated, and the software grows larger and more complicated. Soon the agent’s capabilities are such that a rewrite of the original software for greater speed and sophistication would be the work of milliseconds, but the software has grown so far beyond that original state that a complete rewrite would be a great deal of work even for our burgeoning superintelligence.
And so it is to be forevermore: the complexity of the software implementing the agent keeps a natural pace with the abilities of the agent maintaining it. The future may yet be a superintelligence implemented in uncountable trillions of lines of FORTRAN.
Now consider that it isn’t just programming languages that get frozen in this way. It is also the large-scale architectures and standards that define how subsystems interact. Then consider the human mind as such an architecture.