In this way, you can minimize the additional complexity spent on the problem; instead, you reuse the complexity already invested in the existing abstractions.
The more you know about an abstraction, the more likely you are to be able to solve a problem in terms of it. So, while working on a problem, or while preparing for future problems, try to learn abstractions that seem likely to help.
Choose abstractions that are likely to continue being maintained: ones that are old, widely used, and open source. Such abstractions also tend to be higher quality.
The history of computing has been a history of layering new abstractions on top of old ones. This might be good or bad - it's too soon to tell - but either way, at some point you need to actually use the abstractions you've created, rather than endlessly build more.
Think hard before you create a new abstraction. If you're just solving one problem, you don't need a new abstraction, and probably shouldn't build one; just write a program that solves that one problem. If you do feel that you need to create a new abstraction, introduce as little novelty as possible. Good abstractions are hard to create, so create them rarely.
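A hypothetical illustration (the task is mine, invented for this sketch): if your one problem is counting the lines on standard input, a direct program is all you need; designing a configurable text-processing framework around it would be wasted novelty.

    #include <stdio.h>

    /* One-off program: count lines on stdin. No framework, no
     * options; it solves exactly this one problem. */
    int main(void) {
        long lines = 0;
        int c;
        while ((c = getchar()) != EOF) {
            if (c == '\n')
                lines++;
        }
        printf("%ld\n", lines);
        return 0;
    }

If a second, similar problem shows up later, that may be the moment to consider an abstraction; one problem alone doesn't justify it.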
Remember that you can change your problem to fit existing abstractions. It's more important to save complexity than it is to faithfully solve a problem, because sacrificing simplicity to obtain a solution will lose both solution and simplicity in the end.
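One hypothetical sketch of that move: rather than invent a bespoke container and write a custom sort for it, restate the data as a plain array, so that qsort from the C standard library already solves the problem.

    #include <stdio.h>
    #include <stdlib.h>

    /* Comparator for qsort; written to avoid integer overflow. */
    static int cmp_score(const void *a, const void *b) {
        const int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        /* Restating the data as a flat array lets an existing
         * abstraction (qsort) do the work; no custom sort needed. */
        int scores[] = {42, 7, 19, 3, 88};
        size_t n = sizeof scores / sizeof scores[0];
        qsort(scores, n, sizeof scores[0], cmp_score);
        for (size_t i = 0; i < n; i++)
            printf("%d\n", scores[i]);
        return 0;
    }

The point is not the sort itself but the reshaping: once the data fits the array abstraction, the complexity already invested in the standard library applies for free.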
For an example of these principles in action, see almost anything I've written or done. Two abstractions I particularly like to use are general-purpose programming languages and the Linux syscall API. This principle is the absolute core of all my thinking about programming, and inasmuch as I've done anything novel, I credit this principle with leading me to good ideas.
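As a small, hypothetical taste of the latter: a minimal cat, built directly on the read and write syscalls through their C library wrappers, assuming a Linux or other POSIX environment.

    #include <unistd.h>

    /* Minimal cat: copy stdin to stdout using only the read and
     * write syscalls. Handles partial writes, as the syscall
     * interface requires. */
    int main(void) {
        char buf[4096];  /* buffer size is an arbitrary choice */
        ssize_t n;
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
            ssize_t off = 0;
            while (off < n) {
                ssize_t w = write(STDOUT_FILENO, buf + off, n - off);
                if (w < 0)
                    return 1;
                off += w;
            }
        }
        return n < 0 ? 1 : 0;
    }

Everything here, down to the 4096-byte buffer, is a sketch; the point is that the syscall interface is old, widely used, and stable, which is exactly what the earlier criteria ask for.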