The "Law of Leaky Abstractions" states that:
All non-trivial abstractions, to some degree, are leaky.
What does this mean? In my own words: anything non-trivial cannot be completely hidden behind a layer without some effects of the underlying thing showing through.
The Wikipedia page opens with an example of how TCP tries to abstract away the fact that the network beneath it is unreliable. However, whenever packets are lost, retransmission results in longer latency, which the application layer can observe. The details aren't fully abstracted away, since the application still needs to account for possible delays.
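To make the leak concrete, here is a minimal sketch (the function name and timeout value are illustrative, not from any real codebase): TCP promises reliable delivery, yet the application still has to decide how long it is willing to wait, which is exactly the unreliability leaking through.

```python
import socket

def recv_with_timeout(sock, timeout=0.1):
    # TCP hides packet loss by retransmitting, but the resulting delay
    # leaks through: the caller must still bound its own waiting time.
    sock.settimeout(timeout)
    try:
        return sock.recv(1024)
    except socket.timeout:
        return None  # the "reliable" abstraction leaked a delay

# Simulate a stalled connection with a socketpair: the peer never sends,
# just as a lossy network might stall a real TCP stream.
a, b = socket.socketpair()
print(recv_with_timeout(a))
a.close()
b.close()
```

If the abstraction were truly leak-proof, `recv` could block forever and the application would never need a timeout at all.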
In the real world, many applications are optimised for page loading with the knowledge that the full page may be slow to arrive: they fetch a basic version first and load the remaining content asynchronously with JavaScript. I feel this is an optimization that could only have been made by knowing how the underlying transport layer works.
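The progressive-loading pattern can be sketched with asyncio (the part names and delays below are made up to simulate slow network requests):

```python
import asyncio

async def fetch(part, delay):
    # Stand-in for a network request; in a real page this would be
    # an HTTP fetch whose latency depends on the network.
    await asyncio.sleep(delay)
    return f"<{part}>"

async def load_page():
    # Render the lightweight shell first, then fill in the slow
    # content as it arrives instead of blocking on one big response.
    rendered = [await fetch("shell", 0.01)]
    slow_parts = [fetch("comments", 0.05), fetch("recommendations", 0.05)]
    for coro in asyncio.as_completed(slow_parts):
        rendered.append(await coro)
    return rendered

print(asyncio.run(load_page()))
```

The design choice mirrors the blog's point: the split between "shell" and "slow parts" only makes sense because the developer knows the transport below can be slow.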
I personally feel leaky abstractions are necessary in software development. Nearly all optimizations rest on details of the underlying platform. For example, compilers were invented to abstract the machine away from the developer, so that programs can run on any platform, CPU, or architecture. Yet systems programmers often write architecture-specific code, because the underlying CPU is terrible at doing something generic or excels at doing something specific.
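A common shape for such architecture-specific code is runtime dispatch. This is a hypothetical sketch (the function and path names are invented), but real libraries like glibc and NumPy select SIMD kernels in a similar way, which is only possible by peeking below the "portable" abstraction at the actual CPU:

```python
import platform

def pick_copy_kernel():
    # Hypothetical dispatch: choose a specialised code path when the
    # architecture is known to be good at it, else fall back to a
    # generic implementation that runs anywhere.
    machine = platform.machine().lower()
    if machine in ("x86_64", "amd64"):
        return "sse/avx path"
    if machine.startswith(("arm", "aarch64")):
        return "neon path"
    return "generic path"

print(pick_copy_kernel())
```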
Committing to perfect abstractions means losing the benefits provided by certain underlying implementations that other implementations don't offer.
This blog post is a great read on leaky abstractions.