While the original poster mentioned Windows 7 specifically, a few others have hinted toward better answers by saying things like "on
modern operating systems..."
There are a lot of computers out there. Only the tiniest minority of computers are able to run systems like Windows. Your computer is made up of hundreds or even thousands of other smaller computers. The RAM chips are self-contained computers. The sound card (these days just a single chip on the motherboard) is usually a collection of self-contained computers. The graphics card likely contains a large number of sub-computers. They come in various types, such as DSPs (digital signal processors), FPGAs (field-programmable gate arrays), and microcontrollers, and they serve many different purposes, such as managing the motors in a drive.
I do notice there's a behavior with memory leaks: if a program creates memory leaks, Windows 7's Task Manager will still show the program's name and process in the list even if the program has already been closed. The same applies to MSBuild for Visual Studio; if the affected program leaves behind tons of unresolved commands, we kill the process in Task Manager.
My guess is that memory leaks are always contained and collected back into the resource pool once the affected process has been killed by Task Manager, and that there is no way to create a memory leak that is persistent and permanently damaging to the RAM in the computer... am I right in this?
Both yes and no.
Windows and other modern operating systems do a great job at virtualizing everything. You get access to a virtual memory space, not physical memory spaces. You get access to a memory buffer used for rendering, not the physical graphics card. You get access to buffers representing space on disk, not the actual physical drive.
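As a rough illustration, here is a minimal C sketch (not tied to any particular OS) of what that virtualization means for an ordinary program: the addresses it sees are virtual addresses handed out by the operating system, not physical RAM locations. Two copies running at once can print the same address without conflicting, and repeated runs typically print different addresses thanks to address space layout randomization.

```c
/* Minimal sketch: every pointer a user program sees is a virtual address.
 * The OS maps these to physical RAM behind the scenes, so the numbers
 * printed here say nothing about where the data physically lives. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int on_stack = 42;
    int *on_heap = malloc(sizeof *on_heap);

    printf("stack variable at %p\n", (void *)&on_stack);
    printf("heap allocation at %p\n", (void *)on_heap);

    free(on_heap);
    return 0;
}
```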
That doesn't mean the actual physical object cannot be accessed, because obviously the operating system and device drivers must access it. However, it does mean that most unprivileged programs will have a very difficult time damaging the hardware. Most "big" computers, meaning things like desktop computers, mainframes, and so on, have had virtualized hardware access for decades. Among the big systems, Windows was a latecomer, only getting there in 1995.
On those big systems, access to various resources is controlled by what are frequently called 'security rings'. The innermost security rings get raw access, the outermost security rings get virtualized access, and things in between get varying degrees of access.
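As a small illustration of how the rings are enforced, consider the C sketch below (assuming an x86-64 machine and GCC or Clang; the exact error varies by operating system). A user-mode program that tries to execute a privileged instruction such as HLT is simply faulted by the CPU and killed by the kernel; it cannot halt the machine.

```c
/* Sketch: a ring-3 (user-mode) process attempting a ring-0 instruction.
 * The CPU raises a general-protection fault, which the kernel reports as
 * a crash (e.g. SIGSEGV on Linux) -- the program dies, the machine doesn't. */
#include <stdio.h>

int main(void)
{
    printf("Attempting HLT from user mode...\n");
    __asm__ volatile ("hlt");   /* privileged instruction: faults here */
    printf("Never reached.\n");
    return 0;
}
```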
At the user-program level, the outermost rings, everything is virtualized by the inner levels, so all the parts are tracked by necessity. When a process at that level dies, it is very easy to reclaim all of its memory: the virtual systems are cleaned up and everything self-destructs cleanly. On the other hand, programs on the innermost ring, often called "ring zero", have direct access to everything. When a ring zero process dies, there isn't anything above it to clean up, short of rebooting the system.
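To see that user-level cleanup in action, here is a deliberately leaky C sketch (illustrative only). While it waits for input, its allocations are visible in Task Manager or top; the instant the process exits, the operating system tears down its virtual address space and returns every page to the pool.

```c
/* Sketch: allocate roughly 500 MiB and never free it. The "leak" only
 * lives as long as the process; the OS reclaims everything at exit. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    for (int i = 0; i < 500; i++) {
        char *block = malloc(1024 * 1024);     /* 1 MiB, intentionally leaked */
        if (block != NULL)
            memset(block, 0xAB, 1024 * 1024);  /* touch it so it is really committed */
    }

    puts("Leaked ~500 MiB; check Task Manager, then press Enter to exit.");
    getchar();

    return 0;   /* process exit: the OS reclaims all of it */
}
```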
It is difficult for a user application to gain access to the inner security rings by accident, but there is a never-ending stream of exploits that attackers can use to elevate their privileges. Once they have that access, the damage they can do is limited only by the physical and software failsafes built into the lower levels.
They say that hackers can cause physical damage to hardware components through software alone. If they are able to create memory leaks that cannot be isolated within a process that has quit, it will be a tough day for researchers.
This is true, but it usually isn't due to memory leaks.
An attacker may be able to run a program at the lowest security levels. By doing so they can bypass protections normally present on the hardware. This might include telling a video card or processor to intentionally overclock itself, perform high-power computations, and shut off its fans. While those operations don't directly harm the hardware, they establish conditions where the hardware is much more likely to fail if additional safeguards are not present. Today's video cards and other components typically shut off the computer when they overheat, but in past years it was more likely for grey-blue smoke to be the first indication of overheating.
But all that is for the "big" computers. The microcontrollers and small components are often much less protected. It is much more difficult, but once an attacker has gained control of a bigger system, they can mount a more targeted attack against a smaller system, such as the microcontroller of a drive, a USB device, or even a nuclear centrifuge. While the big computer's operating system is likely to have virtualized hardware and multiple layers of failsafes, the smaller devices generally have much less protection against damage. An intentionally misprogrammed USB controller could send out very high voltage and short out devices. An intentionally misprogrammed hard drive controller could crash the head or even physically grind away sections of the platter. And rather famously, an intentionally misconfigured centrifuge controller could modify motor speeds and unbalance the load, causing the equipment to wear down quickly and require very expensive and difficult repairs. These are almost universally intentional malicious acts.
Way back in the old days, when there were very few computer vendors, every computer had exactly the same parts, and failsafes were less common, it was much easier to physically damage the hardware, both by opportunity and by less fault-tolerant design. Consider that in the early 1980s there were very few possible configurations of machines; if your office had 8088 machines there was only one vendor and only one hardware configuration. An attack against a specific chip could take down thousands of businesses globally. These days you can buy ten seemingly identical computers yet have each one include slightly different components that are incompatible. That makes it much more difficult -- but obviously not impossible -- for attackers to implement hardware-damaging attacks.
These days it is statistically impossible to accidentally harm the physical computer through something as simple as a memory leak. There may be some ultra-obscure stroke of random chance where a specific memory pattern happens to do something harmful on a unique combination of systems, but that harm is about as likely as the same damage coming from a stray bit of space radiation. The odds are so close to zero that they don't meaningfully exist.